Subject: Editorial | July 28, 2016 - 01:03 PM | Ryan Shrout
Tagged: XSPC, wings, windows 10, VR, video, titan x, tegra, Silverstone, sapphire, rx 480, Raystorm, RapidSpar, radeon pro ssg, quadro, px1, podcast, p6000, p5000, nvidia, nintendo nx, MX300, gp102, evga, dg-87, crucial, angelbird
PC Perspective Podcast #410 - 07/28/2016
Join us this week as we discuss the new Pascal based Titan X, an AMD graphics card with 1TB of SSD storage on-board, data recovery with RapidSpar and more!!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Allyn Malventano, Sebastian Peak, and Josh Walrath
First, Some Background
NVIDIA's Rumored GP102
When GP100 was announced, Josh and I discussed internally how it would make sense for the gaming market. Recently, an article on WCCFTech cited anonymous sources, which should always be taken with a grain of salt, claiming that NVIDIA was planning a second chip, GP102, positioned between GP104 and GP100. As I was writing this editorial, relating the rumor to our own speculation about the physics of Pascal, VideoCardz claimed to have been contacted by the developers of AIDA64, seemingly on the record, also citing a GP102 design.
I will retell chunks of the rumor, but also add my opinion to it.
In the last few generations, each architecture had a flagship chip that was released in both gaming and professional SKUs. Neither audience had access to a chip larger than the other's largest of that generation. Clock rates and disabled portions varied by specific product, with gaming usually getting the more aggressive tuning for slightly better benchmarks. Fermi had GF100/GF110, Kepler had GK110/GK210, and Maxwell had GM200. Each of these was available in Tesla, Quadro, and GeForce cards, including the Titans.
Maxwell was interesting, though. NVIDIA was unable to leave 28nm, the node Kepler launched on, so they created a second architecture there. To increase performance without access to greater feature density, you need to make your designs bigger, more optimized, or simpler. GM200 was giant and optimized but, to hit the performance levels it achieved, also needed to be simpler. Something had to go, and double-precision (FP64) performance was the big omission. NVIDIA was upfront about it at the Titan X launch and told their GPU compute customers to keep purchasing Kepler if they valued FP64.
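To put that tradeoff in rough numbers, here is a quick sketch in Python. The shader count and base clock for GM200 (Titan X) and the 1/32 FP64 rate are the commonly published figures; the helper name is mine, so treat this as illustrative rather than a vendor formula:

```python
def peak_tflops(shaders, clock_mhz, fp64_ratio):
    """Peak FP32 throughput assuming one fused multiply-add (2 FLOPs)
    per shader per clock; FP64 runs at a fixed fraction of that rate."""
    fp32 = shaders * 2 * clock_mhz * 1e6 / 1e12
    return fp32, fp32 * fp64_ratio

# GM200 (Titan X): 3072 shaders at a 1000 MHz base clock, FP64 at 1/32 rate
fp32, fp64 = peak_tflops(3072, 1000, 1 / 32)
print(f"GM200: {fp32:.3f} TFLOPS FP32, {fp64:.3f} TFLOPS FP64")
```

That works out to roughly 6.1 TFLOPS of FP32 but under 0.2 TFLOPS of FP64, versus the roughly 1.3-1.5 TFLOPS of FP64 that GK110's 1/3-rate design could deliver, which is exactly why NVIDIA pointed compute customers back at Kepler.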
Subject: Editorial, Graphics Cards | May 18, 2016 - 01:18 PM | Tim Verry
Tagged: rumor, Polaris, opinion, HDMI 2.0, gpu, gddr5x, GDDR5, GCN, amd, 4k
While NVIDIA's Pascal has held the spotlight in the news recently, it is not the only new GPU architecture debuting this year. AMD will soon bring its Polaris-based graphics cards to market for notebooks and mainstream desktop users. While several different code names have been thrown around for these new chips, they are generally referred to as Polaris 10 and Polaris 11. AMD's Raja Koduri stated in an interview with PC Perspective that the numbers in the naming scheme hold no special significance, but that Polaris will eventually be used across the entire performance lineup (low end to high end graphics).
Naturally, there will be many rumors and leaks as the launch gets closer. In fact, Tech Power Up recently came across a number of interesting details about AMD's plans for Polaris-based graphics in 2016, including specifications and which areas of the market each chip will target.
Citing the usual "industry sources" familiar with the matter (take that for what it's worth, though the specifications do not seem out of the realm of possibility), Tech Power Up revealed that two lines of Polaris-based GPUs will be made available this year. Polaris 10 will allegedly serve as the mid-range (mainstream) graphics option for desktops as well as the basis for high end gaming notebook graphics chips. Polaris 11, on the other hand, will reportedly be a smaller chip aimed at thin-and-light notebooks and mainstream laptops.
Now, for the juicy bits of the leak: the rumored specifications!
AMD's "Polaris 10" GPU will feature 32 compute units (CUs) which TPU estimates – based on the assumption that each CU still contains 64 shaders on Polaris – works out to 2,048 shaders. The GPU further features a 256-bit memory interface along with a memory controller supporting GDDR5 and GDDR5X (though not at the same time heh). This would leave room for cheaper Polaris 10 derived products with less than 32 CUs and/or cheaper GDDR5 memory. Graphics cards would have as much as 8GB of memory initially clocked at 7 Gbps. Reportedly, the full 32 CU GPU is rated at 5.5 TFLOPS of single precision compute power and runs at a TDP of no more than 150 watts.
Compared to the existing Hawaii-based R9 390X, the upcoming R9 400 series Polaris 10 GPU has fewer shaders and less memory bandwidth. Its memory is clocked 1 GHz higher, but the GDDR5X bus is half the width of the 390X's 512-bit GDDR5 bus, which works out to 224 GB/s of memory bandwidth for Polaris 10 versus 384 GB/s on Hawaii. The R9 390X keeps a slight edge in compute performance at 5.9 TFLOPS versus Polaris 10's 5.5 TFLOPS; however, the Polaris 10 GPU uses much less power and easily wins on performance per watt. It nearly matches Hawaii's single precision compute at close to half the power, which is impressive if it holds true!
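Those bandwidth figures follow directly from bus width and per-pin data rate. A minimal sketch (the helper name is hypothetical, not anything from the leak):

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth: each pin of the bus moves
    data_rate_gbps gigabits per second; divide by 8 for bytes."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(256, 7.0))  # Polaris 10: 256-bit at 7 Gbps -> 224.0 GB/s
print(bandwidth_gbs(512, 6.0))  # Hawaii (R9 390X): 512-bit at 6 Gbps -> 384.0 GB/s
```

The same arithmetic drives the efficiency claim: 5.5 TFLOPS in 150 W is about 37 GFLOPS per watt, versus roughly 21 GFLOPS per watt for the 390X's 5.9 TFLOPS in 275 W, a roughly 70% improvement.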
| | R9 390X | R9 390 | R9 380 | R9 400-Series "Polaris 10" |
|---|---|---|---|---|
| GPU Code name | Grenada (Hawaii) | Grenada (Hawaii) | Antigua (Tonga) | Polaris 10 |
| Rated Clock | 1050 MHz | 1000 MHz | 970 MHz | ~1343 MHz |
| Memory Clock | 6000 MHz | 6000 MHz | 5700 MHz | 7000 MHz |
| Memory Bandwidth | 384 GB/s | 384 GB/s | 182.4 GB/s | 224 GB/s |
| TDP | 275 watts | 275 watts | 190 watts | 150 watts (or less) |
| Peak Compute | 5.9 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 5.5 TFLOPS |
| MSRP (current) | ~$400 | ~$310 | ~$199 | unknown |
Note: Polaris GPU clocks estimated using the assumption that 5.5 TFLOPS is the peak compute and the shader count is accurate. (Thanks Scott.)
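That estimate can be reproduced: assuming 2 FLOPs per shader per clock (one fused multiply-add), the core clock falls straight out of the rated peak compute. A small sketch of the back-calculation (hypothetical helper, same assumptions as the note):

```python
def estimated_clock_mhz(peak_tflops, shaders, flops_per_clock=2):
    """Invert peak FP32 = shaders * flops_per_clock * clock
    to estimate the core clock from a rated TFLOPS figure."""
    return peak_tflops * 1e12 / (shaders * flops_per_clock) / 1e6

print(round(estimated_clock_mhz(5.5, 2048)))  # Polaris 10 -> 1343 MHz
print(round(estimated_clock_mhz(2.5, 896)))   # Polaris 11 -> 1395 MHz
```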
Another comparison that can be made is to the Radeon R9 380, a Tonga-based GPU with a similar TDP. In this matchup, the Polaris 10 based chip will – at a slightly lower TDP – pack in more shaders, twice as much memory clocked faster with 23% more bandwidth, and a 58% increase in single precision compute horsepower. Not too shabby!
Likely, a good portion of these gains comes from the move to a smaller process node with FinFET "tri-gate"-like transistors on the Samsung/GlobalFoundries 14LPP manufacturing process, though AMD has also made architectural tweaks and hardware additions in the GCN 4.0 based processors. A brief high-level introduction was set to be given today in a webinar for AMD's partners (though AMD preemptively said that no nitty-gritty technical details would be divulged yet). (Update: Tech Altar summarized the partner webinar. Unfortunately, there were no major reveals other than that AMD will not stop AIB partners from pushing for the highest factory overclocks they can get.)
Moving on from Polaris 10 for a bit: Polaris 11 is rumored to be a smaller GCN 4.0 chip that will top out at 14 CUs (an estimated 896 shaders/stream processors) and 2.5 TFLOPS of single precision compute power. These chips, aimed at mainstream and thin-and-light laptops, will have 50 W TDPs and will be paired with up to 4GB of GDDR5 memory. There is apparently no GDDR5X option here, which makes sense at this price point and performance level. The 128-bit bus is a bit limiting, but this is a low end mobile chip we are talking about...
| | R7 370 | R7 400 Series "Polaris 11" |
|---|---|---|
| GPU Code name | Trinidad (Pitcairn) | Polaris 11 |
| Rated Clock | 925 MHz base (975 MHz boost) | ~1395 MHz |
| Memory | 2 or 4GB | 4GB |
| Memory Clock | 5600 MHz | ? MHz |
| Memory Bandwidth | 179.2 GB/s | ? GB/s |
| TDP | 110 watts | 50 watts |
| Peak Compute | 1.89 TFLOPS | 2.5 TFLOPS |
| MSRP (current) | ~$140 (less after rebates and sales) | ? |
Note: Polaris GPU clocks estimated using the assumption that 2.5 TFLOPS is the peak compute and the shader count is accurate. (Thanks Scott.)
Fewer details were unveiled concerning Polaris 11, as you can see from the chart above. From what we know so far, it should be a promising successor to the R7 370 series: even with the memory bus limitation and lower shader count, the GPU should be clocked higher (M series mobile variants may also carry more shaders than the 370-and-lower mobile series) at a much lower TDP, for at least equivalent performance if not a decent increase. The lower power usage in particular will be hugely welcome in mobile devices, since it should translate to longer battery life under the same workloads. I picked the R7 370 as the comparison because it has 4 gigabytes of memory and not many more shaders, and as a desktop chip it may be more widely familiar to readers. Polaris 11 also appears to sit between the R7 360 and R7 370 in shader count and other features, yet is allegedly going to be faster than both while using (at least on paper) less than half the power.
Of course, these are still rumors until AMD makes Polaris officially, well, official with a product launch. The claimed specifications appear reasonable, though, and based on them I have a few important takeaways and thoughts.
The first thing on my mind is that AMD is taking an interesting direction here. NVIDIA has chosen to start its new generation at the top, announcing "big Pascal" GP100 and actually launching the GP104-based GTX 1080 (one of its highest end consumer cards) yesterday, then introducing lower end products over the course of the year. AMD has opted for the opposite approach. It will start closer to the lower end with a mainstream notebook chip and a high end notebook/mainstream desktop GPU (Polaris 11 and 10, respectively), then flesh out its product stack over the following year (remember, Raja Koduri stated Polaris and GCN 4 would be used across the entire product stack), building up with bigger and higher end GPUs and finally topping off with its highest end consumer (and professional) GPUs based on "Vega" in 2017.
This means that, for some time after both architectures launch, AMD's and NVIDIA's newest GPUs will not be directly competing with each other. I'm not sure whether this was planned by either company or is just how it happened to work out as each followed its own GPU philosophy (I suspect the latter). Eventually they should meet in the middle (maybe late this year?) with a mid-range desktop graphics card, and it will be interesting to see how they stack up at similar price points and hardware levels. Then, once "Vega" based GPUs hit (sadly, probably in time for NVIDIA's big Pascal to launch, heh; I'm not sure whether Vega is only a Fury X replacement or goes beyond that to compete with a 1080 Ti or even GP100), we should see GCN 4 on the new smaller process node square up against NVIDIA and its 16nm Pascal products across the entire lineup. Which will have the better performance, and which will win on power usage, performance per watt, and performance per dollar? All questions I wish I knew the answers to, but sadly do not!
Speaking of price and performance per dollar... Polaris is actually looking pretty good so far at hitting much lower TDPs and power usage while delivering at least similar performance, if not a good bit more. Both AMD and NVIDIA appear to be bringing out GPUs better than I expected in terms of performance and power improvements (these die shrinks have really helped, even though that trend isn't going to continue much longer...). I hope AMD can at least match NVIDIA in these areas at the mid range, even though it has no high end GPU coming out soon (not until sometime after these cards launch, and not really until Vega, the high end GCN successor). At least on paper, based on the leaked information, the GPUs look good. My only worry is pricing, which I think will make or break these cards. AMD will need to price them competitively and aggressively to ensure their adoption and success.
I hope that doing the rollout this way (starting with the lower end chips) helps AMD iron out the new smaller process node and achieve good yields, so that it can price aggressively here and, eventually, at the high end!
I am looking forward to more information on AMD's Polaris architecture and the graphics cards based on it!
- AMD Capsaicin GDC Live Stream and Live Blog TODAY!!
- AMD GPU Roadmap: Capsaicin Names Upcoming Architectures
- AMD's Raja Koduri talks moving past CrossFire, smaller GPU dies, HBM2 and more.
- AMD High-End Polaris Expected for 2016
- CES 2016: AMD Shows Polaris Architecture and HDMI FreeSync Displays
I will admit that I am not 100% up on all the rumors and I apologize for that. With that said, I would love to hear what your thoughts are on AMD's upcoming GPUs and what you think about these latest rumors!
Subject: Editorial, General Tech | April 13, 2016 - 01:57 PM | Jeremy Hellstrom
Tagged: creative assembly, warhammer fantasy, total war, dlc, gaming
After committing the double sin of pimping preorders and announcing Day 1 DLC before the release date, The Creative Assembly seems to be trying to win back some of their fans by offering free new content for all some time down the road. There will be new Legendary Lords, magic items, quest chains, and units toward the end of the year. If you want to play as Chaos, you will still have to preorder the game or pay for it after release.
The offer of free content is appreciated, apart from one small problem: the game's release date is still over a month away. The promise of future free content seems to be a thinly veiled effort to boost preorder sales, since many of us have refused to take them up on their offer. Hopefully this is a hint that the industry is beginning to realize that publishing the full game will draw more customers than releasing a partial game with DLC already planned.
Iceberg Interactive has a much better model: Endless Legend was released as planned, and once they realized how popular the game was, they put effort into adding entirely new features and races. Instead of taunting their customers with DLC announced alongside the release, they have treated it as a reward for customer loyalty. Then again, perhaps their customers are the exception, and The Creative Assembly's announcement will succeed in selling more copies of the game before the release date.
"Now, developers The Creative Assembly have released details of their post-release plans and that includes loads of free add-ons. There will be new Lords with their own quest chains, items and campaign bonuses, new magic, and, most intriguing of all, an entire new playable race."
Here is some more Tech News from around the web:
- Endless Legend Launches New Expansion, Holds Sale @ Rock, Paper, SHOTGUN
- Death, Despair And Lovely Cutscenes: The Banner Saga 2 @ Rock, Paper, SHOTGUN
- Titanfall 2 teaser trailer published by Respawn Entertainment @ HEXUS
- Hitman Review @ OCC
- Burly Men At Sea Offers Hyperstylised Folktales @ Rock, Paper, SHOTGUN
- Everybody's Gone to the Rapture arrives on PC/Steam tomorrow @ HEXUS
- Wot I Think: Baldur’s Gate: Siege Of Dragonspear @ Rock, Paper, SHOTGUN
- Dark Souls 3 review @ Polygon
Seeing Ryan transition from long-time Android user to iOS late last year got me thinking. While I've had hands-on time with flagship phones from many manufacturers since then, I haven't actually carried an Android device with me since the Nexus S (eventually upgraded to 4.0 Ice Cream Sandwich). Maybe it was time to go back in order to gain a more informed perspective on the mobile device market as it stands today.
So that's exactly what I did. When we received our Samsung Galaxy S7 review unit (full review coming soon, I promise!), I decided to go ahead and put a real effort forth into using Android for an extended period of time.
Full disclosure: I am still carrying my iPhone, since we received a T-Mobile-locked unit and my personal number is on Verizon. However, I have been using the S7 for everything but phone calls and the occasional text message to people who only have my iPhone number.
Now, one of the questions you might be asking is why I chose the Galaxy S7, of all devices, to make this transition with. Most Android aficionados would probably insist that I choose a Nexus device to get the best experience, the one Google intends when developing Android. While those people aren't wrong, I decided to go with a more popular device rather than the more niche Nexus line.
Whether you like Samsung's approach or not, the fact is that they sell more Android devices than anyone else, and the Galaxy S7 will be their flagship offering for the next year or so.
Subject: Editorial, General Tech | March 30, 2016 - 08:00 AM | Tim Verry
Tagged: U-Verse, opinion, isp, Internet, FTTN, FTTH, editorial, data cap, AT&T
AT&T U-Verse internet users will soon feel the pain of the company's old school DSL users in the form of enforced data caps and overage charges for exceeding new caps. In a blog post yesterday, AT&T announced plans to roll out new data usage caps for U-Verse users as well as a ('Comcastic') $30 per month option for unlimited data use.
Starting on May 23, 2016, AT&T U-Verse (VDSL2 and GigaPower fiber) customers will see an increase in their usage allowance based on their speed tier. Currently, U-Verse FTTN customers have a 250 GB cap regardless of speed tier, while FTTH customers in GigaPower markets have a higher 500 GB cap. These were soft caps and were not enforced, meaning customers were not charged for exceeding them. That will soon change: all U-Verse customers will be charged for going over their cap at a rate of $10 for every 50 GB over. Even if you use only 1 GB over the cap, you will still be charged the full $10 fee.
The new U-Verse caps (also listed in the chart below) range from 300 GB for speeds up to 6 Mbps, to 600 GB for everything up to the bonded-pair 75 Mbps tier. At the top end, customers lucky enough to get fiber to the home with speed plans up to 1 Gbps will have a 1 TB cap.
| Internet Tier | New Data Cap | Overage Charges |
|---|---|---|
| AT&T DSL (all speeds) | 150 GB | $10 per 50 GB |
| AT&T U-Verse (768 Kbps – 6 Mbps) | 300 GB | $10 per 50 GB |
| AT&T U-Verse (12 Mbps – 75 Mbps) | 600 GB | $10 per 50 GB |
| AT&T U-Verse FTTH (100 Mbps – 1 Gbps) | 1 TB | $10 per 50 GB |
U-Verse customers who expect to go more than 500 GB over their data cap ($100 is the maximum overage charge) or who simply prefer not to track their data usage can opt to pay an additional $30 monthly fee to be exempt from their data cap.
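The billing math is simple enough to sketch. This hypothetical helper uses the figures from AT&T's announcement ($10 per started 50 GB block, a $100 overage ceiling, and the $30 unlimited add-on):

```python
import math

def monthly_extra_charge(usage_gb, cap_gb, unlimited=False):
    """Extra monthly cost: $10 per started 50 GB block over the cap,
    capped at $100 total, or a flat $30 for the unlimited option."""
    if unlimited:
        return 30
    over_gb = max(0, usage_gb - cap_gb)
    return min(100, math.ceil(over_gb / 50) * 10)

print(monthly_extra_charge(601, 600))   # 1 GB over still bills a full block: $10
print(monthly_extra_charge(1300, 600))  # 700 GB over would be $140, capped at $100
print(monthly_extra_charge(1300, 600, unlimited=True))  # flat $30
```

Note the break-even point: at $10 per 50 GB block, the $30 unlimited option pays for itself once you expect to run more than 150 GB over your cap.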
It's not all bad news, though. General wisdom has always been that U-Verse customers subscribed to both internet and TV would be exempt from the caps even if AT&T started enforcing them. This is not changing: U-Verse customers subscribed to U-Verse TV (IPTV) or DirecTV on a double play package with U-Verse internet will officially be exempt from the cap, receiving the $30/month unlimited data option for free.
AT&T DSL users continue to be left behind here as they will not receive an increase in their 150 GB data allowance, and from the wording of the blog post it appears that they will further be left out of the $30 per month unlimited data option (which would have actually been a very welcome change for them).
Karl Bode over at DSLReports adds a bit of interesting history, noting that AT&T originally stated U-Verse users would not be subject to a hard data cap because of the improved network architecture and its "greater capacity" versus the old school CO-fed DSL lines. With the acquisition of DirecTV, and the way AT&T has been heavily pushing DirecTV while steering customers away from its IPTV-based U-Verse TV service, it actually seems like the perfect time not to enforce data caps: customers moving to DirecTV satellite service would free up a great deal of bandwidth on the VDSL2 wireline network for internet!
This recent move is very reminiscent of Comcast's as it "trials" data caps and overages in certain markets, complete with its own extra monthly charge for unlimited data. Considering the relatively minuscule cost of delivering this data versus the monthly service charges, these new unlimited options seem more about seeking profit than about any increased costs, especially since customers have effectively had unlimited data this whole time and will soon be charged for the same service they may have been using for years. I will give AT&T some credit for implementing more realistic data caps and bumping everyone up by speed tier (something Comcast should adopt if it insists on having caps). Also, letting internet+TV customers keep unlimited data is a good thing, even if it is only there to discourage cord cutting.
The final bit of good news is that existing U-Verse customers will have approximately four months before they are charged for going over their data caps. AT&T claims it will only begin charging for overages on the third billing cycle, giving customers at least two 'free' months of overages. Users can switch between the unlimited and capped options at will, even mid billing cycle, and during the first two months the company will send as many as seven email reminders at various usage points as customers approach their cap, as a warning before overage charges kick in.
This is a lot to take in, but there is still plenty of time to figure out how the changes will affect you.
Are you a U-Verse or AT&T DSL user? What do you think about the new data caps for U-Verse users and the $30/month unlimited data option?
Subject: Editorial | March 28, 2016 - 08:44 PM | Scott Michaud
Tagged: windows 10, Oculus, microsoft
— Tim Sweeney (@TimSweeneyEpic) March 28, 2016
... and so am I.
When you develop software, you will always be reliant upon platforms. You use their interfaces to make your hardware do stuff. People who maintain these will almost always do so with certain conditions. In iOS's case, you must have all of your content certified by Apple before it can be installed. In Linux's case, if you make any changes to the platform and distribute them, you need to also release what those changes are.
Sometimes these conditions are enforced with copyright law. More recently, some platform vendors enforce them with chains of trust built on strong cryptographic keys. This means that, unless Apple, Microsoft, Oculus, or whoever else makes a mistake, members of society can be entirely locked out of creating and installing content.
This has pros and cons.
On the one hand, it can be used to revoke malware authors, scammers, and so forth. These platforms, being more compact, are usually easier to develop for, and might even be portable across deeper platforms, like x86 or ARM.
On the other hand, it can be used to revoke anything else. Imagine that you live in a jurisdiction where the government wants to ban encryption software. Imagine you live in a jurisdiction where the government wants to ban art featuring characters who are LGBT. Imagine you just want to use your hardware in a way that the vendor does not support, such as our attempts to measure UWP application performance.
We need to be extra careful when dealing with good intentions. These are the situations where people ignore potential abuses because they are blinded by their justifications. This should not be taken lightly: when you build something, you build it for everyone to use and abuse, including those acting intentionally, or those blinded by their own justifications, which may well oppose yours.
For the sake of art and continued usability, Microsoft, Oculus, and everyone else need to ensure that their platforms cannot be abused. They are not governments, and they have no legal requirement to grant users free expression, but these choices can cause genuine harm. As the owner of a platform, you should respect the power your platform enables society to wield, and implement safeguards so that you can continue to provide it going forward.
Subject: Editorial | March 6, 2016 - 11:05 AM | Ryan Shrout
Tagged: video, streaming out loud, sol., pcper live, live
Missed the 12-hour event? Live the magic for yourself here:
Several weeks ago, I tossed out the idea of doing a long-form live stream with the goal of showcasing for our readers, viewers and fans what we do around here. Why not dedicate a full day to interviewing guests, playing some games, doing some Q&A and putting together some projects? Well that's what we are doing.
Let me introduce you to...
Streaming Out Loud - PCPer Live!
Starts: 9am PT / 12pm ET
Ends: 9pm PT / 12am ET
Need a reminder? Join our live mailing list!
That's right, we are hosting a 12-hour live stream on PC Perspective in which we will drag in as many guests as possible to talk shop, give away some hardware, and celebrate PC enthusiasts and technology!
- Patrick Norton, tekthing.com
- Tom Petersen, NVIDIA
- Andrew Coonrad, Logitech
- Jacob Freeman, EVGA
- David Hewlett, The Internet
- Dan Baker, Oxide Games
- Ben Kuchera, Polygon.com
- 650 GQ Power Supply
- 650 P2 Power Supply
- Z170 Classified K
- GTX 970 (3975)
- AOC G2460PF FreeSync 24" 1080p TN
- VOID Surround RGB Headset
- M65 RGB Mouse
- Strafe RGB Keyboard MX Silent
- G502 Proteus Spectrum mouse
- G810 Orion Spectrum keyboard
- G640 mouse pad
- X99S SLI Krait Edition motherboard
- 5x Thunder Storm gaming mouse pads
- OCZ Storage Solutions
- 2x Trion 150 480GB SSDs
- More to be confirmed!!
Activities (schedule to be determined):
- Allyn teaches soldering
- Future of VR discussion
- Q&A from chat and Twitter
- Building a table PC
- Gaming sessions: Rocket League, UT2004, more
- Ken vs. Ryan Steam Controller Challenge
- Riveting game of RISK on a table-top PC
And of course, who wouldn't want to tune in and see the carnage of a team of wily computer nerds attempt to keep a live stream on and stable for the entirety of a 12 hour day? If nothing else, it might be fun to see what breaks, right?
I want to thank our friends and sponsors for putting together some prizes for us, as well as the guests who are willingly going to spend some of their Sunday with us, all in the name of PC gaming and PC hardware!
Have anything specific you want us to cover or discuss? Let me know in the comments below!! Don't forget to sign up for our PC Perspective Live! Mailing List to get the latest updates on dumb shit like this we will be doing in the future!
PS: You can find the schedule for Sunday's live stream festivities after the break!
28HPCU: Cost Effective and Power Efficient
Have you ever been approached about something that, upon first hearing about it, just did not seem very exciting, only for it to become much more interesting once you dug in? That happened to me with this announcement. At first blush, who really cares that ARM is partnering with UMC at 28 nm? Well, once I was able to chat with the people at ARM, it turned out to be much more interesting than I initially expected.
The new hotness in fabrication is the latest 14 nm and 16 nm processes from Samsung/GF and TSMC, respectively. It has been a good 4+ years since we last had a new process node that actually performed as expected. The planar 22/20 nm products simply were not suitable for broad mass production. Apple was one of the few to develop a part for TSMC's 20 nm process that actually sold in the millions. The main problem was a lack of power and speed scaling compared to 28 nm processes. Planar was a bad choice at that geometry, but FinFET technologies could not be implemented in time for third-party manufacturers to ship them at that node.
There is a problem with the latest process generations, though: they are new, expensive, and production constrained. They also may not be entirely appropriate for every application being developed. By comparison, 28 nm has several strengths. These are mature processes with an excess of line space, and the major fabs are offering very competitive pricing structures for 28 nm as higher end SoCs, GPUs, and assorted ASICs migrate to the new nodes and free up capacity.
TSMC has typically been at the forefront of R&D on advanced nodes. UMC is not as aggressive in its development; it tends to let others do the heavy lifting and then integrates new nodes when they fit its pricing and business model. TSMC is on its third generation of 28 nm. UMC is on its second, but that generation encompasses many of the advanced features of TSMC's third, so it is actually quite competitive.
Subject: Editorial | January 27, 2016 - 01:27 PM | Josh Walrath
Tagged: Thrustmaster, T150, Rocket League, racing wheel, racing, project cars, livestream, GRID Autosport, gaming, force feedback, DiRT Rally, Assetto Corsa
Did you miss the live stream for yesterday's racing action? No worries, catch up on the replay right here!
On Thursday, January 28th at 5:30 PM ET we will be hosting a livestream featuring some racing by several of our writers. We welcome our readers to join up and race with us! None of us are professionals, so there is a very good chance that anyone who joins can easily outrace us!
We have teamed up with Thrustmaster to give away the TM T150 Racing Wheel! The MSRP on this number is $199.99, but we are giving it away for free. We reviewed it a few months ago, and the results were very good for the price point. You can read that entire review here!
We will be playing multiple games throughout the livestream, so get those Steam clients fired up and updated.
We will be racing through the Rallycross portion of DiRT Rally. These are fun races and fairly quick. Don't forget the Joker lap!
This is another favorite and features a ton of tracks and cars with some interesting tire (tyre) physics thrown in for good measure!
Another fan favorite with lovely graphics and handling/physics that match the best games out there.
We will be announcing how to join up in the contest during the livestream! Be sure to tune in!