Subject: Memory | August 25, 2016 - 02:39 AM | Tim Verry
Tagged: TSV, SK Hynix, Samsung, hot chips, hbm3, hbm
Samsung and SK Hynix were in attendance at the Hot Chips Symposium in Cupertino, California to (among other things) talk about the future of High Bandwidth Memory (HBM). In fact, the companies are working on two new HBM products: HBM3 and an as-yet-unbranded "low cost HBM." HBM3 will replace HBM2 at the high end and is aimed at the HPC and "prosumer" markets while the low cost HBM technology lowers the barrier to entry and is intended to be used in mainstream consumer products.
As currently planned, HBM3 (Samsung refers to its implementation as Extreme HBM) features double the density per layer and at least double the bandwidth of the current HBM2 (which so far is only used in NVIDIA's planned Tesla P100). Specifically, the new memory technology offers 16Gb (~2GB) per layer, and eight or more layers can be stacked together using TSVs into a single chip. So far we have seen GPUs use four HBM chips on a single package, and if that holds true with HBM3 and interposer size limits, we may well see future graphics cards with 64GB of memory! Considering the HBM2-based Tesla will have 16GB and AMD's HBM-based Fury X cards had 4GB, HBM3 is a sizable jump!
Capacity is not the only benefit, though. HBM3 doubles the bandwidth of HBM2, with 512GB/s (or more) of peak bandwidth per stack! In the theoretical example of a graphics card with 64GB of HBM3 (four stacks), that works out to roughly 2 TB/s of peak bandwidth! Real-world numbers will be lower, but that is still a tremendous amount of bandwidth, which is exciting because it opens a lot of possibilities for gaming as developers push graphics further toward photorealism and resolutions keep increasing. HBM3 should be plenty for a while as far as keeping the GPU fed with data on the consumer and gaming side of things, though I'm sure the HPC market will still crave more bandwidth.
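For the curious, the headline numbers reduce to simple arithmetic. This quick sketch uses the figures above; the layer and stack counts are the article's illustrative assumptions, not a ratified spec.

```python
# Back-of-the-envelope HBM3 math using the figures in the article.
# Layer/stack counts are assumptions based on current HBM designs.
gb_per_layer = 2          # 16Gb per layer = ~2GB
layers_per_stack = 8      # "eight (or more) layers"
stacks_per_gpu = 4        # what we have seen on HBM GPUs so far
bw_per_stack_gb_s = 512   # GB/s, claimed minimum per HBM3 stack

capacity_gb = gb_per_layer * layers_per_stack * stacks_per_gpu
bandwidth_tb_s = bw_per_stack_gb_s * stacks_per_gpu / 1024

print(capacity_gb, "GB")                 # 64 GB
print(round(bandwidth_tb_s, 2), "TB/s")  # 2.0 TB/s
```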
Samsung further claims that HBM3 will operate at similar (~500MHz) clocks to HBM2, but will use "much less" core voltage (HBM2 is 1.2V).
Stacked HBM memory on an interposer surrounding a processor. Upcoming HBM technologies will allow memory stacks with double the number of layers.
HBM3 is perhaps the most interesting technologically; however, the "low cost HBM" is exciting in that it will enable HBM to be used in the systems and graphics cards most people purchase. Fewer details were available on this lower cost variant, but Samsung did share a few specifics. The low cost HBM will offer up to 200GB/s of peak bandwidth per stack while being much cheaper to produce than current HBM2. To reduce production costs, there is no buffer die or ECC support, and the number of Through Silicon Via (TSV) connections has been reduced. To compensate for the lower number of TSVs, the pin speed has been increased to 3Gbps (versus 2Gbps on HBM2). Interestingly, Samsung would like low cost HBM to support traditional silicon as well as potentially cheaper organic interposers. According to NVIDIA, TSV formation is the most expensive part of interposer fabrication, so making reductions there (and partly making up for it with increased per-connection speeds) makes sense for a cost-conscious product. It is unclear whether organic interposers will win out here, but it is nice to see them get a mention, and they are an alternative worth looking into.
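As a sanity check on that TSV trade-off, you can back out the implied bus width from the quoted bandwidth and pin speed. The HBM2 figures below are the known 1024-bit, 2Gbps-per-pin spec; the low cost HBM numbers are the ones from the article.

```python
# Implied bus width in bits = bandwidth (GB/s) * 8 bits/byte / pin speed (Gbps).
def implied_width_bits(bandwidth_gb_s, pin_speed_gbps):
    return bandwidth_gb_s * 8 / pin_speed_gbps

# HBM2: 256 GB/s per stack at 2 Gbps/pin -> the known 1024-bit interface.
hbm2_width = implied_width_bits(256, 2.0)
# Low cost HBM: 200 GB/s per stack at 3 Gbps/pin -> roughly 533 bits,
# suggesting around half the TSV/data connections of HBM2.
low_cost_width = implied_width_bits(200, 3.0)
```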
Both the high bandwidth and low cost memory technologies are still years away and the designs are subject to change, but so far both plans are looking rather promising. I am intrigued by the possibilities and hope to see new products take advantage of the increased performance (and, in the latter case, lower cost). On the graphics front, HBM3 is way too far out to see a Vega release, but it may come just in time for AMD to incorporate it into its high end Navi GPUs, and by 2020 the battle between GDDR and HBM in the mainstream should be heating up.
What are your thoughts on the proposed HBM technologies?
Subject: Cases and Cooling | August 24, 2016 - 06:43 PM | Jeremy Hellstrom
Tagged: SX700-LPT, small form factor, SFX, 80 Plus Platinum, modular psu
Do you recall the new long playing version of the SFX PSU form factor, specifically Lee's review of the SilverStone SFX-L 700W PSU? Perhaps you have forgotten about this form factor, which offers cooling similar to a full ATX PSU but takes up a lot less room. Fret not, [H]ard|OCP is here to remind you with a fresh review of the PSU. Their tests revealed the same strengths as Lee's: perhaps not outstanding, but certainly a very good choice for a PSU. They did dock points for the lack of an included adapter for ATX mounting; adapters are available separately, but the oversight on SilverStone's part seems worth mentioning.
"SilverStone has a new take on small form factor power supplies it is calling "SFX-L." This new form factor extends the standard SFX size by 30mm allowing SilverStone to install a quieter 120mm fan than the usual higher speed and noisier 80mm and 92mm fans. How does all this work out?"
Here are some more Cases & Cooling reviews from around the web:
- Seasonic Prime 650W Titanium @ Kitguru
- Cougar LX Series 600 W @ techPowerUp
- Seasonic Prime 850W Titanium @ Kitguru
- Cooler Master MasterWatt Maker 1200 @ Kitguru
Subject: General Tech | August 24, 2016 - 04:15 PM | Sebastian Peak
Tagged: utilities, SoC, snapdragon, Smart Ballpark, San Diego, qualcomm, Padres, OSIsoft, iot, industrial, baseball
Ever wonder how efficiently a major venue operates when it's only full of fans on game days? It turns out they don't operate all that efficiently, and the overhead is very expensive. This is where Qualcomm and OSIsoft step in, collaborating on a new “Smart Ballpark” project for San Diego's Petco Park.
“The San Diego Padres are utilizing edge intelligence gateways, powered by Qualcomm Snapdragon processors, to collect data from critical infrastructure systems and stream it in real-time to OSIsoft’s PI System in order to monitor utilities, improve operating efficiencies and drive sustainability across the team’s entire Petco Park ballpark.”
With usage monitoring for utilities (electrical and gas energy, potable and non-potable water) the Padres - San Diego’s Major League Baseball team that calls Petco Park home - see the potential to save more than 25% in the next five years.
“The edge intelligence gateways, using Snapdragon processors, connect to sensors and legacy systems throughout the ballpark using a broad range of communication methods, including wired and wireless technologies, analog and digital inputs and multiple communication protocols. These edge intelligence gateways acquire, store and stream data in real-time to the OSIsoft PI System which then presents the data to the Padres’ facilities managers using OSIsoft’s Visualization Suite and analytics, providing the operations team with deep situational awareness of everything happening in the venue.”
This is a mammoth implementation of IoT (Internet of Things), with OSIsoft’s PI System a major player on the industrial side. Qualcomm naturally needs no introduction, as the smartphone SoC maker's chips are found in so many devices across virtually all brands. Qualcomm has also worked on improving mobile data performance in large venues such as ballparks, with products like the X16 modem (expected in products starting in the second half of 2016) offering improved connections via carrier and link aggregation, and use of unlicensed spectrum.
Full press release after the break:
Subject: General Tech | August 24, 2016 - 03:30 PM | Jeremy Hellstrom
Tagged: gaming, deus ex: mankind divided
You are probably wondering what kind of performance you will see when you run the new Deus Ex after you purchase it; as obviously you did not pre-order the game. TechPowerUp has you covered: they have tested the retail version of the game with a variety of cards to give you an idea of the load your GPU will be under. They started out testing memory usage with a Titan; running Ultra settings at 4K will use up to 5.5GB of memory, so mid-range cards will certainly suffer at that point. Since not many of us are sporting Titans in our cases, they also tried out the GTX 1060, 980 Ti, and 1080 along with the RX 480 and Fury X at a variety of settings. Read through their review to garner a rough estimate of your expected performance in Mankind Divided.
"Deus Ex Mankind Divided has just been released today. We bring you a performance analysis using the most popular graphics cards, at four resolutions, including 4K, at both Ultra and High settings. We also took a closer look at VRAM usage."
Here is some more Tech News from around the web:
- Dawn of War 3: The most promising take on Warhammer 40K yet @ Ars Technica
- Total Warhammer: Grim & The Grave DLC Announced @ Rock, Paper, SHOTGUN
- Waaagh! WH40K: Armageddon – Da Orks Released @ Rock, Paper, SHOTGUN
- AMD and Nvidia tempt customers with new game bundles @ HEXUS
- How To Skip Deus Ex: Mankind Divided Intro Videos @ Rock, Paper, SHOTGUN
- Playerkind Divided – How’s Deus Ex Running For You? @ Rock, Paper, SHOTGUN
- Ubisoft showcases Watch Dogs 2, For Honor, The Crew and more @ HEXUS
- Premature Evaluation: Rogue System @ Rock, Paper, SHOTGUN
- Divinity: Original Sin 2 Smartly Reinvents The RPG Party @ Rock, Paper, SHOTGUN
- Dishonored 2 Gamescom Trailer Shows Emily’s Skillz @ Rock, Paper, SHOTGUN
Subject: General Tech | August 24, 2016 - 01:01 PM | Jeremy Hellstrom
Tagged: ultraportable, LPDDR4, Intel, apollo lake
A report from DigiTimes is bad news for those who like to upgrade their ultraportable laptops. To cut down on production costs, companies like Acer, Lenovo, Asustek Computer, HP and Dell will use on-board memory as opposed to DIMMs on their Apollo Lake based machines. This should help keep the costs of flipbooks, 2-in-1s and other small machines stable, or even lower them by a small amount, but it does mean that they cannot easily be upgraded. Many larger notebooks will also switch to this style of memory, so be sure to do your research before purchasing a new mobile system.
"Notebook vendors have mostly adopted on-board memory designs in place of DIMMs to make their Intel Apollo Lake-based notebooks as slim as possible, according to sources from Taiwan's notebook supply chain"
Here is some more Tech News from around the web:
- Microsoft, Lenovo cross-licensing love-in: Android mobes knocked up with... Office apps @ The Register
- Fifth of science papers on genes contain errors caused by Excel @ The Inquirer
- Roomba vs Poop: Teaching Robots to Detect Pet Mess @ Hack a Day
- Google broke its own cloud by doing two updates at once @ The Register
- A Design Defect Is Plaguing Many iPhone 6 and 6 Plus Units @ Slashdot
- AVM FRITZ!Powerline 1240E WLAN Set Review @ NikKTech
Subject: Graphics Cards | August 24, 2016 - 10:34 AM | Ryan Shrout
Tagged: nvidia, market share, jpr, jon peddie, amd
As reported by both Mercury Research and now by Jon Peddie Research, in a graphics add-in card market that dropped dramatically in Q2 2016 in terms of total units shipped, AMD has gained significant market share against NVIDIA.
| GPU Supplier | Market share this QTR | Market share last QTR | Market share last year |
|---|---|---|---|
| AMD | 29.9% | | 18% |
| NVIDIA | 70% | | 81.9% |
Source: Jon Peddie Research
Last year at this time, AMD was sitting at 18% market share in terms of units sold, an absolutely dismal result compared to NVIDIA's dominating 81.9%. Over the last couple of quarters we have seen AMD gain in this space, and keeping in mind that Q2 2016 does not include sales of AMD's new Polaris-based graphics cards like the Radeon RX 480, the jump to 29.9% is a big move for the company. As a result, NVIDIA falls back to 70% market share for the quarter, which is still a significant lead over AMD.
Numbers like that shouldn't be taken lightly - for AMD to gain 7 points of market share in a single quarter indicates a substantial shift in the market. This includes all add-in cards: budget, mainstream, enthusiast and even workstation class products. One report I received says that NVIDIA card sales specifically dropped off in Q2, though the exact reason why isn't known, and as a de facto result, AMD gained sales share.
There are several other factors to watch with this data, however. First, graphics card sales dropped -20% in Q2 when compared to Q1. That is well above the average seasonal Q1-to-Q2 drop, which JPR claims is -9.7%. Much of this sell-through decrease is likely due to consumers delaying their purchases in anticipation of the NVIDIA Pascal and AMD Polaris GPU releases.
The NVIDIA GeForce GTX 1080 launched on May 17th and the GTX 1070 on May 29th. The company has made very bold claims about product sales of Pascal parts so I am honestly very surprised that the overall market would drop the way it did in Q2 and that NVIDIA would fall behind AMD as much as it has. Q3 2016 may be the defining time for both GPU vendors however as it will show the results of the work put into both new architectures and both new product lines. NVIDIA reported record profits recently so it will be interesting to see how that matches up to unit sales.
Hey Ken, print me out a shroud! ASUS introduces 3D printed parts for their motherboards, GPUs and peripherals
Subject: General Tech | August 23, 2016 - 07:02 PM | Jeremy Hellstrom
Tagged: asus, 3d printing
ASUS have just released FreeCAD and 3D source files for you to print out embellishments and useful add-ons for your ASUS motherboards, graphics cards and peripherals such as the ROG Spatha. Below you can see a 3D-printed finger rest addition to the Spatha, just one of the possibilities this new program opens up.
Certain motherboards such as the Z170 Pro Gaming/Aura sport mounting points specifically designed for 3D printed parts, or you can use empty M.2 mount points, as the designs ASUS have made available use the same screws as an M.2 port.
You could add a nameplate, an additional fan mount or even extra plastic shielding, in whatever colours you have available to print with. ASUS chose to use FreeCAD to design the parts so that you do not necessarily need a 3D printer yourself, services such as ShapeWays can print the parts out for you.
If you have SLI or CrossFire bridges that need sprucing up, they have designed covers to snap over your existing parts as well as cable combs to keep your cables under control. The current designs only scratch the surface of what you could create and add to your builds and you can bet ASUS will be adding more possibilities in the coming months. You can just add a little something to make your machine unique or go all out with modifications, just check out the designs and see what grabs your attention.
Subject: Graphics Cards | August 23, 2016 - 04:18 PM | Tim Verry
Tagged: water cooling, pascal, hybrid cooler, GTX 1080, evga
EVGA recently launched a water cooled graphics card that pairs the GTX 1080 processor with the company's FTW PCB and a closed loop (AIO) water cooler to deliver a heavily overclockable card that will set you back $730.
The GTX 1080 FTW Hybrid is interesting because the company has opted to use the same custom PCB design as its FTW cards rather than a reference board. This FTW board features improved power delivery with a 10+2 power phase, two 8-pin PCI-E power connectors, Dual BIOS, and adjustable RGB LEDs. The cooler is shrouded with backlit EVGA logos and has a fan to air cool the memory and VRMs that is reportedly quiet and uses a reverse swept blade design (like their ACX air coolers) rather than a traditional blower style fan. The graphics processor is cooled by a water loop.
The water block and pump sit on top of the GPU with tubes running out to the 120mm radiator. Luckily, the fan on the radiator can be easily disconnected, allowing users to swap in their own fan if they wish. According to YouTuber Jayztwocents, the Precision XOC software controls the speed of the fan on the card itself, but users cannot adjust the radiator fan speed themselves. You can connect your own fan to your motherboard and control it that way, however.
Display outputs include one DVI-D, one HDMI, and three DisplayPort outputs (any four of the five can be used simultaneously).
Out of the box this 215W TDP graphics card has a factory overclock of 1721 MHz base and 1860 MHz boost. Thanks to the water cooler, the GPU stays at a frosty 42°C under load. When switched to the slave BIOS (which has a higher power limit and more aggressive fan curve), the GPU Boosted to 2025 MHz and hit 51°C (he managed to keep that to 44°C by swapping his own EK-Vardar fan onto the radiator). Not bad, especially considering the Founder's Edition hit 85°C on air in our testing! Unfortunately, EVGA did not touch the memory, leaving the 8GB of GDDR5X at the stock 10 GHz.
| | GTX 1080 | GTX 1080 FTW Hybrid | GTX 1080 FTW Hybrid Slave BIOS |
|---|---|---|---|
| Rated Clock | 1607 MHz | 1721 MHz | 1721 MHz |
| Boost Clock | 1733 MHz | 1860 MHz | 2025 MHz |
| Memory Clock | 10000 MHz | 10000 MHz | 10000 MHz |
| TDP | 180 watts | 215 watts | ? watts |
| MSRP (current) | $599 ($699 FE) | $730 | $730 |
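The factory clocks in the table work out to fairly modest gains over reference; a quick calculation (using the reference and FTW clocks from the table) puts the slave-BIOS boost nearly 17% above the stock GTX 1080.

```python
# Percentage gain of EVGA's factory clocks over the reference GTX 1080.
def gain_pct(overclocked_mhz, reference_mhz):
    return (overclocked_mhz / reference_mhz - 1) * 100

base_gain = gain_pct(1721, 1607)         # ~7.1% base clock bump
boost_gain = gain_pct(1860, 1733)        # ~7.3% boost clock bump
slave_boost_gain = gain_pct(2025, 1733)  # ~16.8% with the slave BIOS
```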
The water cooler should help users hit even higher overclocks and/or maintain a consistent GPU Boost clock at much lower temperatures than on air. The GTX 1080 FTW Hybrid graphics card does come at a bit of a premium at $730 (versus $699 for Founders or ~$650+ for custom models), but if you have the room in your case for the radiator this might be a nice option! (Of course custom water cooling is more fun, but it's also more expensive, time consuming, and addictive. hehe)
What do you think about these "hybrid" graphics cards?
Subject: Graphics Cards | August 23, 2016 - 01:43 PM | Jeremy Hellstrom
Tagged: amd, nvidia, Tilt Brush, VR
[H]ard|OCP continues their foray into testing VR applications, this time moving away from games to try out the rather impressive Tilt Brush VR drawing application from Google. If you have yet to see this software in action it is rather incredible, although you do still require an artist's talent and practical skills to create true 3D masterpieces.
Artistic merit may not be [H]'s strong suit, but testing how well a GPU can power VR applications certainly lies within their bailiwick. Once again they tested five NVIDIA GPUs and a pair of AMD cards for dropped frames and reprojection caused by a drop in FPS.
"We are changing gears a bit with our VR Performance coverage and looking at an application that is not as GPU-intensive as those we have looked at in the recent past. Google's Tilt Brush is a virtual reality application that makes use of the HTC Vive head mounted display and its motion controllers to allow you to paint in 3D space."
Here are some more Graphics Card articles from around the web:
- PowerColor Red Devil RX 470 Overclocking @ [H]ard|OCP
- MSI GeForce GTX 1060 OC 6 GB @ techPowerUp
- ASUS STRIX GAMING GTX 1070 OC @ eTeknix
- EVGA GeForce GTX 1070 FTW GAMING ACX 3.0 @ Bjorn3d
Subject: General Tech | August 23, 2016 - 12:40 PM | Jeremy Hellstrom
Tagged: hololens, microsoft, Tensilica, Cherry Trail, hot chips
Microsoft revealed information about the internals of the new holographic processor used in their Hololens at Hot Chips, the first peek we have had. The new headset is another win for Tensilica as they provide the DSP and instruction extensions; previously we have seen them work with VIA to develop an SSD controller and with AMD for TrueAudio solutions. Each of the 24 cores has a different task it is hardwired for, offering more efficient processing than software running on flexible hardware.
The processing power for your interface comes from a 14nm Cherry Trail processor with 1GB of DDR, and yes, your apps will run on Windows 10. For now the details are still sparse; there is a lot yet to be revealed about Microsoft's answer to VR. Drop by The Register for more slides and info.
"The secretive HPU is a custom-designed TSMC-fabricated 28nm coprocessor that has 24 Tensilica DSP cores. It has about 65 million logic gates, 8MB of SRAM, and a layer of 1GB of low-power DDR3 RAM on top, all in a 12mm-by-12mm BGA package. We understand it can perform a trillion calculations a second."
Here is some more Tech News from around the web:
- Fujitsu: Why we chose 64-bit ARM over SPARC for our exascale super @ The Register
- Deus Ex: Mankind Divided Now Bundled with Select AMD CPUs @ Guru of 3D
- Google begins posting Nexus images for the Android 7.0 Nougat update @ Ars Technica
- Your wget is broken and should DIE, dev tells Microsoft @ The Register
- Epic Games forum hack exposes 800,000 credentials @ The Inquirer
- Open Source Hardware Comes of Age @ Hardware Secrets
- Total War : Warhammer Giveaway Contest @ TechARP
Subject: Processors | August 22, 2016 - 05:37 PM | Jeremy Hellstrom
Tagged: amd, a10-7870K
Leaving aside the questionable naming, let us focus on the improved cooler on this ~$130 APU from AMD. Neoseeker fired up the fun sized, 125W rated cooler on top of the A10-7870K and were pleasantly surprised at the lack of noise, even under load. Encouraged by the performance, they overclocked the chip by 500MHz to 4.4GHz and were rewarded with a stable and still very quiet system. The review focuses more on the improvements the new cooler offers than on the APU itself, which has not changed. Check out the review if you are considering a lower cost system that only speaks when spoken to.
"In order to find out just how much better the 125W thermal solution will perform, I am going to test the A10-7870K APU mounted on a Gigabyte F2A88X-UP4 motherboard provided by AMD with a set of 16 GB (2 x 8) DDR3 RAM modules set at 2133 MHz speed. I will then run thermal and fan speed tests so a comparison of the results will provide a meaningful data set to compare the near-silent 125W cooler to an older model AMD cooling solution."
Here are some more Processor articles from around the web:
Subject: General Tech | August 22, 2016 - 03:14 PM | Jeremy Hellstrom
Tagged: AZIO, MGK L80, Kailh, gaming keyboard, input
The supply of mechanical keyboards continues to grow; once Cherry MX was the only switch available and only a few companies sold the products. Now we have a choice of switch manufacturer as well as switch type, beyond the familiar Red, Brown, Blue and so on. AZIO chose to use Kailh switches in their MGK L80 lineup, in your choice of click type, and also included a wrist rest for those who desire such a thing. Modders Inc tested out the three models on offer; they are a bit expensive but do offer a solid solution for your mechanical keyboard desires.
"The MGK L80 series is the latest line of gaming keyboards manufactured by AZIO. Available in red, blue or RGB backlighting, the MGK L80 offers mechanical gaming comfort with a choice of either Kailh brown or blue switch mounted on an elegant brushed aluminum surface."
Here is some more Tech News from around the web:
Subject: General Tech | August 22, 2016 - 01:26 PM | Jeremy Hellstrom
Tagged: microsoft, microsoft rewards, windows 10, bing, edge
If you remember Bing Rewards then this will seem familiar; otherwise, the gist of the deal is that if you browse with Edge and use Bing to search for 30 hours every month, you get a bribe similar to what credit card companies offer. You can choose between Skype credit, ad-free Outlook or Amazon gift cards, perhaps for aspirin to ease your Bing-related headache, if such things seem worth your while. The Inquirer points out that this is another reminder that Microsoft tracks all usage of Edge; otherwise they would not be able to verify the amount of Bing you used.
Then again, to carry on the credit card analogy ...
"Microsoft Rewards is a rebrand of Bing Rewards, the firm's desperate attempt to get people using the irritating default search engine, and sure enough the bribes for using Edge apply only if you use Bing too."
Here is some more Tech News from around the web:
- Plexistor unveils storage-stack-perplexing PMoF tech @ The Register
- Building A DIY Heat Pipe @ Hack a Day
- Microsoft buys Genee AI calendaring app and will close it down in a matter of days @ The Inquirer
- Two-speed Android update risk: Mobes face months-long wait @ The Register
Subject: General Tech | August 20, 2016 - 05:36 PM | Scott Michaud
Tagged: mozilla, webvr, Oculus
Earlier this month, the W3C published an Editor's Draft for WebVR 1.0. The specification has not yet been ratified, but the proposal is backed by engineers from Mozilla and Google. It enables the use of VR headsets in the web browser, including all the security required, such as isolating input to a single tab (in case you need to input a password while the HMD is on your face).
Firefox Nightly, as of August 16th, now supports the draft 1.0 specification.
The browser currently supports the Oculus CV1 and DK2 on Windows. It does not work with the DK1, although Oculus provided backers of that Kickstarter with a CV1 anyway, and it does not (yet) support the HTC Vive. It also only deals with the headset itself, not any motion controllers. I guess, if your application requires this functionality, you will need to keep working on native applications for a little while longer.
GlobalFoundries Will Allegedly Skip 10nm and Jump to Developing 7nm Process Technology In House (Updated)
Subject: Processors | August 20, 2016 - 03:06 PM | Tim Verry
Tagged: Semiconductor, lithography, GLOBALFOUNDRIES, global foundries, euv, 7nm, 10nm
UPDATE (August 22nd, 11:11pm ET): I reached out to GlobalFoundries over the weekend for a comment and the company had this to say:
"We would like to confirm that GF is transitioning directly from 14nm to 7nm. We consider 10nm as more a half node in scaling, due to its limited performance adder over 14nm for most applications. For most customers in most of the markets, 7nm appears to be a more favorable financial equation. It offers a much larger economic benefit, as well as performance and power advantages, that in most cases balances the design cost a customer would have to spend to move to the next node.
As you stated in your article, we will be leveraging our presence at SUNY Polytechnic in Albany, the talent and know-how gained from the acquisition of IBM Microelectronics, and the world-class R&D pipeline from the IBM Research Alliance—which last year produced the industry’s first 7nm test chip with working transistors."
An unexpected bit of news popped up today via TPU that alleges GlobalFoundries is not only developing 7nm technology (expected), but that the company will skip production of the 10nm node altogether in favor of jumping straight from the 14nm FinFET technology (which it licensed from Samsung) to 7nm manufacturing based on its own in house design process.
Reportedly, the move to 7nm would offer 60% smaller chips at three times the design cost of 14nm, which is to say that this would be both an expensive and impressive endeavor. Aided by Extreme Ultraviolet (EUV) lithography, GlobalFoundries expects to be able to hit 7nm production sometime in 2020, with prototyping and limited use of EUV in the year or so leading up to it. The in house process tech is likely thanks to the research being done at the APPC (Advanced Patterning and Productivity Center) in Albany, New York, along with the engineering expertise, design patents, and technology (e.g. ASML NXE 3300 and 3300B EUV tools) purchased from IBM when GlobalFoundries acquired IBM Microelectronics. The APPC is reportedly working simultaneously on research and development of manufacturing methods (especially EUV, in which extremely short wavelengths of ultraviolet light are used to pattern silicon) and supporting production of chips at GlobalFoundries' "Malta" fab in New York.
Advanced Patterning and Productivity Center in Albany, NY where Global Foundries, SUNY Poly, IBM Engineers, and other partners are forging a path to 7nm and beyond semiconductor manufacturing. Photo by Lori Van Buren for Times Union.
Intel's Custom Foundry Group will start pumping out ARM chips in early 2017, followed by Intel's own 10nm Cannon Lake processors in 2018, and Samsung will be offering up its own 10nm node as soon as next year. Meanwhile, TSMC has reportedly already taped out 10nm wafers, will begin production in late 2016/early 2017, and claims that it will hit 5nm by 2020. With its rivals all expecting production of 10nm chips as soon as Q1 2017, GlobalFoundries will be at a distinct disadvantage for a few years and will have only its 14nm FinFET (from Samsung) and possibly its own 14nm tech to offer until it gets 7nm production up and running (hopefully!).
Previously, GlobalFoundries has stated that:
“GLOBALFOUNDRIES is committed to an aggressive research roadmap that continually pushes the limits of semiconductor technology. With the recent acquisition of IBM Microelectronics, GLOBALFOUNDRIES has gained direct access to IBM’s continued investment in world-class semiconductor research and has significantly enhanced its ability to develop leading-edge technologies,” said Dr. Gary Patton, CTO and Senior Vice President of R&D at GLOBALFOUNDRIES. “Together with SUNY Poly, the new center will improve our capabilities and position us to advance our process geometries at 7nm and beyond.”
If this news turns out to be correct, it is an interesting move and certainly a gamble. However, I think it is a gamble that GlobalFoundries needs to take to be competitive. I am curious how this will affect AMD, though. While I had expected AMD to stick with 14nm for a while, especially for Zen/CPUs, will this mean that AMD will have to go to TSMC for its future GPUs, or will contract obligations (I believe AMD has a minimum amount it must order from GlobalFoundries) mean that GPUs remain at 14nm until GlobalFoundries can offer its own 7nm? I would guess that Vega will still be 14nm, but Navi in 2018/2019? I guess we will just have to wait and see!
- To 7nm And Beyond (Interview @ Semiconductor Engineering)
- GloFo Looks For 7nm Leadership @ Electronics Weekly
- GlobalFoundries develops 7nm and 10nm technologies in-house @ KitGuru
- SUNY Poly and GLOBALFOUNDRIES Announce New $500M R&D Program in Albany To Accelerate Next Generation Chip Technology @ GlobalFoundries (PR)
- AMD GPU Roadmap: Capsaicin Names Upcoming Architectures @ PC Perspective
- Next Gen Graphics and Process Migration: 20 nm and Beyond @ PC Perspective
Subject: Memory | August 20, 2016 - 01:25 AM | Tim Verry
Tagged: X99, Samsung, ripjaws, overclocking, G.Skill, ddr4, Broadwell-E
Early this week at the Intel Developer Forum in San Francisco, California, G.Skill showed off new low latency DDR4 memory modules for desktops and notebooks. The company launched two Trident series DDR4-3333 kits and one Ripjaws branded DDR4-3333 SO-DIMM. While these speeds are not close to the fastest we have seen from the company, these modules offer much tighter timings. All of the new memory modules use Samsung 8Gb chips and will be available soon.
On the desktop side of things, G.Skill demonstrated a 128GB (8x16GB) DDR4-3333 kit with CAS latencies of 14-14-14-34 running on an Asus ROG Rampage V Edition 10 motherboard with an Intel Core i7 6800K processor. They also showed a 64GB (8x8GB) kit clocked at 3333 MHz with timings of 13-13-13-33 running on a system with the same i7 6800K and an Asus X99 Deluxe II motherboard.
G.Skill demonstrating 128GB DDR4-3333 memory kit at IDF 2016.
In addition to the desktop DIMMs, G.Skill showed a 32GB Ripjaws kit (2x16GB) clocked at 3333 MHz running on an Intel Skull Canyon NUC. The SO-DIMM had timings of 16-18-18-43 and ran at 1.35V.
Nowadays lower latency is not quite as important as it once was, but there is still a slight performance advantage to be had from tighter timings, and pure clockspeed is not the only important RAM metric. Overclocking can get you lower CAS latencies (sometimes at the cost of more voltage), but if you are not into that tedious process and are buying RAM anyway, you might as well go for the modules with the lowest latencies out of the box at the clockspeeds you are looking for. To be honest, though, I am not sure how popular RAM overclocking is these days outside of benchmark runs and extreme overclockers.
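To put "tighter timings" in absolute terms: first-word CAS latency in nanoseconds is the CL figure counted in I/O clock cycles, and the I/O clock runs at half the transfer rate. By that math, the kits G.Skill showed all land under 10ns.

```python
# First-word CAS latency in nanoseconds for a DDR module.
# The I/O clock runs at half the transfer rate, so one cycle
# lasts 2000 / (transfer rate in MT/s) nanoseconds.
def cas_ns(cl, transfer_rate_mt_s):
    return cl * 2000 / transfer_rate_mt_s

trident_cl14 = cas_ns(14, 3333)  # ~8.4 ns for the DDR4-3333 CL14 kit
trident_cl13 = cas_ns(13, 3333)  # ~7.8 ns for the CL13 kit
ripjaws_cl16 = cas_ns(16, 3333)  # ~9.6 ns for the CL16 SO-DIMM
```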
Overclocking Innovation session at IDF 2016.
With regards to extreme overclocking, there was reportedly an "Overclocking Innovation" event at IDF where Elmor, an overclocker working with G.Skill and Asus, achieved a new CPU overclocking record of 5,731.78 MHz on the i7 6950X using a system with G.Skill memory and an Asus motherboard. The company's DDR4 frequency record of 5,189.2 MHz was not beaten at the event, G.Skill notes in its press release (heh).
Are RAM timings important to you when looking for memory? What are your thoughts on the ever increasing clocks of new DDR4 kits with how overclocking works on the newer processors/motherboards?
Subject: Cases and Cooling | August 19, 2016 - 04:15 PM | Sebastian Peak
Tagged: Lian Li, SFX, SFX-L, power supply, PSU, small form-factor, 550W, 750w, PE-550, PE-750
Lian Li has now released the pair of SFX-L power supplies for small form-factor systems that it announced back in April: the PE-550 and PE-750. Both are fully-modular designs with flat, ribbon-style cables, and they carry 80 Plus Gold and Platinum certifications, respectively.
The PE-750 power supply
"The PE-550 is 80Plus Gold-rated for a maximum 89.5% efficiency; the PE-750 is 80Plus Platinum-rated for a maximum 92% efficiency. Both use a near-silent 120mm smart fan and minimize noise by operating fanlessly when output power is below 30%. Both PSUs use a single 12V rail design for the best possible stability under heavy system load, matched with myriad protection features to ensure reliable operation."
The PE-550 power supply
For more information and full specs, the product page for the 550W PE-550 is here, and the 750W PE-750 page is here. The PE-550 and PE-750 retail for $115 and $169 respectively, and both are available now.
Subject: Mobile | August 19, 2016 - 03:46 PM | Sebastian Peak
Tagged: UHD, ROG, Republic of Gamers, notebook, laptop, GX800, GTX 1080, gaming, g-sync, asus, 4k
ASUS has announced perhaps the most impressively equipped gaming laptop to date. Not only does the new ROG GX800 offer dual NVIDIA GeForce GTX 1080 graphics in SLI, but these cards are powering an 18-inch 4K display panel with NVIDIA G-SYNC.
Not enough? The system also offers liquid cooling (via the liquid-cooling dock) which allows for overclocking of the CPU, graphics, and memory.
"ROG GX800 is the world’s first 18-inch real 4K UHD gaming laptop to feature the latest NVIDIA GeForce GTX 1080 in 2-Way SLI. It gives gamers desktop-like gaming performance, silky-smooth gameplay and detailed 4K UHD gaming environments. The liquid-cooled ROG GX800 features the ROG-exclusive Hydro Overclocking System that allows for extreme overclocking of the processor, graphics card and DRAM. In 3DMark 11 and Fire Strike Ultra benchmark tests, a ROG GX800 equipped with the Hydro Overclocking System scored 76% higher than other gaming laptops in the market.
ROG GX800 comes with NVIDIA G-SYNC technology and has plug-and-play compatibility with leading VR headsets to allow gamers to enjoy truly immersive VR environments. It has the MechTAG (Mechanical Tactile Advanced Gaming) keyboard with mechanical switches and customizable RGB LED backlighting for each key."
Specifics on availability and pricing were not included in the announcement.
Subject: Mobile | August 19, 2016 - 03:15 PM | Jeremy Hellstrom
Tagged: asus, ROG, gtx 1070, G752VS OC Edition, pascal, gaming laptop
The mobile version of the GTX 1070, referred to here as the GTX 1070M even if NVIDIA doesn't use that name, is an interesting part sporting 128 more cores than the desktop version, albeit at a lower clock. Hardware Canucks received the ASUS ROG G752VS OC Edition gaming laptop, which uses the mobile GTX 1070 overclocked by 50MHz on the core and by 150MHz on its 8GB of RAM, along with an i7-6820 running at 3.8GHz. This particular model will set you back $3000 US and offers very impressive performance on either its 17.3" 1080p G-SYNC display or an external display of your choice. The difference in performance between the new GTX 1070(M) and the previous GTX 980M is marked; check out the full review to see just how much better this card is ... assuming the price tag doesn't immediately turn you off.
"The inevitable has finally happened: NVIDIA's Pascal architecture has made its way into gaming notebooks....and it is spectacular. In this review we take a GTX 1070-totting laptop out for a spin. "
Here are some more Mobile articles from around the web:
More Mobile Articles
- ASUS ROG Strix GL702VT 17.3in Gaming Notebook @ Kitguru
- Asus ZenBook Flip UX360CA @ Kitguru
- Wise Pad W7 Windows 10 Phablet @ TechARP
Subject: General Tech | August 19, 2016 - 01:06 PM | Jeremy Hellstrom
Tagged: yuy2, windows 10, skype, microsoft, idiots
In their infinite wisdom, Microsoft has disabled MJPEG and H.264 encoding on USB webcams for Skype in their Adversary Update to Windows 10, leaving YUY2 as your only encoding choice. The supposed reasoning behind this is to ensure that there is no duplication of encoding, which could lead to poor performance; ironically, the result of this change is poor performance for the majority of users, such as Josh. Supposedly a fix will be released some time in September, but for now the only option is to roll back your AU installation, assuming you are not already past the 10-day deadline. You can thank Brad Sams over at Thurrott.com for getting to the bottom of the issue that has been plaguing Skype users, and you can pick up some more details in his post.
"Microsoft made a significant change with the release of Windows 10 and support for webcams that is causing serious problems for not only consumers but also the enterprise. The problem is that after installing the update, Windows no longer allows USB webcams to use MJPEG or H264 encoded streams and is only allowing YUY2 encoding."
Here is some more Tech News from around the web:
- Blackberry and Windows phones hold just 7 percent of smartphone market combined @ The Inquirer
- Arduino + Software Defined Radio = Millions of Vulnerable Volkswagens @ Hack a Day
- Cisco joins Microsoft and flings out Skype-friendly collab app @ The Register
- Fujifilm X-T2 Mirrorless Camera Hands-On Preview @ TechARP
- instax SHARE SP-2 Mobile Printer Hands-On Preview @ TechARP
- Linksys LGS318P 18-Port Smart PoE+ Gigabit Switch Review @ NikKTech
- MSI Pro-Modding competition and GT73 VR at Gamescom 2016 @ Kitguru