Subject: Graphics Cards | October 5, 2016 - 08:37 PM | Scott Michaud
Tagged: graphics drivers, amd
Earlier today, AMD released their Radeon Software Crimson Edition 16.10.1 drivers. These continue AMD's trend of releasing drivers alongside major titles, which, this time, are Mafia III (October 7th) and Gears of War 4 (October 11th). Both titles are still several days out, apart from a handful of insiders with advance copies, which is nice for gamers: they can optimize their machines ahead of time, on their own schedule, before launch.
The driver also includes a handful of interesting fixes. First, several games, such as Overwatch, Battlefield 1, and Paragon, should no longer flicker when set to CrossFire mode. Also, performance issues in The Crew should be fixed with this release.
You can download AMD Radeon Software Crimson Edition 16.10.1 from their website.
Subject: Storage | October 5, 2016 - 07:57 PM | Scott Michaud
Tagged: ssd, mozilla, google, firefox, endurance, chrome
A couple of weeks ago, I saw a post pop up on Twitter a few times about Firefox performing excessive writes to SSDs, totaling up to 32GB in a single day. The author attributes it mostly to a fast-updating session restore feature, although cookies were also resource hogs in their findings. In an update, they also tested Google Chrome, which, itself, clocked in at over 24GB of writes in a day.
This, of course, seemed weird to me. I would have thought that at least one browser vendor would notice an issue like this. Still, I passed the link to Allyn because he would be much better equipped to replicate these results. In our internal chat at the time, he was less skeptical than I was. I've since followed up with him, and he said that his initial results “wasn't nearly as bad as their case”. He'll apparently elaborate on tonight's podcast, and I'll update this post with his findings.
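If you want to sanity-check your own machine while we wait, per-process disk-write counters are easy to poll. Here is a minimal sketch using Python's psutil package on Linux or Windows (io_counters() is not available on macOS); the process name and one-hour window are assumptions for illustration, and counters reset if the browser restarts:

```python
import time
import psutil

BROWSER = "firefox"  # assumed process name; change to "chrome", etc.

def browser_write_bytes(name):
    """Sum cumulative disk writes across all matching processes."""
    total = 0
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] and name in proc.info["name"].lower():
                total += proc.io_counters().write_bytes
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return total

before = browser_write_bytes(BROWSER)
time.sleep(3600)  # sample over one hour
written = browser_write_bytes(BROWSER) - before
print(f"~{written / 1024**3:.2f} GiB written in the last hour")
```

Multiply an hourly figure like that by a day of uptime and you can see whether your own usage lands anywhere near the reported 24-32GB.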
Subject: Motherboards | October 5, 2016 - 06:50 PM | Jeremy Hellstrom
Tagged: LGA 1151, kaby lake, asus
ASUS is releasing UEFI updates for 87 LGA 1151 motherboard models which will add support for Intel's Kaby Lake processors. They have a table listing all models and the UEFI versions you should update to in order to get full support for the new processors. If you are wondering about picking up one of these motherboards during the initial release of Kaby Lake, ASUS has tested and verified that their USB BIOS Flashback tool will let you update your UEFI even if the board does not want to boot with a Kaby Lake processor installed.
They have spent the last few months with samples of Kaby Lake chips, testing for compatibility and enhancing the features the motherboards can take advantage of, to ensure you get the most out of your shiny new CPU. Regardless of whether you use a Z170, H110, or even a C232 chipset, your motherboard will be compatible. Get out your USB drives and download the new versions to flash, or use EZ Flash 3's Internet option to grab the latest version right from ASUS.
Subject: Mobile | October 5, 2016 - 06:26 PM | Scott Michaud
Tagged: Samsung, galaxy note 7
Last week, we passed along a Bloomberg report about a Galaxy Note 7 that caught fire in China. It was allegedly a replacement device from Samsung's recall, which was supposed to fix this issue. We have not heard anything about this phone since, but, at the time, we suggested keeping your replacement device powered off and disconnected from the charger until we receive further info.
Now a second, allegedly post-recall device has caught fire. This time, it occurred this morning on a plane. The Boeing 737 was about ten minutes from take-off when the passenger, who claims the phone was both shut down and in his pocket, noticed the device begin to smoke. He tossed it onto the floor when it began to billow a thick, gray-green smoke, which burned through the carpet. He claims that it had the green battery icon indicating a fixed device, which should rule out a pre-recall Note 7 being incorrectly classified as post-recall by, for instance, a retail store goof.
All of that said, we don't know if either of the two cases is accurate yet. Samsung released a statement on today's incident, which we include below via The Verge, that basically says no comment until they can perform their own investigation.
- Until we are able to retrieve the device, we cannot confirm that this incident involves the new Note7. We are working with the authorities and Southwest now to recover the device and confirm the cause. Once we have examined the device we will have more information to share.
Obviously, we could speculate over a number of things that could be to blame. Part of the issue is just physics -- you're storing a lot of energy in a small volume. This is inherently difficult, and a rapid release of a lot of energy tends to be explosive. It's always good to remember this, even though it's the company's responsibility to produce devices that are safe from all but the most unreasonable of uses.
Subject: General Tech | October 5, 2016 - 12:43 PM | Jeremy Hellstrom
Tagged: security, hack, iot
The good news about this hack is that you would need good timing and physical proximity to the wireless remote which instructs the pump to administer insulin; the bad news is that this is all that is needed, and it could result in the death or hospitalization of the target. The vulnerability stems from the usual problem: the transmission between the remote and pump is done in the clear, letting anyone who is listening retrieve serial numbers and codes. With that information you can then trigger a dose to be delivered, or quite feasibly change the default dosage the pump delivers, as was done previously with a different model.
IoT security as it applies to fridges and toasters is one thing; medical devices are quite another. News of unauthorized access to pacemakers and other drug delivery systems which could result in death is not uncommon, yet companies continue to produce insecure systems. Adding even simple encryption and authentication to transmissions, as well as firmware-based limits on dosage size, should be trivial after the release of a product and even easier before it is released; it takes surprisingly little, as the sketch after the quote below illustrates. Keep this in mind when you are seeking medical care; choosing devices which are less likely to kill you because of shoddy security makes sense. You can pop by Slashdot for links to some stories, or wade into the comments if you so desire.
"Johnson and Johnson has revealed that its JJ Animas OneTouch Ping insulin pump is vulnerable to hackers, who could potentially force the device to overdose diabetic patients -- however, it declares that the risk of this happening is very low."
Here is some more Tech News from around the web:
- Let's not meet up with JPEG 2000 – researchers find security hole in image codec @ The Register
- Apple's Use Of 'Sapphire' in iPhone Camera Lens Questioned in New Tests @ Slashdot
- DRAM contract prices to rise nearly 30% in 4Q16, says DRAMeXchange @ DigiTimes
- Win Loot with the Enlightened Raspberry Pi Contest @ Hack a Day
- Lenovo exec: Nope, not building Windows Phones @ The Register
- KNOXout: Samsung Knox vulnerabilities give hackers 'full control' of devices @ The Inquirer
Subject: General Tech | October 4, 2016 - 11:47 PM | Tim Verry
Tagged: usb-c, Snapdragon 821, pixel, Kryo, google, android assistant, adreno 530, 802.11ac
Google introduced its own premium smartphone today in the form of the Pixel and Pixel XL. Running Android 7.1 Nougat, the Pixel smartphones will not only run the latest operating system but will also deliver the new premium experience, with the best Android features including Google Assistant and Smart Storage with unlimited cloud storage of photos and videos.
Google is definitely taking a greater interest in promoting Pixel than it has with even its Nexus devices. It will be interesting to see how other Android manufacturers react to this news, but I would imagine that they are not all that pleased, and that Google will be in a similar position to Microsoft with its Surface products and NVIDIA with its Founders Edition graphics cards.
Google's Pixel lineup includes the Pixel (5.6 x 2.7 x 0.2-0.3") and the Pixel XL (6 x 2.9 x 0.2-0.34"), which wrap their respective 5-inch 1080p (441 PPI) and 5.5-inch 1440p (534 PPI) displays in a full aluminum-and-glass unibody design that will come in one of three colors: Very Black, Quite Silver, and Really Blue. The smartphones feature curved corners and rounded edges with Corning Gorilla Glass 4 on the front and half of the back. Google has put a fingerprint sensor on the back of the phone, and the phones also include power and volume buttons, three microphones, a USB-C port, and, yes, a 3.5mm audio jack.
There are both front and rear cameras, and Google is claiming that the rear camera in particular is the best smartphone camera yet (with a DxOMark score of 89 points). The rear camera (which sits flush with the back of the phone) is rated at 12.3 MP with an f/2.0 aperture and 1.55µm pixels. The camera further features an IMX378 sensor, electronic image stabilization, and both phase-detection and laser autofocus. The Pixel can take HDR+ photos and videos at up to 4K30, 1080p120, or 720p240. Users can adjust white balance and use automatic exposure or autofocus locking. The front camera is less impressive at 8MP with a fixed-focus lens and an f/2.4 aperture.
Internally, Google has opted to use the Qualcomm Snapdragon 821 (MSM8996) which is a 2+2 design that pairs two Kryo cores at 2.15 GHz with two Kryo cores at 1.6 GHz along with an Adreno 530 GPU, an impressive 4GB of LPDDR4 memory, and either 32GB or 128GB of internal storage which is regrettably non-expandable. The smartphones can tap into up to Category 11 LTE (Cat 9 in the US), 802.11ac Wi-Fi, Bluetooth 4.2, and NFC. Sensors include GPS, proximity, accelerometer, gyroscope, magnetometer, barometer, and hall sensors.
The Pixel features a 2,770 mAh battery and the Pixel XL uses a slightly larger 3,450 mAh battery. Google rates the Pixel and Pixel XL at 13 hours and 14 hours of internet browsing and video playback, respectively. Further, the batteries can be quick-charged enough for up to "seven hours of use" after just 15 minutes of charging using the included 18W USB-C charger.
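As a rough sanity check on that quick-charge claim, the back-of-the-envelope math works out; the nominal cell voltage and charging efficiency below are our assumptions, not Google's figures:

```python
# 15 minutes from the 18W charger, assuming ~85% charging efficiency
energy_wh = 18 * 0.25 * 0.85           # ~3.8 Wh delivered into the cell
charge_mah = energy_wh / 3.85 * 1000   # ~990 mAh at a 3.85 V nominal cell
print(f"~{charge_mah:.0f} mAh, {charge_mah / 2770:.0%} of the 2,770 mAh pack")
```

That is roughly a third of the smaller Pixel's pack in 15 minutes, so the "seven hours of use" figure presumably assumes a lighter mixed-use drain than the 13-hour browsing rating.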
Pricing works out to $649 for the 32GB Pixel, $749 for the 128GB Pixel, $769 for the 32GB Pixel XL, and $869 for the 128GB Pixel XL. In the US Google has partnered with Verizon for brick-and-mortar availability in addition to it being available on the Google store and other online retailers.
Google is banking a lot on these devices and asking a very premium price for the unlocked phones. It is certainly a gamble whether users will find the unique features compelling enough to go with the Pixel over other flagships. What do you think about Google's increased interest in the smartphone space with the launch of its own hardware? How well will Pixel fit into the existing environment – will Pixel lead Android hardware and the OS to success or simply fragment it more?
I do like the look of the Pixel (especially the blue one), and the feature list sounds good enough that maybe I could live with a non-removable battery and non-expandable storage (I'll be holding onto my old T-Mobile unlimited plan for as long as possible! heh). Pricing is a bit steep though, and I think that will trip a lot of people up when searching for their next device.
Subject: General Tech | October 4, 2016 - 05:45 PM | Jeremy Hellstrom
Tagged: powerlink, pascal, evga, deals
EVGA sent along a newsletter which is worth mentioning, as there are a few good deals to be had, even if you have already picked up one of their cards. Anyone who recently bought a Pascal-based EVGA card, or is planning to in the near future, can get up to four EVGA PowerLink cable management ... thingies. It plugs into your PCIe power connectors and wraps around the top of your GPU, allowing you to power your card without exposing those wires and connectors; great for modders or those who prefer a clean-looking build. You do need to create an EVGA account and register your card, so do keep that in mind.
The PCIe power connectors on the PowerLink are adjustable, so no matter which card you purchased you will be able to use the adapter. There are capacitors inside which are intended to help ensure smooth power delivery, so this is not simply an extension cord. They also have some deals on previous-generation NVIDIA cards, as well as their TORQ mice.
There is also a rather unique deal for those who game on the go as well as at home. As it says below, every purchase of an EVGA SC17 980m laptop (758-21-2633-T1) comes with a free GTX 1070 FTW as long as supplies last.
Subject: General Tech | October 4, 2016 - 04:28 PM | Tim Verry
Tagged: google, chromecast, media streaming, 4k, hdr, google home
During Google's #madebygoogle event (embedded below), the company introduced a number of new pieces of hardware, including a new Chromecast. The Chromecast Ultra is aimed at owners of 4K televisions and supports both 4K Ultra HD and HDR content from the likes of Netflix, YouTube, and other apps. Like previous models, the Chromecast takes input from Android, iOS, OS X, and Windows devices that "cast" media to the TV. Additionally, it can be paired with Google Home, where users can use voice commands such as "Ok, Google. Play the sneezing panda video on my TV."
The Chromecast Ultra is a small circular puck with a Micro USB port and a short, flexible, flat HDMI cable that is permanently attached to the device. The Micro USB port is used for both power and data. One neat feature of the new Chromecast Ultra is that the power adapter has an Ethernet port on it, so users can hook the streaming device up to their wired network for better performance (important for streaming 4K content). Not to worry if you rely on Wi-Fi, though, because it does support dual-band 802.11ac.
Google has not yet revealed what hardware is under the hood of its new 4K-capable Chromecast, unfortunately. They did release pricing information, though: the Chromecast Ultra will be $69 and is "coming soon". If you are interested, you can sign up to be notified when it becomes available.
Subject: General Tech | October 4, 2016 - 04:09 PM | Tim Verry
Tagged: media streaming, fire tv, amazon
Later this month Amazon will be releasing a new Fire TV Stick with upgraded internals and Alexa voice controls. The refreshed media streamer features a 1.3 GHz MediaTek MT8127 SoC with four ARM Cortex-A7 cores and a Mali 450 GPU, 1GB of RAM, 8GB of internal storage (mainly for apps, and not expandable), and support for newer 802.11ac (dual band, dual antenna) Wi-Fi and Bluetooth 4.1 wireless technologies.
While that particular SoC is ancient by smartphone standards, it is a decent step up from its predecessor's dual 1GHz ARM A9 cores and VideoCore 4 GPU. It supports H.265/HEVC decode along with 1080p60 output. The inclusion of 802.11ac Wi-Fi should help the streaming device do its job effectively even in areas littered with Wi-Fi networks (like apartment buildings or townhomes).
The big change from the old Fire TV Stick is the integration of Alexa Voice control and a new remote control with microphone input. Using voice input, users can control media playback, open apps, search for content, and even order pizza. There is no 4K support or expandable storage here (for that you would have to move to the $99 Fire TV) but it is less than half the price.
The refreshed Fire TV Stick will be available on Amazon for $39.99 on October 20th. Pricing along with the additional voice input makes it a competitive option versus Roku's streaming stick and Google's Chromecast.
- Amazon Takes On Apple TV, Roku, and Ouya With $99 Fire TV Streaming Box
- Amazon Echo Overview (video) @ PC Perspective
Subject: Cases and Cooling | October 4, 2016 - 04:02 PM | Jeremy Hellstrom
Tagged: enermax, ETS-T50 Axe, RGB
The new heatsink from Enermax has, among other things, "Circular-type LEDs" to let you make your cooler glow in a variety of colours. It is a bit smaller than the big towers, at 135 x 65 x 160mm and 860g, and the performance suffers a bit as a result. The Axe is far better than a stock cooler but does not outperform the competition, and is further hurt by its premium price. On the other hand, there are those who will pay extra for a light show; check out the review at [H]ard|OCP if you are one.
"Enermax keeps up its onslaught of CPU air cooler designs today with a tower cooler that uses five direct touch heatpipes to move all those BTUs. It has LED lighted fans along with a stealthy black exterior. Interestingly, Enermax has included a ducting system on the back in order to hopefully help better exhaust all that hot air from your CPU."
Here are some more Cases & Cooling reviews from around the web:
- Cryorig C7 Low-Profile CPU Cooler @ eTeknix
- be quiet! Silent Loop 240mm Liquid CPU Cooler @ Kitguru
- be quiet! Silent Loop 280mm AIO @ eTeknix
- Deepcool Captain 120 EX AIO @ Kitguru
- EKWB EK-XLC Predator 280 @ techPowerUp
- BitFenix Aurora Case @ Kitguru
- Cooler Master's MasterBox 5 and Zalman's Z9 Neo cases @ The Tech Report
- Raijintek Aeneas @ techPowerUp
- Thermaltake Core X71 Full Tower Review @ NikKTech
Subject: General Tech | October 4, 2016 - 02:29 PM | Jeremy Hellstrom
Tagged: microsoft, server 2016
Ars Technica has put together an overview of the new Windows Server: three pages which broadly cover the new features you will find. As has often been discussed, there will be three ways of installing the new Server OS: the familiar Desktop Experience, as well as Core and Nano. Nano is similar to the Core installation which we saw introduced in Server 2012, but further reduces the interface and attack surface by removing the last remnants of the GUI, support for 32-bit apps, and the Microsoft Installer; all you get is a basic control console. The Core and Desktop versions remain much the same as in the 2012 version.
If you are curious about the inclusion of Docker features such as Linux-like containers, or the changes to Hyper-V and deployment techniques, drop by for a read.
"Like a special breed of kaiju, Microsoft's server platform keeps on mutating, incorporating the DNA of its competitors in sometimes strange ways. All the while, Microsoft's offering has constantly grown in its scope, creating variants of itself in the process."
Here is some more Tech News from around the web:
- Memristor behaves like a synapse @ Nanotechweb
- Google Fiber Is Now a Fiber and Wireless ISP @ Slashdot
- A year living with the Nexus 5X – the good, the bad, and the Nougat @ The Register
- Researchers Develop System To Send Passwords, Keys Through Users' Bodies @ Slashdot
- Apple takes tips from Microsoft as macOS Sierra becomes an automatic download @ The Inquirer
- Microsoft Azure sets up shop in France @ The Register
Subject: Storage | October 4, 2016 - 08:30 AM | Allyn Malventano
Tagged: usb 3.0, Type-C, Type-A, hdd, External, Drobo 5C, drobo, DAS, 5-bay
We looked at the third-gen 4-bay Drobo over a year back, and while the performance and price were great, it was held back by its limited number of drive bays. Drobo fixed that today:
The new Drobo 5C is basically an evolution of the 4-bay model. Performance is similar, which justifies the choice to stick with USB 3.0 (5 Gbit), but we now have a Type-C port on the Drobo side (a Type-C to Type-A cable is included to cover most potential users). The added bay helps users increase potential capacity or alternatively select BeyondRAID's Dual Drive Redundancy mode without as much of an ultimate capacity hit compared to its 4-bay predecessor.
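To put numbers on that capacity hit, here is a quick sketch of usable space under single and dual redundancy. This assumes equal-sized drives, where BeyondRAID behaves roughly like RAID-5/RAID-6; mixed drive sizes change the math:

```python
def usable_tb(bays: int, drive_tb: float, redundancy: int) -> float:
    """Rough usable capacity: total minus one drive per level of redundancy."""
    return (bays - redundancy) * drive_tb

for bays in (4, 5):
    single = usable_tb(bays, 4.0, 1)  # assuming 4TB drives
    dual = usable_tb(bays, 4.0, 2)
    print(f"{bays}-bay: {single:.0f}TB single, {dual:.0f}TB dual "
          f"({dual / single:.0%} of single-redundancy capacity)")
```

With 4TB drives, dual redundancy costs the 4-bay a third of its usable space but the 5-bay only a quarter, which is exactly the advantage Drobo is pitching.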
The Drobo 5C supersedes the old 4-bay unit in their lineup.
The new Drobo 5C is available today for $349, with drive package deals offered direct from Drobo. Drobo is also offering a limited-time $50 discount to 2nd and 3rd gen 4-bay Drobo owners (valid until 11 Oct 2016). I have confirmed here that a disk pack from a 4-bay model can be moved directly to the new 5-bay model with no issue.
We have a full review of the Drobo 5C coming, but we have a few questions out to them that need answering before our article goes live.
Subject: Storage | October 3, 2016 - 05:03 PM | Jeremy Hellstrom
Tagged: kingston, ssdnow KC400, Phison PS3110-S10, mlc, sata ssd
Kitguru has another Phison PS3110-S10-based SSD up for review: the Kingston SSDNow KC400 512GB SATA SSD. This drive is heavily packaged compared to others, with sixteen 32GB 15nm MLC NAND packages and 256MB of DDR3L-1600 paired with the eight-channel controller. The drive is marketed at businesses, and with an 800TB lifetime write rating (450GB of writes every day for the five-year warranty) as well as SmartECC and SmartRefresh, it fits that bill. Consumers and businesses alike will appreciate the sequential read/write performance of 550MB/s and 530MB/s. Overall it is another drive that fits into the existing pack and is worth your consideration, especially if you have need of its error correction features. Read the full review for more information.
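Those two endurance figures line up with each other, as a quick check shows:

```python
daily_gb = 450  # rated daily writes
years = 5       # warranty period
lifetime_tb = daily_gb * 365 * years / 1000
print(f"{lifetime_tb:.0f} TB over the warranty")  # ~821 TB, rounded down to 800TBW
```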
"Kingston’s SSDNow KC400 family is part of the company’s business-oriented SSD product line which features end-to-end data path protection, technologies to protect data in the NAND and guard against read errors, as well as good endurance."
Here are some more Storage reviews from around the web:
- Crucial MX300 2TB @ eTeknix
- Plextor M8PeG 256GB M.2 NVMe @ eTeknix
- QNAP TS-451A-4G 4-bay NAS @ Kitguru
- Drobo 5N 5-Bay NAS @ eTeknix
- LaCie Porsche Design Mobile Drive 2TB USB 3.0 Review @ NikKTech
Subject: General Tech | October 3, 2016 - 03:17 PM | Jeremy Hellstrom
Tagged: kingston hyper x, kingston, gaming headset, Cloud Stinger, audio
Kingston has updated its line of gaming headsets with the new HyperX Cloud Stinger, available already for ~$50. That price makes them attractive for those who do not often use a gaming headset but might want one around just in case. Don't let the low price make you underestimate the design: Kingston used 50mm drivers, and the microphone mutes itself the moment you swing it away from your voice hole. That said, Overclockers Club were not in love with the quality of the sound compared to expensive headphones, but at this price point they have no qualms about recommending these for casual use.
"Overall, I'm quite impressed with the HyperX Cloud Stinger Gaming Headset. A mouth full just to say that – but after disliking the HyperX Cloud Revolver as much as I did – I'm actually quite happy with this drop in price and slight redesign. With closed ear cups I would have expected a little more in the bass-land, it wasn't the end of the world. The overall sound is nice and flat, and movies, music, and games are all quite tolerable in the closed environment."
Here is some more Tech News from around the web:
- Kingston HyperX Cloud Stinger @ Modders Inc
- HyperX Cloud Revolver Gaming Headset @ Custom PC Review
- Jabra ECLIPSE Wireless Headset Review @ NikKTech
- Sennheiser PC 373D 7.1 Dolby Surround Sound Gaming Headset @ Kitguru
- OZONE TriFX In-Ear Pro Gaming Headset Review @ NikKTech
Subject: General Tech | October 3, 2016 - 01:27 PM | Jeremy Hellstrom
Tagged: Windows 7, windows 10, microsoft, market share
A change of one percent may seem tiny at first glance, but historically it is an incredibly large shift in market share for an operating system. Unfortunately for Microsoft, it is Windows 7 which has gained share, up to 48.27% of the market, with Windows 10 dropping half a point to 22.53% while the various flavours of Windows 8 sit at 9.61%. This makes it almost impossible for Microsoft to reach their goal of one billion machines running Windows 10 in the two years after release, and spells bad news for their income from consumers.
Enterprise has barely touched the new OS for a wide variety of reasons, though companies still provide significant income thanks to corporate licenses for Microsoft products and older operating systems. It will be very interesting to see how Microsoft reacts to this information, especially if the trend continues. The usage data matches many of the comments we have seen here; the changes which they made were not well received by their customer base, and the justifications they've used in the design of the new OS are not holding water. It shouldn't be long before we hear more out of Redmond; in the meantime you can pop over to The Inquirer to see Net Applications' data if you so desire.
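For a sense of scale, even a single point of desktop share is an enormous number of machines; the installed-base figure below is our rough assumption, not Net Applications' number:

```python
installed_base = 1.5e9  # rough assumption for the worldwide desktop PC base
swing = 0.0102          # Windows 7's +1.02-point gain last month
print(f"~{installed_base * swing / 1e6:.0f} million machines")  # ~15 million
```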
"The latest figures from Net Applications’ Netmarketshare service show Windows 7, now over seven years old, gain a full percentage point to bolster its place as the world’s most popular desktop operating system with 48.27 per cent (+1.02 on last month)."
Here is some more Tech News from around the web:
- HUDWAY Glass Head-Up Display Review @ NikKTech
- AMD prepares Zen for CES 2017 launch; aggressively clearing inventory for platform transition @ DigiTimes
- How to steal the mind of an AI: Machine-learning models vulnerable to reverse engineering @ The Register
- Linus Torvalds Officially Announces the Release of Linux Kernel 4.8 @ Slashdot
- Security analyst says Yahoo!, Dropbox, LinkedIn, Tumblr all popped by same gang @ The Register
- Source code for 'record-breaking' Mirai IoT botnet released online @ The Inquirer
- iPhone 7 Finishes Last In New Test of Battery Life @ Slashdot
Subject: Graphics Cards | October 2, 2016 - 12:12 PM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, GTX 1050 Ti, graphics card, gpu, GP107, geforce
A report published by VideoCardz.com (via Baidu) contains pictures of an alleged NVIDIA GeForce GTX 1050 Ti graphics card, which is apparently based on a new Pascal GP107 GPU.
Image credit: VideoCardz
The card shown is also equipped with 4GB of GDDR5 memory, and contains a 6-pin power connector - though such a power requirement might be specific to this particular version of the upcoming GPU.
Image credit: VideoCardz
Specifications for the GTX 1050 Ti were previously reported by VideoCardz, with a reported GPU-Z screenshot. The card will apparently feature 768 CUDA cores and a 128-bit memory bus, with clock speeds (for this particular sample) of 1291 MHz base, 1392 MHz boost (with some room to overclock, from this screenshot).
Image credit: VideoCardz
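If those leaked numbers hold, the theoretical single-precision throughput is simple arithmetic (standard FMA accounting; nothing here comes from NVIDIA):

```python
cuda_cores = 768   # from the reported GPU-Z screenshot
boost_mhz = 1392
# Each CUDA core retires one fused multiply-add (2 FLOPs) per clock
tflops = cuda_cores * 2 * boost_mhz * 1e6 / 1e12
print(f"~{tflops:.2f} TFLOPS FP32 at boost")  # ~2.14 TFLOPS
```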
An official announcement for the new GPU has not been made by NVIDIA, though if these PCB photos are real it probably won't be far off.
Subject: Motherboards | October 1, 2016 - 11:20 PM | Tim Verry
Tagged: Zen, micro ATX, Excavator, Bristol Ridge, b350, amd, AM4
Thanks to a recent leak over at Bodnara.co.kr (which has since been taken down), pictures emerged online that give a first look at an AMD socket AM4 motherboard using the mid-range B350 chipset. The Gigabyte B350M-DS3H is a Micro ATX motherboard supporting Bristol Ridge processors at launch and Zen-based processors next year.
The mid-range AM4 board has a very simple layout that leaves little mystery. There are no large heatsinks and no northbridge, thanks to AMD moving most of the connectivity to the SoC itself. In fact, there is only a small passively cooled chip in the bottom right corner (the B350 chipset); between the SoC and the chipset, the platform can offer up PCI-E 3.0, SATA 6Gbps, USB 3.1, USB 3.0, NVMe SSD, and DDR4 memory support. This post outlines how the duties are split between the processor and southbridge.
The B350M-DS3H is powered by 24-pin ATX and 8-pin EPS connectors, and Gigabyte is using a seven-phase VRM to power the processor and memory. The board hosts a 1331-pin AM4 socket up top with four DDR4 slots to the right. The CMOS battery is placed just above the PCI-E slots, in a position that Morry would be proud of (so long as your CPU cooler is not too massive). Below that are two PCI-E 3.0 x16 slots (electrically x16/x4 or x8/x8), a single PCI-E 3.0 x1 slot, and an NVMe M.2 (PCI-E) slot. The bottom right corner of the board hosts six SATA 6 Gbps ports.
Rear I/O on the AMD motherboard includes:
- 2 x USB 2.0
- 1 x PS/2
- 3 x Video Outputs
- 1 x VGA
- 1 x DVI
- 1 x HDMI
- 4 x USB 3.0
- 2 x USB 3.1
- 1 x Gigabit Ethernet
- 3 x Audio Jacks
Several websites are reporting that AMD will be opening the floodgates to socket AM4 motherboards using the A320 and B350 chipsets in October (it is saving the launch of the enthusiast X370 chipset for next year alongside Summit Ridge). I have to say that it is nice to see an AMD motherboard with updated I/O, which is a welcome change from the ancient 990X AM3+ platform and even the FM2+ motherboards, which were newer but still not as full-featured as the competition.
- AMD Officially Launches Bristol Ridge Processors And Zen-Ready AM4 Platform
- Report: AMD Socket AM4 Compatible with Existing AM2/AM3 Coolers
- AMD Zen Architecture and Performance Preview
- AMD Introduces 7th Generation APUs: Bristol Ridge Takes Center Stage
Subject: Processors | October 1, 2016 - 06:11 PM | Tim Verry
Tagged: xavier, Volta, tegra, SoC, nvidia, machine learning, gpu, drive px 2, deep neural network, deep learning
Earlier this week at its first GTC Europe event in Amsterdam, NVIDIA CEO Jen-Hsun Huang teased a new SoC code-named Xavier that will be used in self-driving cars and feature the company's newest custom ARM CPU cores and Volta GPU. The new chip will begin sampling at the end of 2017 with product releases using the future Tegra (if they keep that name) processor as soon as 2018.
NVIDIA's Xavier is promised to be the successor to the company's Drive PX 2 system, which uses two Tegra X2 SoCs and two discrete Pascal MXM GPUs on a single water-cooled platform. The claims are even more impressive when you consider that NVIDIA is promising not only to replace those four processors with a single chip, but to reportedly do so at 20W – less than a tenth of the Drive PX 2's TDP!
The company has not revealed all the nitty-gritty details, but they did tease out a few bits of information. The new processor will feature 7 billion transistors and will be based on a refined 16nm FinFET process while consuming a mere 20W. It can process two 8k HDR video streams and can hit 20 TOPS (NVIDIA's own rating for deep learning int(8) operations).
Specifically, NVIDIA claims that the Xavier SoC will use eight custom ARMv8 (64-bit) CPU cores (it is unclear whether these cores will be a refined Denver architecture or something else) and a GPU based on its upcoming Volta architecture with 512 CUDA cores. Also, in an interesting twist, NVIDIA is including a "Computer Vision Accelerator" on the SoC, though the company did not go into many details. This bit of silicon may explain how the ~300mm2 die with 7 billion transistors is able to match the 7.2-billion-transistor Pascal-based Tesla P4 (2560 CUDA cores) graphics card at deep learning (tera-operations per second) tasks – that, of course, in addition to the incremental improvements from moving to Volta and a new ARMv8 CPU architecture on a refined 16nm FF+ process.
| | Drive PX | Drive PX 2 | NVIDIA Xavier | Tesla P4 |
|---|---|---|---|---|
| CPU | 2 x Tegra X1 (8 x A57 total) | 2 x Tegra X2 (8 x A57 + 4 x Denver total) | 1 x Xavier SoC (8 x Custom ARM + 1 x CVA) | N/A |
| GPU | 2 x Tegra X1 (Maxwell) (512 CUDA cores total) | 2 x Tegra X2 GPUs + 2 x Pascal GPUs | 1 x Xavier SoC GPU (Volta) (512 CUDA cores) | 2560 CUDA cores (Pascal) |
| TFLOPS | 2.3 TFLOPS | 8 TFLOPS | ? | 5.5 TFLOPS |
| DL TOPS | ? | 24 TOPS | 20 TOPS | 22 TOPS |
| TDP | ~30W (2 x 15W) | 250W | 20W | up to 75W |
| Process Tech | 20nm | 16nm FinFET | 16nm FinFET+ | 16nm FinFET |
| Transistors | ? | ? | 7 billion | 7.2 billion |
For comparison, the currently available Tesla P4 based on the Pascal architecture has a TDP of up to 75W and is rated at 22 TOPS. This would suggest that Volta is a much more efficient architecture (at least for deep learning and half precision)! I am not sure how NVIDIA is able to match its GP104 with only 512 Volta CUDA cores, though their definition of a "core" could have changed, and/or the CVA processor may be responsible for closing that gap. Unfortunately, NVIDIA did not disclose what it rates Xavier at in TFLOPS, so it is difficult to compare, and it may not match GP104 at higher-precision workloads. It could be wholly optimized for INT8 operations rather than floating point performance. Beyond that, I will let Scott dive into those particulars once we have more information!
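That efficiency gap is easy to quantify from the table above:

```python
chips = {"Tesla P4 (Pascal)": (22, 75), "Xavier (Volta)": (20, 20)}
for name, (tops, tdp_w) in chips.items():
    print(f"{name}: {tops / tdp_w:.2f} TOPS/W")
# Tesla P4 (Pascal): 0.29 TOPS/W
# Xavier (Volta):    1.00 TOPS/W  -- roughly 3.4x the perf-per-watt
```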
Xavier is more of a teaser than anything, and the chip could very well change dramatically and/or not hit the claimed performance targets. Still, it sounds promising, and it is always nice to speculate over road maps. It is an intriguing chip, and I am ready for more details, especially on the Volta GPU and just what exactly that Computer Vision Accelerator is (and will it be easy to program for?). I am a big fan of the "self-driving car" and I hope that it succeeds. The push certainly looks set to continue, as Tesla, VW, BMW, and other automakers keep testing the envelope of what is possible and plan future cars that will include smart driving assists and even cars that can drive themselves. The more local computing power we can throw at automobiles the better; while massive datacenters can be used to train the neural networks, local hardware to run them and make decisions is necessary (you don't want internet latency contributing to the decision of whether to brake or not!).
I hope that NVIDIA's self-proclaimed "AI Supercomputer" turns out to be at least close to the performance they claim! Stay tuned for more information as it gets closer to launch (hopefully more details will emerge at GTC 2017 in the US).
What are your thoughts on Xavier and the whole self-driving car future?
- NVIDIA Teases Xavier, a High-Performance ARM SoC for Drive PX & AI @ AnandTech
- Tegra Related News @ PC Perspective
- Tesla P4 Specifications @ NVIDIA
- CES 2016: NVIDIA Launches DRIVE PX 2 With Dual Pascal GPUs Driving A Deep Neural Network @ PC Perspective
Subject: General Tech | September 30, 2016 - 10:58 PM | Scott Michaud
Blender 2.78 has been a fairly anticipated release. First off, people who have purchased a Pascal-based graphics card will now be able to GPU-accelerate their renders in Cycles. Previously, such renders would outright fail, complaining that there was no compatible CUDA kernel. At the same time, the Blender Foundation fixed a few performance issues, especially with Maxwell-based GM200 parts, such as the GeForce GTX 980 Ti. Pre-release builds have included these fixes for over a month, but 2.78 is the first build for the general public that ships them.
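If you script your renders, switching Cycles onto the GPU is just a couple of properties in Blender's Python API. A minimal sketch against the 2.7x-era API follows; the property paths changed in later versions, so treat this as illustrative:

```python
import bpy

# Select CUDA as the compute device type (2.7x-era user preferences path)
bpy.context.user_preferences.system.compute_device_type = 'CUDA'

# Point the current scene's renderer at Cycles and tell it to use the GPU
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

bpy.ops.render.render(write_still=True)  # render the current frame to disk
```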
In terms of actual features, Blender 2.78 starts to expand the suite's feature set into the space currently occupied by Adobe Animate CC (formerly Flash Professional). The Blender Foundation noticed that users were doing 2D animations with the Grease Pencil, so they have been evolving the tool in that direction. You can now simulate different types of strokes, parent them to objects, paint geometry along surfaces, and so forth. It also has onion skinning, to see how the current frame matches its neighbors, although I'm pretty sure that is not new to 2.78.
As you would expect, there are still many differences between these two applications. Blender does not output to Flash, and interactivity would need to be done through the Blender Game Engine. On the other hand, Blender allows the camera, itself, to be animated. In Animate CC, you would need to move, rotate, and scale objects around the stage, pixel by pixel, on an individual basis. In Blender, you would just fly the camera around.
This leads in to what the Blender Foundation is planning for Blender 2.8x. The upcoming release focuses on common workflow issues. Asset management is one area, but the viewport renderer is a particularly interesting one. Blender 2.78 increases the functionality that materials can exhibit in the viewport, but Blender 2.8x is working toward a full physically-based viewport renderer, such as the one seen in Unreal Engine 4. While it cannot handle the complex lighting effects that their full renderer, Cycles, can, some animations don't require them. Restricting yourself to the types of effects seen in current video games could cut your render time from seconds or minutes per frame to roughly real-time.
Subject: General Tech | September 30, 2016 - 10:07 PM | Scott Michaud
Tagged: microsoft, windows 10
I've been seeing a lot of people discussing how frequently Windows 10 seems to be getting updated. This discussion usually circles back to how many issues have been reported with the latest Anniversary Update, and how Microsoft has been slow in rolling it out. The thing is, while the slow roll-out is interesting, the way Windows 10 1607 is being patched is not too unusual.
The odd part is how Microsoft has been releasing the feature updates themselves.
In the past, Microsoft has tried to release updates on the second Tuesday of every month. This provides a predictable schedule for administrators to test patches before deploying them to an entire enterprise, in case the update breaks something that is mission-critical. With Windows 10, Microsoft has declared that patches will be cumulative and can occur at any time. This led to discussion about whether or not “Patch Tuesday” is dead. Now, a little over a year has gone by, and we can actually quantify how the OS gets updated.
There seems to be a pattern that starts with each major version release, which has (thus far) been builds 10240, 10586, and 14393. Immediately before and after these builds start to roll out to the public, Microsoft releases a flurry of updates to fix issues.
For instance, Windows 10 version 1507 had seven sub-versions of 10240 prior to general release, and five hotfixes pushed down Windows Update within the first month of release. The following month, September 2015, had an update on Patch Tuesday, as well as an extra one on September 30th. The month after that also had two updates, the first of which landed on October's Patch Tuesday. It was then patched once on every following Patch Tuesday.
The same trend occurred with Build 10586 (Windows 10 version 1511). Microsoft released the update to the public on November 12th, but pushed a patch through Windows Update on November 10th, and five more over Windows Update in the following month-and-a-bit. It mostly settled down to Patch Tuesday after that, although a few months had a second hotfix sometime in the middle.
We are now seeing the same trend happen with Windows 10 version 1607. Immediately after release, Microsoft pushed a bunch of hotfixes. If history repeats itself, we should see about two updates per month for the next couple of months, then things will slow down to Patch Tuesday until Redstone 2 arrives sometime in 2017.
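Tallying the published update history per month is how this kind of pattern jumps out. Here is a sketch of the sort of quick script involved, with hypothetical dates standing in for the real KB release list from Microsoft's update history page:

```python
from collections import Counter

# Hypothetical release dates for one Windows 10 branch (YYYY-MM-DD);
# substitute the actual dates from Microsoft's update history page.
updates = [
    "2015-07-29", "2015-08-05", "2015-08-14", "2015-08-18", "2015-08-27",
    "2015-09-08", "2015-09-30", "2015-10-13", "2015-10-27", "2015-11-10",
]

per_month = Counter(date[:7] for date in updates)  # group by YYYY-MM
for month, count in sorted(per_month.items()):
    print(f"{month}: {count} update(s)")
```

A flurry in the launch month tapering to one update per month is exactly the shape described above.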
So, while this seems to fit a recurring trend, I do wonder why this trend exists.
Part of it makes sense. When Microsoft is developing Windows 10, it is trying to merge additions from a variety of teams into a single branch, and do so once or twice each year. This likely means that Microsoft has a “last call” date for these teams to merge their additions into the public branch, and then QA needs to polish this up for the general public. While they can attempt to have these groups check in mid-way, pushing their work out to Windows Insiders in a pre-release build, you can't really know how the final build will behave until after the cut-off.
At the same time, the massive flood of patches within the first month would suggest that Microsoft is pushing the final build to the public about a month or two too early. If this trend continues, it would make the people who update within the first month basically another ring of the Insider program. The difference is that it is less opt-in, because you get it when Windows Update tells you to.
It will be interesting to see how this continues going forward, too. Microsoft has already delayed Redstone 2 until 2017, as I mentioned earlier. This could be a sign that Microsoft is learning from past releases and optimizing their release schedule based on these lessons. I wonder how soon before release Microsoft will settle on a "final build" next time. It seems like Microsoft could avoid many stability problems by simply setting an earlier merge date and aggressively performing QA for a longer period until the release to the public.
Or I could be completely off. What do you all think?