Subject: Mobile | March 1, 2017 - 07:26 PM | Tim Verry
Tagged: Snapdragon 625, opinion, MWC, keyone, enterprise, Cortex A53, blackberry, Android 7.1, Android
February is quite the busy month with GDC, MWC, and a flurry of technology announcements coming out all around the same time! One of the more surprising announcements from Mobile World Congress in Barcelona came from BlackBerry in the form of a new mid-range smartphone it is calling the KEYone. The KEYone is an Android 7.1 smartphone actually built by TCL with an aluminum frame, "soft touch" plastic back, curved edges, and (in traditional CrackBerry fashion) a full physical QWERTY keyboard!
The black and silver candy bar style KEYone (previously known as "Mercury") measures 5.78" x 2.85" x 0.37" and weighs 0.39 pounds. The left, right, and bottom edges are rounded and the top edge is flat. There are two bottom firing stereo speakers surrounding a USB Type-C port (Type-C 1.0 with USB OTG), a headphone jack up top, and volume, power, and convenience key buttons on the right side. The front of the device, which BlackBerry has designed to be comfortable to use one-handed, features a 4.5" 1620 x 1080 LCD touchscreen (434 PPI) protected by Gorilla Glass 4, a front facing camera with LED flash, and a large physical keyboard with straight rows of keys that have a traditional BlackBerry feel. The keyboard, in addition to having physical buttons, supports touch gestures such as swiping, and the spacebar has a fingerprint reader that early hands-on reports indicate works rather well for quickly unlocking the phone. Further, every physical key can be programmed as a hot key to open any application with a long press (B for browser, E for email, etc.).
On the camera front, BlackBerry is using the same sensor found in the Google Pixel, the Sony IMX378. The rear camera is a 12MP f/2.0 unit with dual LED flash and phase detect autofocus, and it is joined by a front facing 8MP camera. Both cameras can record 1080p30 video and support HDR and software features like face detection. Android Central reports that the camera software is rather good (it even has a pro mode) and the camera is snappy at taking photos.
Internally, BlackBerry has opted to go with squarely mid-range hardware, which is disappointing but not the end of the world. Specifically, the KEYone is powered by a Snapdragon 625 (MSM8953) with eight ARM Cortex A53 cores clocked at 2GHz and an Adreno 506 GPU paired with 3GB of RAM and 32GB of internal storage. Wireless support includes dual band 802.11ac, FM, Bluetooth 4.2, GPS, NFC, and GSM/HSPA/LTE cellular radios. The smartphone uses a 3,505 mAh battery that is not user removable but at least supports Quick Charge 3.0, which can reportedly charge the battery to 50% in 36 minutes. Storage can be expanded via MicroSD cards. The smartphone is running Android 7.1.1 with some BlackBerry UI tweaks but is otherwise fairly stock. Under the hood, however, BlackBerry has hardened the OS and includes its DTEK security software, along with promising monthly updates.
Not bad, right? Looking at the specifications and reading/watching the various hands-on reports coming out, it really looks like BlackBerry (finally) has a decent piece of hardware for enterprise customers, niche markets (lawyers, healthcare, etc.), and customers craving a physical keyboard in a modern phone. At first glance the BlackBerry KEYone hits all the key marks of a competitive Android smartphone... except for its $549 price tag. The KEYone is expected to launch in April.
No scroll ball? Blasphemy! (hehe)
Unfortunately, that $549 price is not a typo, and it is what kills it even for a CrackBerry addict like myself. After some reflection and discussion with our intrepid smartphone guru Sebastian, I feel as though BlackBerry would have a competitive smartphone on its hands at $399, but at $549 even business IT departments are going to balk, much less consumers (especially as many businesses embrace BYOD culture or have grown accustomed to pricing out and giving everyone whatever basic Android or iPhone they can fit into the budget).
While similarly specced Snapdragon 625 smartphones are going for around $300 (e.g. the ASUS ZenFone 3 at $265.98), there is some precedent for higher priced MSM8953-based smartphones such as the $449 Moto Z Play. There is some inherent cost in integrating a physical keyboard, and BlackBerry has also hardened the Android 7.1.1 OS; I can see them charging a premium for that, and business customers (or anyone who does a lot of writing on the go and values security) can appreciate it. It seems like BlackBerry (and hardware partner TCL) has finally learned how to compete on hardware design in this modern Android-dominated market; now they must learn how to compete on price, especially as more and more Americans buy unlocked and off-contract smartphones! I think the KEYone is a refreshing bit of hardware to come out of BlackBerry (I was not a fan of the Priv design) and I would like to see it do well and give the major players (Apple, Samsung, LG, Asus, Huawei, etc.) some healthy competition with the twist of its focus on better security, but for that to happen I think the BlackBerry KEYone needs to be a bit cheaper.
What are your thoughts on the KEYone and the return of the physical keyboard? Am I onto something or simply off my Moto Rokr on this?
Subject: General Tech | March 1, 2017 - 06:23 PM | Jeremy Hellstrom
Tagged: tesla motors, battery
Hack a Day posted a video of a teardown of the battery that powers the Tesla Model S, for those curious about how it is set up. This is not something to try at home: not only are there a huge number of bolts and Torx screws, it seems that each has a specific torque spec which must be adhered to. Inside are 16 battery packs, each of which contains 444 cells and provides 24V and 5.3 kWh, for roughly 85 kWh in total. Do not test the charge on these batteries with your tongue! Click on through to watch the video.
"Tesla famously build their battery packs from standard 18650 lithium-ion cells, but it’s safe to say that the pack in the Model S has little in common with your laptop battery. Fortunately for those of a curious nature, [Jehu Garcia] has posted a video showing the folks at EV West tearing down a Model S pack from a scrap car, so we can follow them through its construction."
Here is some more Tech News from around the web:
- Security slip-ups in 1Password and other password managers 'extremely worrying' @ The Register
- Windows 7 market share rises at the expense of Windows 10 @ The Inquirer
- Amazon's AWS S3 cloud storage evaporates: Top websites, Docker stung @ The Register
- HTC to launch mobile VR devices for year-end holiday season @ DigiTimes
- Google Pulls the Plug On Its Pixel Laptops @ Slashdot
- The 15 New AMD Ryzen 7 CPU Coolers Revealed @ TechARP
- Nvidia unveils the GTX 1080 Ti at GDC @ The Tech Report
Linked Multi-GPU Arrives... for Developers
The Khronos Group has released the Vulkan 1.0.42 specification, which includes experimental (more on that in a couple of paragraphs) support for VR enhancements, sharing resources between processes, and linking similar GPUs. This spec was released alongside a LunarG SDK and NVIDIA drivers that fully implement these extensions, both intended for developers, not gamers.
I would expect that the most interesting feature is experimental support for linking similar GPUs together, similar to DirectX 12’s Explicit Linked Multiadapter, which Vulkan calls a “Device Group”. The idea is that the physical GPUs hidden behind this layer can do things like share resources, such as rendering a texture on one GPU and consuming it in another, without the host code being involved. I’m guessing that some studios, like maybe Oxide Games, will decide to not use this feature. While it’s not explicitly stated, I cannot see how this (or DirectX 12’s Explicit Linked mode) would be compatible in cross-vendor modes. Unless I’m mistaken, that would require AMD, NVIDIA, and/or Intel restructuring their drivers to inter-operate at this level. Still, the assumptions that could be made with grouped devices are apparently popular with enough developers for both the Khronos Group and Microsoft to bother.
A slide from Microsoft's DirectX 12 reveal, long ago.
As for the “experimental” comment that I made in the introduction... I was expecting to see this news around SIGGRAPH, which occurs in late-July / early-August, alongside a minor version bump (to Vulkan 1.1).
I might still be right, though.
The major new features of Vulkan 1.0.42 are implemented as a new classification of extensions: KHX. In the past, vendors, like NVIDIA and AMD, would add new features as vendor-prefixed extensions. Games could query the graphics driver for these abilities and enable them if available (a quick sketch of that query appears at the end of this section). If these features became popular enough for multiple vendors to have their own implementation, a committee would consider an EXT extension. This would behave the same across all implementations (give or take) but not be officially adopted by the Khronos Group. If they did take it under their wing, it would be given a KHR extension (or added as a required feature).
The Khronos Group has added a new layer: KHX. This level of extension sits below KHR, and is not intended for production code. You might see where this is headed. The VR multiview, multi-GPU, and cross-process extensions are not supposed to be used in released video games until they leave KHX status. Unlike a vendor extension, the Khronos Group wants old KHX standards to drop out of existence at some point after they graduate to full KHR status. It's not something that NVIDIA owns and will keep around for 20 years after its usable lifespan just so old games can behave expectedly.
How long will that take? No idea. I’ve already mentioned my logical but uneducated guess a few paragraphs ago, but I’m not going to repeat it; I have literally zero facts to base it on, and I don’t want our readers to think that I do. I don’t. It’s just based on what the Khronos Group typically announces at certain trade shows, and the length of time since their first announcement.
The benefit that KHX does bring us is that, whenever these features make it to public release, developers will have already been using it... internally... since around now. When it hits KHR, it’s done, and anyone can theoretically be ready for it when that time comes.
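To make that query-and-enable flow concrete, here is a minimal C sketch against the Vulkan loader that checks for one of the new experimental extensions before touching it. The KHX extension name matches the 1.0.42-era spec, but treat the whole thing as illustrative; KHX identifiers are, by design, expected to disappear over time.

```c
/* Sketch: ask the Vulkan loader which instance extensions are present,
 * then check for the experimental device-group extension before using it.
 * VK_KHX_device_group_creation is the 1.0.42-era name; illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vulkan.h>

static int extension_available(const char *name)
{
    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(NULL, &count, NULL);
    VkExtensionProperties *props = malloc(count * sizeof(*props));
    if (!props)
        return 0;
    vkEnumerateInstanceExtensionProperties(NULL, &count, props);
    int found = 0;
    for (uint32_t i = 0; i < count; ++i)
        if (strcmp(props[i].extensionName, name) == 0)
            found = 1;
    free(props);
    return found;
}

int main(void)
{
    if (extension_available("VK_KHX_device_group_creation"))
        puts("Experimental (KHX) device groups present - enable at instance creation.");
    else
        puts("KHX device groups not available; fall back to one GPU per device.");
    return 0;
}
```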
VR Performance Evaluation
Even though virtual reality hasn’t taken off with the momentum that many in the industry had expected on the heels of the HTC Vive and Oculus Rift launches last year, it remains one of the fastest growing aspects of PC hardware. More importantly for many, VR is also one of the key inflection points for performance moving forward; it requires more hardware, scalability, and innovation than any other sub-category including 4K gaming. As such, NVIDIA, AMD, and even Intel continue to push the performance benefits of their own hardware and technology.
Measuring and validating those claims has proven to be a difficult task. Tools that we used in the era of standard PC gaming just don't apply. Fraps is a well-known and well-understood tool for measuring frame rates and frame times utilized by countless reviewers and enthusiasts. But Fraps lacked the ability to tell the complete story of gaming performance and experience. NVIDIA introduced FCAT and we introduced Frame Rating back in 2013 to expand the capabilities that reviewers and consumers had access to. Using a more sophisticated technique that includes direct capture of the graphics card output in uncompressed form, a software-based overlay applied to each frame being rendered, and post-process analysis of that data, we were able to communicate the smoothness of a gaming experience, better articulating it to help gamers make purchasing decisions.
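As a rough illustration of that post-processing step, here is a hedged C sketch of the core idea: each rendered frame carries a distinct overlay color, and scanning the captured output for color transitions tells you how long each frame was actually on screen. The data layout and 60 Hz capture rate here are assumptions for illustration, not FCAT's actual format.

```c
/* Sketch of Frame Rating-style analysis: given one overlay color ID per
 * captured scanline, count how many scanlines each rendered frame owned
 * and convert that to on-screen display time. Layout and rates assumed. */
#include <stdio.h>

#define LINES_PER_CAPTURE 1080
#define CAPTURE_RATE_HZ   60.0

/* time each scanline represents during a 60 Hz scanout */
static const double MS_PER_LINE = 1000.0 / (CAPTURE_RATE_HZ * LINES_PER_CAPTURE);

int main(void)
{
    /* stand-in capture: overlay color IDs for a short run of scanlines */
    int overlay[] = {1,1,1,1,1,1, 2,2,2, 3,3,3,3,3,3,3,3,3, 4,4,4};
    int n = sizeof overlay / sizeof overlay[0];

    int start = 0;
    for (int i = 1; i <= n; ++i) {
        if (i == n || overlay[i] != overlay[start]) {
            printf("frame (color %d): %d scanlines = %.3f ms on screen\n",
                   overlay[start], i - start, (i - start) * MS_PER_LINE);
            start = i;
        }
    }
    return 0;
}
```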
VR pipeline when everything is working well.
For VR though, those same tools just don't cut it. Fraps is a non-starter as it measures frame rendering from the GPU point of view and completely misses the interaction between the graphics system and the VR runtime environment (OpenVR for Steam/Vive and OVR for Oculus). Because the rendering pipeline is drastically changed in the current VR integrations, what Fraps measures is completely different from the experience the user actually gets in the headset. Previous FCAT and Frame Rating methods were still viable, but the tools and capture technology needed to be updated. The hardware capture products we had used since 2013 were limited in their maximum bandwidth, and the overlay software did not have the ability to "latch in" to VR-based games. Not only that, but measuring frame drops, time warps, space warps, and reprojections would be a significant hurdle without further development.
VR pipeline with a frame miss.
NVIDIA decided to undertake the task of rebuilding FCAT to work with VR. And while the company is obviously hoping that it will prove its claims of performance benefits for VR gaming, the investment of time and money in a project that is to be open sourced and made freely available to the media and the public should not be overlooked.
NVIDIA FCAT VR consists of two different applications. The FCAT VR Capture tool runs on the PC being evaluated and has a similar appearance to other performance and timing capture utilities. To generate performance data, it uses Oculus event tracing (part of Windows ETW), SteamVR's performance API, and, when run on NVIDIA hardware, NVIDIA driver stats. Thanks to that access to the VR vendors' timing results, it works perfectly well on any GPU vendor's hardware.
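To give a feel for what such a tool has to reason about, here is a hedged C sketch of the frame bookkeeping involved: at a 90 Hz refresh the compositor needs a new frame roughly every 11.1 ms, and any interval the application misses is covered by a reprojected frame. The frame-time trace and the classification rule are illustrative assumptions, not FCAT VR's actual logic.

```c
/* Hedged sketch of VR frame bookkeeping: at 90 Hz the compositor needs a
 * new frame every ~11.1 ms; if the app misses the deadline, the runtime
 * re-shows a warped old frame instead. The trace below is made up. */
#include <stdio.h>

int main(void)
{
    const double interval_ms = 1000.0 / 90.0;   /* ~11.11 ms per refresh */
    double app_frametimes[] = {10.2, 11.0, 15.8, 10.9, 23.5, 10.7};
    int n = sizeof app_frametimes / sizeof app_frametimes[0];

    for (int i = 0; i < n; ++i) {
        /* each full interval beyond the first means the headset displayed
         * a reprojected (or dropped) frame while waiting on the app */
        int missed = (int)(app_frametimes[i] / interval_ms);
        if (missed == 0)
            printf("frame %d: %.1f ms - delivered on time\n", i, app_frametimes[i]);
        else
            printf("frame %d: %.1f ms - %d refresh(es) covered by reprojection\n",
                   i, app_frametimes[i], missed);
    }
    return 0;
}
```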
Subject: Graphics Cards | March 1, 2017 - 03:59 AM | Ryan Shrout
Tagged: pascal, nvidia, gtx 1080 ti, gp102, geforce
Tonight at a GDC party hosted by CEO Jen-Hsun Huang, NVIDIA announced the GeForce GTX 1080 Ti graphics card, coming next week for $699. Let’s dive right into the specifications!
| | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP102 | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT |
| Base Clock | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz |
| Boost Clock | 1600 MHz | 1480 MHz | 1733 MHz | 1076 MHz | 1089 MHz | 1216 MHz | - | - | - |
| Memory Clock | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz |
| Memory Interface | 352-bit | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) |
| Memory Bandwidth | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s |
| TDP | 220 watts | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts |
| Peak Compute | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS |
The GTX 1080 Ti looks a whole lot like the TITAN X launched in August of last year. Based on the 12B transistor GP102 chip, the new GTX 1080 Ti will have 3,584 CUDA cores with a 1.60 GHz Boost clock. That gives it the same processor count as the Titan X but with a slightly higher clock speed, which should make the new GTX 1080 Ti faster by at least a few percentage points; it has a 4.7% edge in base clock compute capability. It has 28 SMs, 28 geometry units, and 224 texture units.
Interestingly, the memory system on the GTX 1080 Ti gets adjusted – NVIDIA has disabled a single 32-bit memory controller to give the card a 352-bit wide bus and an odd-sounding 11GB memory capacity. The ROP count also drops to 88 units. Speaking of 11, the memory clock on the G5X implementation on the GTX 1080 Ti will now run at 11 Gbps, a boost available to NVIDIA thanks to a chip revision from Micron and improvements to equalization and reverse signal distortion.
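For the curious, the headline numbers in the table fall out of simple arithmetic; here is a quick C sketch using the usual bandwidth and FP32 throughput formulas (the 2 ops per clock FMA factor is the standard convention, not something NVIDIA specified here):

```c
/* Back-of-envelope check of the spec table, using the usual formulas:
 * bandwidth = per-pin data rate x bus width / 8 bits-per-byte
 * peak FP32 = CUDA cores x 2 ops per clock (FMA) x clock */
#include <stdio.h>

int main(void)
{
    double gbps_per_pin = 11.0;   /* 11 Gbps G5X */
    double bus_bits     = 352.0;  /* one 32-bit controller disabled: 384 - 32 */
    double cores        = 3584.0;
    double base_ghz     = 1.48;

    printf("Bandwidth: %.0f GB/s\n", gbps_per_pin * bus_bits / 8.0);      /* 484 */
    printf("Peak FP32: %.1f TFLOPS\n", cores * 2.0 * base_ghz / 1000.0);  /* 10.6 */
    return 0;
}
```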
The TDP of the new part is 220 watts, falling between the Titan X and the GTX 1080. That’s an interesting move considering that the GP102 was running at 250 watts with the Titan product. The cooler has been improved compared to the GTX 1080, offering quieter fan speeds and lower temperatures when operating at the same power envelope.
Performance estimates from NVIDIA put the GTX 1080 Ti about 35% faster than the GTX 1080, the largest "kicker" performance increase that we have seen from a flagship Ti launch.
Pricing is going to be set at $699 so don't expect to find this in any budget builds. But for the top performing GeForce card on the market, it's what we expect. It should be on virtual shelves starting next week.
(Side note, with the GTX 1080 getting a $100 price drop tonight, I think we'll find this new lineup very compelling to enthusiasts.)
NVIDIA did finally detail its tiled caching rendering technique. We'll be diving more into that in a separate article with a little more time for research.
One more thing…
In another interesting move, NVIDIA is going to be offering “overclocked” versions of the GTX 1080 and GTX 1060 with +1 Gbps memory speeds. Partners will be offering them with some undisclosed price premium.
I don’t know how much performance this will give us but it’s clear that NVIDIA is preparing its lineup for the upcoming AMD Vega release.
We’ll have more news from NVIDIA and GDC as it comes!
Subject: Graphics Cards | March 1, 2017 - 03:55 AM | Tim Verry
Tagged: pascal, nvidia, GTX 1080, GDC
Update Feb 28 @ 10:03pm: It's official, NVIDIA has launched the $699 GTX 1080 Ti.
NVIDIA is hosting a "Gaming Celebration" live event during GDC 2017 to talk PC gaming and possibly launch new hardware (if rumors are true!). During the event, NVIDIA CEO Jen-Hsun Huang made a major announcement regarding the company's top-end GTX 1080 graphics card: a price drop to $499, effective immediately.
The NVIDIA GTX 1080 is a Pascal-based graphics card with 2560 CUDA cores paired with 8GB of GDDR5X memory. Graphics cards based on this GP104 GPU are currently selling for around $580 to $700 (most are around $650, give or take) with the "Founders Edition" having an MSRP of $699. The $499 price teased at the live stream represents a significant price drop compared to what the graphics cards are going for now. NVIDIA did not specify whether the new $499 MSRP is the new Founders Edition price or an average price that includes partner cards as well, but even if it only applies to the reference cards, the partners would have to adjust their prices downward accordingly to compete.
I suspect that NVIDIA is making such a bold move to make room in their lineup for a new product (the long-rumored 1080 Ti perhaps?) as well as a pre-emptive strike against AMD and their Radeon RX Vega products. This move may also be good news for GTX 1070 pricing as they may also see price drops to make room for cheaper GTX 1080 partner cards that come in below the $499 price point.
If you have been considering buying a new graphics card, NVIDIA has sweetened the pot a bit especially if you had already been eyeing a GTX 1080. (Note that while the price drop is said to be effective immediately, at the time of writing Amazon was still showing "normal"/typical prices for the cards. Enthusiasts might have to wait a few hours or days for the retailers to catch up and update their sites.)
This makes me a bit more excited to see what AMD will have to offer with Vega as well as the likelihood of a GTX 1080 Ti launch happening sooner rather than later!
Subject: Processors | March 1, 2017 - 02:06 AM | Tim Verry
Tagged: Zen, Ryzen 1800X, ryzen, overclocking, LN2, Cinebench, amd
During AMD’s Ryzen launch event a team of professional overclockers took the stage to see just how far they could push the top Zen-based processor. Using a bit of LN2 (liquid nitrogen) and a lot of voltage, the overclocking team was able to hit an impressive 5.20 GHz with all eight cores (16 threads) enabled!
In addition to the exotic LN2 cooling, the Ryzen 7 1800X needed 1.875 volts to hit 5.20 GHz. That 5.20 GHz was achieved by setting the base clock at 137.78 MHz and the multiplier at 37.75. Using these settings, the chip was even stable enough to benchmark with a score of 2,363 on Cinebench R15’s multi-threaded test.
According to information from AMD, a stock Ryzen 7 1800X comes clocked at 3.6 GHz base and up to 4 GHz boost (XFR can go higher depending on the HSF) and is able to score 1,619 in Cinebench. The 30% overclock to 5.20 GHz got the overclockers an approximately 45% higher Cinebench score.
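Those figures are easy to sanity-check yourself; here is a quick C sketch of the arithmetic:

```c
/* Quick check of the overclocking arithmetic quoted above */
#include <stdio.h>

int main(void)
{
    double bclk = 137.78, mult = 37.75;
    printf("Effective clock: %.0f MHz\n", bclk * mult);    /* ~5201 MHz */
    printf("Clock gain over 4 GHz boost: %.0f%%\n",
           (bclk * mult / 4000.0 - 1.0) * 100.0);          /* ~30 percent */
    printf("Cinebench gain: %.0f%%\n",
           (2363.0 / 1619.0 - 1.0) * 100.0);               /* ~46 percent */
    return 0;
}
```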
Further, later in the overclocking event, they managed to break a Cinebench world record of 2,445 points by achieving a score of 2,449 (it is not clear what clockspeed this was at). Not bad for a brand-new processor!
The overclocking results are certainly impressive, and suggest that Ryzen may be a decent overclocker so long as you have the cooling setup to get it there (the amount of voltage needed is a bit worrying though, heh). Interestingly, HWBot shows a Core i7 6900K (also 8C/16T) hitting 5.22 GHz and scoring 2,146 in Cinebench R15. That Ryzen can hit similar numbers with all cores and threads turned on is promising.
I am looking forward to seeing what people are able to hit on air and water cooling and if XFR will work as intended and get most of the way to a manual overclock without the effort of manually overclocking. I am also curious how the power phases and overclocking performance will stack up on motherboards using the B350 versus X370 chipsets. With the eight core chips able to hit 5.2, I expect the upcoming six core Ryzen 5 and four core Ryzen 3 processors to clock even higher which would certainly help gaming performance for budget builds!
Austin Evans was able to get video of the overclocking event which you can watch here (Vimeo).
- Zen and the Art of CPU Design a novella by Josh Walrath
- AMD Ryzen Pre-order Starts Today, Specs and Performance Revealed
Subject: General Tech | February 28, 2017 - 10:46 PM | Ken Addison
Tagged: amd, Vega, radeon rx vega, radeon, gdc 2017, capsaicin, rtg, HBCC, FP16
Today at the AMD Capsaicin & Cream event at GDC 2017, Senior VP of the Radeon Technologies Group, Raja Koduri officially revealed the branding that AMD will use for their next generation GPU products.
While we usually see final product branding deviate from their architectural code names (e.g. Polaris becoming the Radeon RX 460, 470 and 480), AMD this time has decided to embrace the code name for the retail naming scheme for upcoming graphics cards featuring the new GPU – Radeon RX Vega.
However, we didn't just get a name for Vega-based GPUs. Raja also went into some further detail and showed some examples of technologies found in Vega.
First off is the High-Bandwidth Cache Controller found in Vega products. We covered this technology during our Vega architecture preview last month at CES, but today we finally saw a demo of this technology in action.
Essentially, the High-Bandwidth Cache Controller (HBCC) allows Vega GPUs to address all available memory in the system (including things like NVMe SSDs, system DRAM, and network storage). AMD claims that by using the already fast memory you have available on your PC to augment onboard GPU memory (such as HBM2), they will be able to offer less expensive graphics cards that ultimately offer access to much more memory than current graphics cards.
The demo that they showed on stage featured Deus Ex: Mankind Divided running on a system with a Vega GPU limited to 2GB of VRAM and a Ryzen CPU. By turning HBCC on, they were able to show a 50% increase in average FPS and a 100% increase in minimum FPS.
While we probably won't actually see a Vega product with such a small VRAM implementation, it was impressive to see how HBCC was able to dramatically improve the playability of a 2GB GPU on a game that has no special optimizations to take advantage of the High-Bandwidth Cache.
The other impressive demo running on Vega at the Capsaicin & Cream event centered around what AMD is calling Rapid Packed Math.
Rapid Packed Math is an implementation of something we have been hearing and theorizing a lot about lately: the use of FP16 shaders for some graphics effects in games. By using half-precision FP16 shaders instead of the current standard FP32 shaders, developers are able to get more performance out of the same GPU cores. Specifically, Rapid Packed Math allows developers to run half-precision FP16 shaders at exactly 2X the speed of traditional single-precision FP32 shaders.
While the lower precision of FP16 shaders won't be appropriate for all GPU effects, AMD was showing a comparison of their TressFX hair rendering technology running on both standard and half-precision shaders. As you might expect, AMD was able to render twice the number of hair strands per second, making for a much more fluid experience.
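To illustrate why packing two FP16 values into one 32-bit register doubles throughput, here is a hedged C sketch of the general packed-lane idea: two independent 16-bit lanes live in one 32-bit word, so a single operation advances two data elements. Integer lanes stand in for FP16 purely to keep the sketch portable; this shows the concept, not AMD's actual hardware path.

```c
/* SWAR-style illustration of packed math: two independent 16-bit lanes
 * live in one 32-bit word, so one operation advances two data elements.
 * Plain integer adds stand in for the GPU's FP16 multiply-adds. */
#include <stdio.h>
#include <stdint.h>

static uint32_t pack(uint16_t hi, uint16_t lo) { return ((uint32_t)hi << 16) | lo; }

static uint32_t add_packed(uint32_t a, uint32_t b)
{
    uint32_t lo = (a + b) & 0x0000FFFFu;                     /* low lane, carry masked */
    uint32_t hi = (((a >> 16) + (b >> 16)) & 0xFFFFu) << 16; /* high lane */
    return hi | lo;
}

int main(void)
{
    uint32_t a = pack(1000, 3), b = pack(500, 4);
    uint32_t r = add_packed(a, b);                       /* both lanes in one "op" */
    printf("high lane: %u, low lane: %u\n", r >> 16, r & 0xFFFFu);  /* 1500, 7 */
    return 0;
}
```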
Just like we saw with the lead up to the Polaris GPU launch, AMD seems to be releasing a steady stream of information on Vega. Now that we have the official branding for Vega, we eagerly await getting our hands on these new high-end GPUs from AMD.
Subject: Motherboards | February 28, 2017 - 09:24 PM | Jeremy Hellstrom
Tagged: intel z270, Aorus Z270X Gaming 9, gigabyte
What an interesting time it will be with Intel slinging Z270s at the same time AMD's X370 arrives on the scene; there is no possible way some people could get confused. It will also make the next generation of board names interesting as the two companies fight for numbering rights. GIGABYTE's Aorus Z270X Gaming 9 comes with an impressive price tag of $500, so it will be interesting to see if [H]ard|OCP finds the feature set on the board worthy of that investment. The four x16 PCIe 3.0 slots will support four GPUs simultaneously, and there are both a pair of M.2 and U.2 slots, to say nothing of the onboard SoundBlaster. Head on over to read through the full review.
"GIGABYTE’s Z270X Gaming 9 is one of the most feature rich and ultra-high end offerings you’ll see for the Z270 chipset this year. We were super fond of last year’s similar offering and as a result, the Z270X Gaming 9 has very large shoes to fill. With its massive feature set and overclocking prowess, it is poised to be one of the best motherboards of the year."
Here are some more Motherboard articles from around the web:
- MSI Z270 SLI PLUS Review @ OCC
- MSI Z270 Gaming M7 Motherboard Review @ Hardware Canucks
- MSI Z270 Gaming M7 @ Kitguru
- ASUS ROG Maximus IX Formula Review @ OCC
- Biostar Racing Z270GT4 @ techPowerUp
Subject: General Tech | February 28, 2017 - 08:48 PM | Jeremy Hellstrom
Tagged: microsoft, oops, Lawsuit
If you purchased anything from the Microsoft Store between November 2013 and February 24 of this year and live in the USA, you could be eligible for up to $100 in cash damages. It seems that the receipts Microsoft provided displayed more than half of your credit card number, in violation of a law implemented in 2003 which states that no more than five digits can be shown on receipts. Now that the judgment against Microsoft is in, the proposed settlement calls for Microsoft to set aside $1,194,696 US for customers who were affected by this issue. The settlement needs to be approved by the judge, so you cannot claim your money immediately; keep an eye out for more news. The Register have posted links to the original lawsuit as well as the judgment right here; a quick sketch of what compliant truncation looks like follows the quote below.
"On Friday, the Redmond giant agreed to give up roughly seven minutes of its quarterly revenue to a gaggle of Microsoft Store customers who claimed that their receipts displayed more of their payment card numbers than legally allowed."
Here is some more Tech News from around the web:
- CloudPets IoT Toys Leaked and Ransomed, Exposing Kids' Voice Messages @ Slashdot
- iPhones are now more failure-prone than Android devices @ The Inquirer
- Softbank gros fromage: ARM will knock out a trillion IoT chips by 2040 @ The Register
- Raspberry Pi Zero W adds WiFi and Bluetooth 4.0 support @ The Inquirer
- Splitsville: Toshiba prepares to lose its memory @ The Register
Zen vs. 40 Years of CPU Development
Zen is nearly upon us. AMD is releasing its next generation CPU architecture to the world this week and we saw CPU demonstrations and upcoming AM4 motherboards at CES in early January. We have been shown tantalizing glimpses of the performance and capabilities of the “Ryzen” products that will presumably fill the desktop markets from $150 to $499. I have yet to be briefed on the product stack that AMD will be offering, but we know enough to start to think how positioning and placement will be addressed by these new products.
To get a better understanding of how Ryzen will stack up, we should probably take a look back at what AMD has accomplished in the past and how Intel has responded to some of the stronger products. AMD has been in business for 47 years now and has been a major player in semiconductors for most of that time. It really has only been since the 90s, when AMD started to battle Intel head to head, that people have become passionate about the company and their products.
The industry is a complex and ever-shifting one. AMD and Intel have been two stalwarts over the years. Even though AMD has had more than a few challenging years over the past decade, it still moves forward and expects to compete at the highest level with its much larger and better funded competitor. 2017 could very well be a breakout year for the company with a return to solid profitability in both CPU and GPU markets. I am not the only one who thinks this considering that AMD shares that traded around the $2 mark ten months ago are now sitting around $14.
AMD Through 1996
AMD became a force in the CPU industry due to IBM's requirement to have a second source for its PC business. Intel originally entered into a cross licensing agreement with AMD to allow it to produce x86 chips based on Intel designs. AMD eventually started to produce their own versions of these parts and became a favorite in the PC clone market. Eventually Intel tightened down on this agreement and then cancelled it, but through near endless litigation AMD ended up with an x86 license deal with Intel.
AMD produced their own Am286 chip that was the first real break from the second sourcing agreement with Intel. Intel balked at sharing their 386 design with AMD and eventually forced the company to develop its own clean room version. The Am386 was released in the early 90s, well after Intel had been producing those chips for years. AMD then developed their own version of the Am486, which then morphed into the Am5x86. The company made some good inroads with these speedy parts and typically clocked them faster than their Intel counterparts (e.g. the Am486 at 40 MHz and 80 MHz vs. the Intel 486 DX33 and DX66). AMD priced these parts lower so users could achieve better performance per dollar using the same chipsets and motherboards.
Intel released their first Pentium chips in 1993. The initial version was hot and featured the infamous FDIV bug. AMD made some inroads against these parts by introducing the faster Am486 and Am5x86 parts that would achieve clockspeeds from 133 MHz to 150 MHz at the very top end. The 150 MHz part was very comparable in overall performance to the Pentium 75 MHz chip and we saw the introduction of the dreaded “P-rating” on processors.
There is no denying that Intel continued their dominance throughout this time by being the gold standard in x86 manufacturing and design. AMD slowly chipped away at its larger rival and continued to profit off of the lucrative x86 market. William Sanders III set the bar higher about where he wanted the company to go and he started on a much more aggressive path than many expected the company to take.
Subject: Storage | February 27, 2017 - 10:23 PM | Jeremy Hellstrom
Tagged: sdxc, sd card, patriot, lx series
You may recall a while back Allyn put together an article detailing the new types of SD cards hitting the market which will support 4K recording in cameras. Modders Inc just wrapped up a review of one of these cards, Patriot's 256GB LX Series SDXC card, which includes an adapter for those who need it. The price certainly implies it is new technology; $200 for 256GB of storage is enough to make anyone pause, so the question becomes why one would pay such a premium. Their benchmarks offer insight into this, with 83MB/s write and 96MB/s read in both ATTO and CrystalDiskMark proving that this is a far cry from the performance of older SD cards and worthy of that brand new ultra high definition camera you just picked up. Let us hope the prices plummet as they did with the previous generations of cards.
"Much like Mary Poppins bag of wonders, Patriot too has a method of fitting a substantial amount of goodness in a small space with the release of their 256GB LX Series SDXC class 10 memory card. Featuring an impressive 256GB of storage and boasting this as an “ultra high speed” card for QHD video production and high resolution photos."
Here are some more Storage reviews from around the web:
- Synology DiskStation Manager (DSM) 6.1 @ eTeknix
- Asustor AS3202T Multifunctional 4K Quad-Core NAS Review @ Bjorn3D
- TerraMaster F2-220 2-Bay NAS @ techPowerUp
- QNAP TBS-453A 4-Bay M.2 SSD NAS @ Modders-Inc
- Seagate IronWolf Pro 10TB NAS HDD @ eTeknix
- Western Digital WD Red Pro 5 TB Hard Drive @ Hardware Secrets
Subject: General Tech, Graphics Cards | February 27, 2017 - 08:39 PM | Jeremy Hellstrom
Tagged: MWC, GDC, VRMark, Servermark, OptoFidelity, cyan room, benchmark
Futuremark are showing off new benchmarks at GDC and MWC, the two conferences which are both happening this week. We will have quite a bit of coverage this week as we try to keep up with simultaneous news releases and presentations.
First up is a new benchmark in their recently released DX12 VRMark suite, the new Cyan Room, which sits between the existing two in the suite. The Orange Room tests whether your system is capable of providing you with an acceptable VR experience or falls somewhat short of the minimum requirements, while the Blue Room shows off what a system that exceeds the recommended specs can manage. The Cyan Room will be for those who know that their system can handle most VR and need to test their system's settings. If you don't have the test suite, Humble Bundle has a great deal on this suite and several other tools, if you act quickly.
Next up is a new suite to test Google Daydream, Google Cardboard, and Samsung Gear VR performance and ability. There is more than just performance to test when you are using your phone to view VR content, such as avoiding setting your eyeholes on fire. The tests will help you determine just how long your device can run VR content before overheating becomes an issue and interferes with performance, as well as helping you determine your battery life.
VR latency testing is the next in the list of announcements and is very important when it comes to VR, as high or unstable latency is the reason some users need to add a bucket to their list of VR essentials. Futuremark have partnered with OptoFidelity to produce the VR Multimeter HMD for hardware-based testing. This allows you, and hopefully soon PCPer as well, to test motion-to-photon latency, display persistence, and frame jitter as well as audio-to-video synchronization and motion-to-audio latency, all of which could lead to a bad time.
Last up is the brand new Servermark to test the performance you can expect out of virtual servers, media servers and other common tasks. The VDI test lets you determine if a virtual machine has been provisioned at a level commensurate to the assigned task, so you can adjust it as required. The Media Transcode portion lets you determine the maximum number of concurrent streams as well as the maximum quality of those streams which your server can handle, very nice for those hosting media for an audience.
Expect to hear more as we see the new benchmarks in action.
Subject: General Tech | February 27, 2017 - 05:56 PM | Jeremy Hellstrom
Tagged: M2, Arduino Due, macchina, Kickstarter, open source, DIY
There is a Kickstarter out there for all you car enthusiasts and owners: the Arduino Due-based Macchina M2, which allows you to diagnose and change how your car functions. They originally developed the device during a personal project to modify a Ford Contour into an electric car, which required serious reprogramming of sensors and other hardware in the car. They realized that their prototype could be enhanced to allow users to connect into the hardware of their own cars to monitor performance, diagnose issues, or even modify the performance. Slashdot has the links and their trademarked reasonable discourse for those interested; the M2 interface is $45 if you already have the hardware, or $79 and up for the hardware and accessories.
"Challenging "the closed, unpublished nature of modern-day car computers," their M2 device ships with protocols and libraries "to work with any car that isn't older than Google." With catchy slogans like "root your ride" and "the future is open," they're hoping to build a car-hacking developer community, and they're already touting the involvement of Craig Smith, the author of the Car Hacker's Handbook from No Starch Press."
Here is some more Tech News from around the web:
- Trying The SteamVR Beta On Linux Feels More Like An Early Alpha @ Phoronix
- Windows 10 Creators Update will let users block installation of Win32 apps @ The Inquirer
- Nokia 3310 (2017) hands-on @ The Inquirer
- Windows File History - An Inexpensive Insurance Policy @ Hardware Secrets
- Mysterious Gmail account lockouts prompt hack fears @ The Register
- Calyos may also produce stand-alone loop heat pipe coolers @ Kitguru
- The final 5G technical performance specs have been set @ The Register
Subject: General Tech, Mobile | February 27, 2017 - 04:12 PM | Sebastian Peak
Tagged: x50, Sub-6 Ghz, qualcomm, OFDM, NR, New Radio, MWC, multi-mode, modem, mmWave, LTE, 5G, 3GPP
Qualcomm has announced their first successful 5G New Radio (NR) connection using their prototype sub-6 GHz system. This announcement was followed by today's news of Qualcomm's collaboration with Ericsson and Vodafone to trial 5G NR in the second half of 2017, as we approach the realization of 5G. New Radio is expected to become the standard for 5G going forward as 3GPP moves to finalize standards with Release 15.
"5G NR will make the best use of a wide range of spectrum bands, and utilizing spectrum bands below 6 GHz is critical for achieving ubiquitous coverage and capacity to address the large number of envisioned 5G use cases. Qualcomm Technologies’ sub-6 GHz 5G NR prototype, which was announced and first showcased in June 2016, consists of both base stations and user equipment (UE) and serves as a testbed for verifying 5G NR capabilities in bands below 6 GHz."
The Qualcomm Sub-6 GHz 5G NR prototype (Image credit: Qualcomm)
Qualcomm first showed their sub-6 GHz prototype this past summer, and it will be on display this week at MWC. The company states that the system is designed to demonstrate how 5G NR "can be utilized to efficiently achieve multi-gigabit-per-second data rates at significantly lower latency than today’s 4G LTE networks". New Radio, or NR, is a complex topic as it relates to a new OFDM-based wireless standard. OFDM refers to "a digital multi-carrier modulation method" in which "a large number of closely spaced orthogonal sub-carrier signals are used to carry data on several parallel data streams or channels". With 3GPP adopting this standard going forward, the "NR" name could stick, just as "LTE" (Long Term Evolution) caught on to describe the 4G wireless standard.
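For a concrete picture of what that OFDM description means, here is a minimal C sketch: each of N orthogonal subcarriers carries one complex data symbol, and the transmitted time-domain samples are simply their sum, i.e. an inverse DFT. A real 5G NR transmitter uses an IFFT plus cyclic prefixes, channel coding, and much more; this only illustrates the parallel-subcarrier idea.

```c
/* Minimal OFDM illustration: N closely spaced orthogonal subcarriers each
 * carry one complex symbol; the time-domain signal is their sum (an
 * inverse DFT). Real systems add an IFFT, cyclic prefix, coding, etc. */
#include <stdio.h>
#include <complex.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8  /* subcarriers */

int main(void)
{
    /* one QPSK-style symbol per subcarrier -- parallel data streams */
    double complex X[N] = { 1+1*I, 1-1*I, -1+1*I, -1-1*I,
                            1+1*I, -1+1*I, 1-1*I, -1-1*I };
    double complex x[N];

    for (int n = 0; n < N; ++n) {           /* inverse DFT */
        x[n] = 0;
        for (int k = 0; k < N; ++k)
            x[n] += X[k] * cexp(2.0 * M_PI * I * k * n / N);
        x[n] /= N;
        printf("x[%d] = %+.3f %+.3fj\n", n, creal(x[n]), cimag(x[n]));
    }
    return 0;
}
```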
Along with this 5G NR news comes the announcement of the expansion of its X50 modem family, first announced in October, "to include 5G New Radio (NR) multi-mode chipset solutions compliant with the 3GPP-based 5G NR global system", according to Qualcomm. This 'multi-mode' solution provides full 4G/5G compatibility with "2G/3G/4G/5G functionality in a single chip", with the first commercial devices expected in 2019.
"The new members of the Snapdragon X50 5G modem family are designed to support multi-mode 2G/3G/4G/5G functionality in a single chip, providing simultaneous connectivity across both 4G and 5G networks for robust mobility performance. The single chip solution also supports integrated Gigabit LTE capability, which has been pioneered by Qualcomm Technologies, and is an essential pillar for the 5G mobile experience as the high-speed coverage layer that co-exists and interworks with nascent 5G networks. This set of advanced multimode capabilities is designed to provide seamless Gigabit connectivity – a key requirement for next generation, premium smartphones and mobile computing devices."
Full press releases after the break.
Subject: General Tech | February 27, 2017 - 12:01 PM | Scott Michaud
Tagged: zenimax, Oculus
As far as I know, it’s fairly common to seek injunctions during legal fights over intellectual property, so I’m not sure how surprising this should be. Still, after the $500 million USD judgment against Oculus, ZeniMax has indeed filed for a court order to, according to UploadVR, block the usage of Oculus PC software, Oculus Mobile software, and the plug-ins for Unity and Unreal Engine. They also demand, as usual, that Oculus deletes all copies of the infringing code and a few other stipulations.
I should stress that this is just a filing. It would need to be accepted for it to have any weight.
The timing is quite disruptive to Oculus, too, even if by total coincidence. Epic Games is about to release their flagship, Oculus-exclusive title, Robo Recall, which was intended to be released for free to those who have Oculus Touch controllers. If the injunction succeeds, and that’s way more if than when at this point, then that could sting for whoever gets stuck with the game’s invoice, which (I assume) would be Oculus.
Personally, I’m not quite sure how far this will go. Based on my memory of the jury decision, ZeniMax is entitled to $500 million USD for prior damages, and nothing for ongoing damages. You would think that, if a jury ruled that the infringement has no lasting effect, an injunction wouldn’t recover any of that non-existent value. On the other hand, I’m not a judge (or anyone else of legal relevance), so what I reason doesn’t really matter outside the confines of this website.
We’ll need to wait and see if this goes anywhere.
Subject: Motherboards | February 26, 2017 - 06:29 AM | Tim Verry
Tagged: x370, sli, ryzen, PCI-E 3.0, gaming, crossfire, b350, amd
Computerbase.de recently published an update (translated) to an article outlining the differences between AMD’s AM4 motherboard chipsets. As it stands, the X370 and B350 chipsets are set to be the most popular chipsets for desktop PCs (with X300 catering to the small form factor crowd) especially among enthusiasts. One key differentiator between the two chipsets was initially support for multi-GPU configurations with X370. Now that motherboards have been revealed and are up for pre-order now, it turns out that the multi-GPU lines have been blurred a bit. As it stands, both B350 and X370 will support AMD’s CrossFire multi-GPU technology and the X370 alone will also have support for NVIDIA’s SLI technology.
The AM4 motherboards equipped with the B350 and X370 chipsets that feature two PCI-E x16 expansion slots will run as x8 in each slot in a dual GPU setup. (In a single GPU setup, the top slot can run at full x16 speeds.) Which is to say that the slots behave the same across both chipsets. Where the chipsets differ is in support for specific GPU technologies where NVIDIA’s SLI is locked to X370. TechPowerUp speculates that the decision to lock SLI to its top-end chipset is due, at least in part, to licensing costs. This is not a bad thing as B350 was originally not going to support any dual x16 slot multi-GPU configurations, but now motherboard manufacturers are being allowed to enable it by including a second slot and AMD will reportedly permit CrossFire usage (which costs AMD nothing in licensing). Meanwhile the most expensive X370 chipset will support SLI for those serious gamers that demand and can afford it. Had B350 supported SLI and carried the SLI branding, they likely would have been ever so slightly more expensive than they are now. Of course, DirectX 12's multi-adapter will work on either chipset so long as the game supports it.
| | X370 | B350 | A320 | X300 / B300 / A300 | Ryzen CPU | Bristol Ridge APU |
|---|---|---|---|---|---|---|
| PCI-E 3.0 | 0 | 0 | 0 | 4 | 20 (18 w/ 2 SATA) | 10 |
| USB 3.1 Gen 2 | 2 | 2 | 1 | 1 | 0 | 0 |
| USB 3.1 Gen 1 | 6 | 2 | 2 | 2 | 4 | 4 |
| SATA 6 Gbps | 4 | 2 | 2 | 2 | 2 | 2 |
| Overclocking Capable? | Yes | Yes | No | Yes (X300 only) | - | - |
Multi-GPU is not the only differentiator though. Moving up from B350 to X370 will get you 6 USB 3.1 Gen 1 (USB 3.0) ports versus 2 on B350/A320/X300, two more PCI-E 2.0 lanes (8 versus 6), and two more SATA ports (6 total usable; 4 versus 2 coming from the chipset).
Note that X370, B350, and X300 all support CPU overclocking. Hopefully this helps you when trying to decide which AM4 motherboard to pair with your Ryzen CPU once the independent benchmarks are out. In short, if you must have SLI you are stuck ponying up for X370, but if you plan to only ever run a single GPU or tend to stick with AMD GPUs and CrossFire, B350 gets you most of the way to an X370 for a lot less money! You do not even have to give up any USB 3.1 Gen 2 ports, though you do limit your SATA drive options (it’s all about M.2 these days anyway, heh).
For those curious, looking around on Newegg I notice that most of the B350 motherboards have that second PCI-E 3.0 x16 slot and CrossFire support listed in their specifications and seem to average around $99. Meanwhile X370 starts at $140 and rockets up from there (up to $299!) depending on how much bling you are looking for!
Are you going for a motherboard with the B350 or X370 chipset? Will you be rocking multiple graphics cards?
- AMD Ryzen Pre-order Starts Today, Specs and Performance Revealed
- AMD Launching Ryzen 5 Six Core Processors Soon (Q2 2017)
- AMD Details AM4 Chipsets and Upcoming Motherboards
- AMD Officially Launches Bristol Ridge Processors And Zen-Ready AM4 Platform
- Biostar Launches X370GT7 Flagship Motherboard For Ryzen CPUs
- Gigabyte is Ryzen up to the challenge of their rivals
- Mid-Range Gigabyte Socket AM4 (B350 Chipset) Micro ATX Motherboard Pictured
- CES 2017: MSI Shows Off X370 XPower Gaming Titanium AM4 Motherboard
Subject: General Tech | February 26, 2017 - 05:13 AM | Scott Michaud
Tagged: valve, pc gaming
When VR started to take off, developers began to realize that audio is worth some attention. Historically, it’s been difficult to market, but that’s par for the course when it comes to VR technology, so I guess that’s no excuse to pass it up anymore. Now Valve, the owners of the leading VR platform on the PC, have just released an API for audio processing: the Steam Audio SDK.
Image Credit: Valve Software
First, I should mention that the SDK is not quite open. The GitHub page (and the source code ZIP in its releases tab) just contains the license (which is an EULA) and the readme. That said, Valve is under no obligation to provide this sort of technology in the open (even though it would be nice), and they are maintaining builds for Windows, Mac, Linux, and Android. It is currently available as a C API and a plug-in for Unity. Unreal Engine 4, FMOD, and WWISE plug-ins are “coming soon”.
As for the technology itself, it has quite a few interesting features. As you might expect, it supports HRTF out of the box, which modifies a sound call to appear like it’s coming from a defined direction. The algorithm is based on experimental data, rather than some actual, physical process.
More interesting is their sound propagation and occlusion calculations. They are claiming that this can be raycast, and static scenes can bake some of the work ahead-of-time, which will reduce runtime overhead. Unlike VRWorks Audio or TrueAudio Next, it looks like they’re doing it on the CPU, though. I’m guessing this means that it will mostly raycast to fade between versions of the audio, rather than summing up contributions from thousands of individual rays at runtime (or an equivalent algorithm, like voxel leakage).
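As a hedged sketch of that fade-between-versions approach: cast a handful of rays toward the source and use the fraction that arrive unobstructed to blend a clear render with a muffled one. The blend model below is my guess at the general technique, not Steam Audio's actual algorithm.

```c
/* Sketch of raycast occlusion fading: the fraction of rays that reach the
 * listener unobstructed drives a crossfade between "clear" and "muffled"
 * renders of the same source. A guess at the general technique, not
 * Steam Audio's actual algorithm. */
#include <stdio.h>

/* stand-in for a real scene intersection query */
static int ray_is_clear(int ray_index) { return ray_index % 3 != 0; }

int main(void)
{
    const int rays = 16;
    int clear = 0;
    for (int i = 0; i < rays; ++i)
        if (ray_is_clear(i))
            ++clear;

    float gain = (float)clear / rays;      /* 1.0 = fully audible */
    float direct_sample = 0.8f, muffled_sample = 0.2f;
    float out = gain * direct_sample + (1.0f - gain) * muffled_sample;

    printf("occlusion gain %.2f -> output sample %.3f\n", gain, out);
    return 0;
}
```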
Still, this is available now as a C API and a Unity Plug-in, because Valve really likes Unity lately.
Subject: Motherboards | February 24, 2017 - 10:30 PM | Jeremy Hellstrom
Tagged: aorus, gigabyte, ryzen, b350, x370
Gigabyte have led with five motherboards: two X370s under Aorus and three B350s with Gigabyte branding. They all share some traits in common, such as RGB Fusion with 16.8 million colours to choose from and an application to allow you to customize the light show to your own specifications. It supports control from your phone if you are so addicted to the glow you need to play with your system from across the room.
Smart Fan 5 indicates the presence of five headers for fans or pumps that will work with PWM and standard voltage fans, which can draw up to 12V at 2A. The boards also have six temperature sensors to give you feedback on the effectiveness of your cooling so you can modify it with the included application. Most models will offer Thunderbolt 3, Intel GbE NICs, and an ASMedia 2142 USB 3.1 controller which they claim can provide up to 16Gb/s. All will have high end audio solutions, often featuring a headphone pre-amp and high quality capacitors. There are a lot more features specific to each board, so make sure to click through to check out your favourites.
The Aorus boards, the GA-AX370-Gaming K7 and GA-AX370-GAMING 5, are very similar, but if you plan on playing with your BCLK it is the K7 which includes Gigabyte's Turbo B-Clock. The Gigabyte lineup includes the GA-AB350M, GA-AB350-Gaming, and GA-AB350-GAMING 3. The GA-AB350M is the only mATX Ryzen board of these five, for those looking to build a smaller system. For audiophiles, the full-size GAMING 3 includes an ALC1220 codec as opposed to the ALC887 used on the other two models.
You can expect to see reviews of these boards, which will offer far more details on performance and features, after they are released on March 2nd. Full PR under the break.
Subject: General Tech | February 24, 2017 - 08:58 PM | Jeremy Hellstrom
Tagged: cherry mx brown, input, mechanical keyboard, armato, AZiO
The Azio Armato is a big aluminium keyboard with five macro keys located on the lower left; on the upper right are media control buttons beside the large volume knob. The keyboard does come with a wrist rest, which attaches via a magnet so you can choose to remove it at will. The keyboard does not require software; lighting is controlled via keystrokes, and macros are recorded by pushing that large REC button and one of the macro keys, then up to 31 keys in sequence, and the REC button again to save the macro. You can see more of the Armato over at Benchmark Reviews.
"In any case Benchmark Reviews has in hand their Armato Mechanical Gaming Keyboard, model MGK-ARMATO-01. As a single-color backlit mechanical keyboard with Cherry MX switches, it might seem as if there’s little to distinguish it from the many other similar products available. But first appearances can be deceiving, as we’ll find out in this review."
Here is some more Tech News from around the web:
- Roccat Suora FX @ Kitguru
- Das Keyboard X40 Pro Gaming Mechanical Keyboard Review @ NikKTech
- Genius Scorpion K20 Keyboard Review: Fast-input and Wallet Friendly @ Modders-Inc
- Tesoro Gram Spectrum RGB Keyboard @ techPowerUp
- AZIO MGK L80 RGB Mechanical Gaming Keyboard Review @ Techgage