Delidded Ryzen 7 1700 Confirms AMD Is Using Solder With IHS On Ryzen Processors

Subject: Processors | March 1, 2017 - 09:17 PM |
Tagged: solder, Ryzen 1700, ryzen, overclocking, IHS, delid, amd

Professional extreme overclocker Roman "der8auer" Hartung from Germany recently managed to successfully delid his AMD Ryzen 7 1700 processor and confirmed that AMD is, in fact, using solder as its thermal interface material of choice between the Ryzen die and the IHS (integrated heat spreader). The confirmation that AMD is using solder is promising news for enthusiasts eager to overclock the new processors and see just how far they can push them on air and water cooling.


Image credit: Roman Hartung. Additional high resolution photos are available here.

In a video on his YouTube channel, der8auer ("the farmer") shows the steps involved in delidding the Ryzen 7 1700, which involve razor blades, a heating element to get the IHS hot enough to melt the indium solder (~170°C on the block, with the indium melting around 157°C), and a whole lot of courage. After using the razor blades to cut through the glue around the edges, he heated the IHS enough to start melting the solder, and after a cringe-worthy cracking sound he was able to lift the package away from the IHS with the die and on-package components intact!

He does note that Ryzen's use of a PGA package, rather than the LGA approach Intel has moved to, makes the CPU a bit harder to handle, as the pins are on the CPU rather than in the socket and are easily bent. Compared to the delidding process and the possibility of cracking the die or ripping off some components and killing the $329 CPU, though, bent pins are nothing and can usually be bent back heh. He reportedly went through two Ryzen CPUs before getting a successful delid on the third attempt.

It seems that AMD is using two small pads of indium solder along with some gold plating on the inside of the IHS to facilitate heat transfer and allow the solder to mate with the IHS. Because AMD is using what appears to be a high quality solder TIM, delidding and replacing the TIM does not seem necessary at all; however, Roman "der8auer" Hartung speculates that direct die cooling could work out very well for those enthusiasts brave enough to try it, since the PGA socket, unlike an LGA socket, does not rely on the cooler putting high amounts of pressure on the CPU to hold it in place.

If you are interested in seeing the overclocking benefits of de-lidding and direct die cooling a Ryzen CPU, keep an eye on his YouTube channel for a video over the weekend detailing his testing using a Ryzen 7 1800X.

I am really looking forward to seeing how far enthusiasts are able to push Ryzen (especially on water), and maybe we can convince Morry to delid a Ryzen CPU!

Happy Overclocking!


Source: der8auer

GDC 2017: The Khronos Group (re-)Announces OpenXR

Subject: General Tech | March 1, 2017 - 08:12 PM |
Tagged: VR, pc gaming, openxr, Khronos

While the Vulkan update headlines the Khronos Group’s presence at GDC 2017, they also re-announced their VR initiative, now called OpenXR. This specification wraps around the individual SDKs, outlining the functionality that is to be exposed to applications and devices. If a device implements the device layer, it will immediately support every application that uses the standard, and vice-versa.


OpenVR was donated by Valve, leading to OpenXR...
... because an X is really just a reflected V, right?

As with OpenGL and Vulkan, individual vendors will still be allowed to implement their own functionality, which I’m hoping will mostly be exposed through extensions. The goal is to ensure that users can, at a minimum, enjoy the base experience of any title on any device.

They are aiming for 2018, but interested parties should contribute now to influence the initial release.

Report: AMD to Launch Radeon RX 500 Series GPUs in April

Subject: Graphics Cards | March 1, 2017 - 05:04 PM |
Tagged: video card, RX 580, RX 570, RX 560, RX 550, rx 480, rumor, report, rebrand, radeon, graphics, gpu, amd

According to a report from VideoCardz.com, we can expect AMD Radeon RX 500-series graphics cards next month, with an April 4th launch of the RX 580 and RX 570 and a subsequent RX 560/550 launch on April 11. The bad news? According to the report, "all cards, except RX 550, are most likely rebranded from Radeon RX 400 series".


AMD Polaris 10 GPU (Image credit: Heise Online)

Until official confirmation of the specs arrives, this is still speculative; however, if Vega is not ready for an April launch and AMD will indeed be refreshing their Radeon lineup, a speed bump/rebrand in the style of the R9 300 series is not out of the realm of possibility. VideoCardz offers (unconfirmed, at this point) specs of the upcoming RX 500-series cards, with RX 400 numbers for comparison:


Chart credit: VideoCardz.com

The first chart shows the increased GPU boost clock of ~1340 MHz for the rumored RX 580, with the existing RX 480 clocked at 1266 MHz. Both would be Polaris 10 GPUs with otherwise identical specs. The same largely holds for the rumored specs of the RX 570, though this GPU would presumably ship with faster memory clocks as well. On the RX 560 side, however, the Polaris 11-powered replacement for the RX 460 might be based on the 1024-core variant we have seen in the Chinese market.


Chart credit: VideoCardz.com

No specifics on the RX 550 are yet known, which VideoCardz says "is most likely equipped with Polaris 12, a new low-end GPU". These rumors come via heise.de (German language), who state that those "hoping for a Vega card will be disappointed – the cards are intended to be rebrands with known GPUs". We will have to wait until next month to know for sure, but even if this is the case, expect faster clocks and better performance for the same money.

Source: VideoCardz

Nintendo Switches out spinach green and almost black for 720p

Subject: General Tech | March 1, 2017 - 03:46 PM |
Tagged: Tegra X1, Nintendo Switch, Joy-Con, gaming

The Nintendo Switch has arrived for those who feel that mobile gaming is lacking in analog joysticks and buttons. The product sits in an interesting place: the 720p screen is nowhere near the resolution of modern phones, though those phones lack a dock which triggers an overclocked mode to send 1080p to a TV. Nintendo's programmers also have far more resources than most mobile app developers and can incorporate tricks which a phone simply will not be able to replicate. Ars Technica took the Switch, its two Joy-Cons, and the limited number of released games on a tour to see just how well Nintendo did on their new portable gaming system. There are some improvements that could be made, but the Joy-Cons do sound more interesting than the Game Boy Advance.


"With the Switch, Nintendo seems to be betting that the continued drum beat of Moore's Law and miniaturization has made that dichotomy moot. The Switch is an attempt to drag the portable gaming market kicking and screaming to a point where it's literally indistinguishable from the experience you'd get playing on a 1080p HDTV."


Source: Ars Technica

MWC: BlackBerry KEYone Is Solid Mid-Range Smartphone Priced Too High

Subject: Mobile | March 1, 2017 - 02:26 PM |
Tagged: Snapdragon 625, opinion, MWC, keyone, enterprise, Cortex A53, blackberry, Android 7.1, Android

February is quite the busy month with GDC, MWC, and a flurry of technology announcements coming out all around the same time! One of the more surprising announcements from Mobile World Congress in Barcelona came from BlackBerry in the form of a new mid-range smartphone it is calling the KEYone. The KEYone is an Android 7.1 smartphone actually built by TCL with an aluminum frame, "soft touch" plastic back, curved edges, and (in traditional CrackBerry fashion) a full physical QWERTY keyboard!


The black and silver candy bar style KEYone (previously known as "Mercury") measures 5.78" x 2.85" x 0.37" and weighs 0.39 pounds. The left, right, and bottom edges are rounded while the top edge is flat. There are two bottom-firing stereo speakers surrounding a USB Type-C port (Type-C 1.0 with USB OTG), a headphone jack up top, and volume, power, and convenience key buttons on the right side. The front of the device, which BlackBerry has designed to be comfortable to use one-handed, features a 4.5" 1620 x 1080 LCD touchscreen (434 PPI) protected by Gorilla Glass 4, a front-facing camera with LED flash, and a large physical keyboard with straight rows of keys that have a traditional BlackBerry feel. The keyboard, in addition to having physical buttons, supports touch gestures such as swiping, and the spacebar has a fingerprint reader that early hands-on reports indicate works rather well for quickly unlocking the phone. Further, every physical key can be programmed as a hot key to open any application with a long press (B for browser, E for email, etc.).
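
For the curious, that pixel density figure falls right out of the resolution and panel size. A quick back-of-the-envelope check in Python (the tiny discrepancy with BlackBerry's quoted number is just rounding):

```python
import math

# PPI = diagonal resolution in pixels / diagonal size in inches
w_px, h_px, diag_in = 1620, 1080, 4.5
ppi = math.hypot(w_px, h_px) / diag_in
print(round(ppi))  # 433, in line with the ~434 PPI BlackBerry quotes
```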


On the camera front, BlackBerry is using the same sensor found in the Google Pixel: the Sony IMX378. The 12MP f/2.0 rear camera features a dual LED flash and phase-detect autofocus, and it is joined by a front-facing 8MP camera. Both cameras can record 1080p30 video and support HDR along with software features like face detection. Android Central reports that the camera software is rather good (it even has a pro mode) and that the camera is snappy at taking photos.

Internally, BlackBerry has opted to go with squarely mid-range hardware, which is disappointing but not the end of the world. Specifically, the KEYone is powered by a Snapdragon 625 (MSM8953) with eight ARM Cortex A53 cores clocked at 2GHz and an Adreno 506 GPU, paired with 3GB of RAM and 32GB of internal storage. Wireless support includes dual-band 802.11ac, FM, Bluetooth 4.2, GPS, NFC, and GSM/HSPA/LTE cellular radios. The smartphone uses a 3,505 mAh battery that is not user removable but at least supports Quick Charge 3.0, which can reportedly charge the battery to 50% in 36 minutes. Storage can be expanded via MicroSD cards. The smartphone runs Android 7.1.1 with some BlackBerry UI tweaks but is otherwise fairly stock. Under the hood, however, BlackBerry has hardened the OS and includes its DTEK security software, along with a promise of monthly updates.

Not bad, right? Looking at the specifications and reading/watching the various hands-on reports coming out, it really looks like BlackBerry (finally) has a decent piece of hardware for enterprise customers, niche markets (lawyers, healthcare, etc.), and anyone craving a physical keyboard in a modern phone. At first glance the BlackBerry KEYone hits all the key marks of a competitive Android smartphone... except for its $549 price tag. The KEYone is expected to launch in April.


No scroll ball? Blasphemy! (hehe)

Unfortunately, that $549 price is not a typo, and it is what kills the phone even for a CrackBerry addict like myself. After some reflection and discussion with our intrepid smartphone guru Sebastian, I feel as though BlackBerry would have a competitive smartphone on its hands at $399, but at $549 even business IT departments are going to balk, much less consumers (especially as many businesses embrace BYOD culture or have grown accustomed to pricing out and giving everyone whatever basic Android phone or iPhone they can fit into the budget).

While similarly specced Snapdragon 625 smartphones are going for around $300 (e.g. the ASUS ZenFone 3 at $265.98), there is some precedent for higher-priced MSM8953-based smartphones, such as the $449 Moto Z Play. There is some inherent cost in integrating a physical keyboard, and BlackBerry has also hardened the Android 7.1.1 OS, which I can see them charging a premium for and which business customers (or anyone who does a lot of writing on the go and values security) can appreciate. It seems like BlackBerry (and hardware partner TCL) has finally learned how to compete on hardware design in this modern Android-dominated market; now they must learn how to compete on price, especially as more and more Americans buy unlocked, off-contract smartphones! I think the KEYone is a refreshing bit of hardware from BlackBerry (I was not a fan of the Priv design), and I would like to see it do well and give the major players (Apple, Samsung, LG, ASUS, Huawei, etc.) some healthy competition with its twist of a focus on better security, but for that to happen I think the BlackBerry KEYone needs to be a bit cheaper.

What are your thoughts on the KEYone and the return of the physical keyboard? Am I onto something or simply off my Moto Rokr on this?

Source: BlackBerry

Tearing open a Tesla; a look at the Model S battery

Subject: General Tech | March 1, 2017 - 01:23 PM |
Tagged: tesla motors, battery

Hack a Day posted a video of a teardown of the battery that powers the Tesla Model S, for those curious about how it is put together. This is not recommended for you to try at home: not only are there a huge number of bolts and Torx screws, but each apparently has a specific torque spec which must be adhered to. Inside are 16 battery modules, each of which contains 444 cells and produces 24V and roughly 5.3 kWh. Do not test the charge on these batteries with your tongue! Click on through to watch the video.
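
For a quick sanity check, those per-module figures multiply out neatly (a minimal Python sketch using only the numbers above):

```python
# Back-of-the-envelope math for the Model S pack described above
modules = 16
cells_per_module = 444
module_energy_kwh = 5.3

print(modules * cells_per_module)   # 7,104 cells in total
print(modules * module_energy_kwh)  # ~84.8 kWh, i.e. the ~85 kWh pack Tesla sells
```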


"Tesla famously build their battery packs from standard 18650 lithium-ion cells, but it’s safe to say that the pack in the Model S has little in common with your laptop battery. Fortunately for those of a curious nature, [Jehu Garcia] has posted a video showing the folks at EV West tearing down a Model S pack from a scrap car, so we can follow them through its construction."


Source: Hack a Day

NVIDIA Announces GeForce GTX 1080 Ti 11GB Graphics Card, $699, Available Next Week

Subject: Graphics Cards | February 28, 2017 - 10:59 PM |
Tagged: pascal, nvidia, gtx 1080 ti, gp102, geforce

Tonight at a GDC party hosted by CEO Jen-Hsun Huang, NVIDIA announced the GeForce GTX 1080 Ti graphics card, coming next week for $699. Let’s dive right into the specifications!


| | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GP102 | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT |
| GPU Cores | 3584 | 3584 | 2560 | 2816 | 3072 | 2048 | 4096 | 3584 | 4096 |
| Base Clock | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz |
| Boost Clock | 1600 MHz | 1480 MHz | 1733 MHz | 1076 MHz | 1089 MHz | 1216 MHz | - | - | - |
| Texture Units | 224 | 224 | 160 | 176 | 192 | 128 | 256 | 224 | 256 |
| ROP Units | 88 | 96 | 64 | 96 | 96 | 64 | 64 | 64 | 64 |
| Memory | 11GB | 12GB | 8GB | 6GB | 12GB | 4GB | 4GB | 4GB | 4GB |
| Memory Clock | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz |
| Memory Interface | 352-bit G5X | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) |
| Memory Bandwidth | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s |
| TDP | 250 watts | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts |
| Peak Compute | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS |
| Transistor Count | 12.0B | 12.0B | 7.2B | 8.0B | 8.0B | 5.2B | 8.9B | 8.9B | 8.9B |
| Process Tech | 16nm | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $699 | $1,200 | $599 | $649 | $999 | $499 | $649 | $549 | $499 |

The GTX 1080 Ti looks a whole lot like the Titan X launched in August of last year. Based on the 12B-transistor GP102 chip, the new GTX 1080 Ti will have 3,584 CUDA cores with a 1.60 GHz Boost clock. That gives it the same processor count as the Titan X but with slightly higher clock speeds, which should make the new GTX 1080 Ti faster by at least a few percentage points, including a roughly 4.5% edge in base-clock compute capability. It has 28 SMs, 28 geometry units, and 224 texture units.
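
The peak compute numbers in the table above follow from a simple rule of thumb: each CUDA core can retire one fused multiply-add (two FLOPS) per clock, and NVIDIA rates peak compute at the base clock. A small Python check:

```python
# Peak FP32 throughput = 2 FLOPS (one FMA) per core per clock
def peak_tflops(cuda_cores: int, base_clock_mhz: float) -> float:
    return 2 * cuda_cores * base_clock_mhz * 1e6 / 1e12

print(peak_tflops(3584, 1480))  # GTX 1080 Ti: ~10.6 TFLOPS
print(peak_tflops(3584, 1417))  # Titan X (Pascal): ~10.16 TFLOPS (listed as 10.1)
print(peak_tflops(2560, 1607))  # GTX 1080: ~8.2 TFLOPS
```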


Interestingly, the memory system on the GTX 1080 Ti gets adjusted – NVIDIA has disabled a single 32-bit memory controller to give the card a 352-bit wide bus and an odd-sounding 11GB memory capacity. The ROP count also drops to 88 units. Speaking of 11, the memory clock on the G5X implementation on the GTX 1080 Ti now runs at 11 Gbps, a boost available to NVIDIA thanks to a chip revision from Micron and improvements to equalization and reverse signal distortion.
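
Those numbers check out: memory bandwidth is just the bus width (in bytes) multiplied by the per-pin data rate, which is why cutting one 32-bit controller but raising the data rate to 11 Gbps still nets a small bandwidth win over the Titan X:

```python
# Bandwidth (GB/s) = bus width in bytes * effective data rate (Gbps per pin)
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(352, 11.0))  # GTX 1080 Ti: 484.0 GB/s
print(bandwidth_gb_s(384, 10.0))  # Titan X (Pascal): 480.0 GB/s
```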


The TDP of the new part is 250 watts, identical to the GP102-based Titan X and well above the GTX 1080's 180 watts. The cooler has been improved compared to the GTX 1080, offering quieter fan speeds and lower temperatures when operating at the same power envelope.


Performance estimates from NVIDIA put the GTX 1080 Ti about 35% faster than the GTX 1080, the largest "kicker" performance increase we have seen from a flagship Ti launch.


Pricing is going to be set at $699 so don't expect to find this in any budget builds. But for the top performing GeForce card on the market, it's what we expect. It should be on virtual shelves starting next week.

(Side note: with the GTX 1080 getting a $100 price drop tonight, I think we'll find this new lineup very compelling to enthusiasts.)



NVIDIA did finally detail its tiled caching rendering technique. We'll be diving more into that in a separate article with a little more time for research.

One more thing…

In another interesting move, NVIDIA is going to be offering “overclocked” versions of the GTX 1080 and GTX 1060 with +1 Gbps memory speeds. Partners will be offering them with some undisclosed price premium.


I don’t know how much performance this will give us but it’s clear that NVIDIA is preparing its lineup for the upcoming AMD Vega release.


We’ll have more news from NVIDIA and GDC as it comes!

Source: NVIDIA

GDC: NVIDIA Announces GTX 1080 Price Drop to $499

Subject: Graphics Cards | February 28, 2017 - 10:55 PM |
Tagged: pascal, nvidia, GTX 1080, GDC

Update Feb 28 @ 10:03pm: It's official – NVIDIA launches the $699 GTX 1080 Ti.

NVIDIA is hosting a "Gaming Celebration" live event during GDC 2017 to talk PC gaming and possibly launch new hardware (if rumors are true!). During the event, NVIDIA CEO Jen-Hsun Huang made a major announcement regarding the company's top-end GTX 1080 graphics card: a price drop to $499, effective immediately.


The NVIDIA GTX 1080 is a Pascal-based graphics card with 2560 CUDA cores paired with 8GB of GDDR5X memory. Graphics cards based on this GP104 GPU are currently selling for around $580 to $700 (most around $650, give or take), with the "Founders Edition" carrying a $699 MSRP. The $499 price teased at the live stream represents a significant drop compared to what the graphics cards are going for now. NVIDIA did not specify whether the new $499 MSRP is the new Founders Edition price or an average price that includes partner cards as well, but even if it only applies to the reference cards, the partners would have to adjust their prices downwards accordingly to compete.
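
For a sense of scale, here is the cut relative to the street prices above (a rough Python one-liner using the ~$650 typical price, not any particular SKU):

```python
street_price, new_msrp = 650, 499
print(f"{(street_price - new_msrp) / street_price:.0%}")  # ~23% off a typical card
```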

I suspect that NVIDIA is making such a bold move to make room in their lineup for a new product (the long-rumored GTX 1080 Ti, perhaps?) as well as a pre-emptive strike against AMD and their Radeon RX Vega products. This move may also be good news for GTX 1070 pricing, as those cards may see price drops of their own to make room for cheaper GTX 1080 partner cards that come in below the $499 price point.

If you have been considering buying a new graphics card, NVIDIA has sweetened the pot a bit, especially if you had already been eyeing a GTX 1080. (Note that while the price drop is said to be effective immediately, at the time of writing Amazon was still showing "normal"/typical prices for the cards. Enthusiasts might have to wait a few hours or days for the retailers to catch up and update their sites.)

This makes me a bit more excited to see what AMD will have to offer with Vega as well as the likelihood of a GTX 1080 Ti launch happening sooner rather than later!

Source: NVIDIA

Overclockers Push Ryzen 7 1800X to 5.2 GHz On LN2, Break Cinebench Record

Subject: Processors | February 28, 2017 - 09:06 PM |
Tagged: Zen, Ryzen 1800X, ryzen, overclocking, LN2, Cinebench, amd

During AMD’s Ryzen launch event a team of professional overclockers took the stage to see just how far they could push the top Zen-based processor. Using a bit of LN2 (liquid nitrogen) and a lot of voltage, the overclocking team was able to hit an impressive 5.20 GHz with all eight cores (16 threads) enabled!


In addition to the exotic LN2 cooling, the Ryzen 7 1800X needed 1.875 volts to hit 5.20 GHz. That 5.20 GHz was achieved by setting the base clock to 137.78 MHz and the multiplier to 37.75. Using these settings, the chip was even stable enough to benchmark, scoring 2,363 on Cinebench R15's multi-threaded test.
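
The headline clock is simply the base clock multiplied by the multiplier, and the gains discussed in the next paragraph fall out of the stock figures just as easily:

```python
bclk_mhz, multiplier = 137.78, 37.75
oc_mhz = bclk_mhz * multiplier
print(oc_mhz)                            # ~5201 MHz, i.e. 5.20 GHz
print(f"{oc_mhz / 1000 / 4.0 - 1:.0%}")  # ~30% above the stock 4 GHz boost
print(f"{2363 / 1619 - 1:.0%}")          # ~46% higher Cinebench score
```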

According to information from AMD, a stock Ryzen 7 1800X comes clocked at 3.6 GHz base and up to 4 GHz boost (XFR can go higher depending on the HSF) and is able to score 1,619 in Cinebench. The 30% overclock to 5.20 GHz netted the overclockers an approximately 45% higher Cinebench score.

Further, later in the overclocking event, they managed to break the Cinebench world record of 2,445 points by achieving a score of 2,449 (it is not clear what clock speed this was at). Not bad for a brand-new processor!


The overclocking results are certainly impressive, and suggest that Ryzen may be a decent overclocker so long as you have the cooling setup to get it there (the amount of voltage needed is a bit worrying though heh). Interestingly, HWBot shows a Core i7 6900K (also 8C/16T) hitting 5.22 GHz and scoring 2,146 in Cinebench R15. That Ryzen can hit similar numbers with all cores and threads turned on is promising.

I am looking forward to seeing what people are able to hit on air and water cooling, and whether XFR will work as intended and get most of the way to a manual overclock without the effort of manually overclocking. I am also curious how the power phases and overclocking performance will stack up on motherboards using the B350 versus X370 chipsets. With the eight-core chips able to hit 5.2 GHz, I expect the upcoming six-core Ryzen 5 and four-core Ryzen 3 processors to clock even higher, which would certainly help gaming performance for budget builds!

Austin Evans was able to get video of the overclocking event which you can watch here (Vimeo).


Source: Hexus

AMD Unveils Next-Generation GPU Branding, Details - Radeon RX Vega

Subject: General Tech | February 28, 2017 - 05:46 PM |
Tagged: amd, Vega, radeon rx vega, radeon, gdc 2017, capsaicin, rtg, HBCC, FP16

Today at the AMD Capsaicin & Cream event at GDC 2017, Raja Koduri, Senior VP of the Radeon Technologies Group, officially revealed the branding that AMD will use for their next-generation GPU products.

While we usually see final product branding deviate from architectural code names (e.g. Polaris becoming the Radeon RX 460, 470, and 480), AMD has this time decided to embrace the code name in the retail naming scheme for upcoming graphics cards featuring the new GPU – Radeon RX Vega.


However, we didn't just get a name for Vega-based GPUs. Raja also went into some further detail and showed some examples of technologies found in Vega.

First off is the High-Bandwidth Cache Controller found in Vega products. We covered this technology during our Vega architecture preview last month at CES, but today we finally saw a demo of it in action.


Essentially, the High-Bandwidth Cache Controller (HBCC) allows Vega GPUs to address all available memory in the system, including things like NVMe SSDs, system DRAM, and network storage. AMD claims that by using the already-fast memory available elsewhere in your PC to augment onboard GPU memory (such as HBM2), they will be able to offer less expensive graphics cards that ultimately offer access to much more memory than current graphics cards.


The demo that they showed on stage featured Deus Ex: Mankind Divided running on a system with a Ryzen CPU and a Vega GPU limited to 2GB of VRAM. By turning HBCC on, they were able to show a 50% increase in average FPS and a 100% increase in minimum FPS.

While we probably won't actually see a Vega product with such a small VRAM implementation, it was impressive to see how HBCC was able to dramatically improve the playability of a 2GB GPU in a game that has no special optimizations to take advantage of the High-Bandwidth Cache.

The other impressive demo running on Vega at the Capsaicin & Cream event centered around what AMD is calling Rapid Packed Math.

Rapid Packed Math is an implementation of something we have been hearing and theorizing a lot about lately: the use of FP16 shaders for some graphics effects in games. By using half-precision FP16 shaders instead of the current standard FP32 shaders, developers are able to get more performance out of the same GPU cores. Specifically, Rapid Packed Math allows developers to run half-precision FP16 shaders at exactly 2X the rate of traditional single-precision FP32 shaders.


While the lower precision of FP16 shaders won't be appropriate for all GPU effects, AMD showed a comparison of their TressFX hair rendering technology running on both single- and half-precision shaders. As you might expect, AMD was able to render twice the number of hair strands per second, making for a much more fluid experience.
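
As for why FP16 is not appropriate everywhere: half precision carries only a 10-bit mantissa, roughly three decimal digits, so values round off aggressively. A quick CPU-side numpy illustration (purely to show the precision tradeoff, not the GPU speedup):

```python
import numpy as np

# FP32 has enough mantissa bits to keep the fraction; FP16 does not
print(np.float32(1000.0) + np.float32(0.4))  # 1000.4
print(np.float16(1000.0) + np.float16(0.4))  # 1000.5 - the 0.4 rounds away
```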


Just like we saw in the lead-up to the Polaris GPU launch, AMD seems to be releasing a steady stream of information on Vega. Now that we have the official branding for Vega, we eagerly await getting our hands on these new high-end GPUs from AMD.

 

The X370s aren't here yet so take a gander at the fancy Z270 from GIGABYTE

Subject: Motherboards | February 28, 2017 - 04:24 PM |
Tagged: intel z270, Aorus Z270X Gaming 9, gigabyte

What an interesting time it will be with Intel slinging Z270s at the same time AMD's X370 arrives on the scene; there is no possible way some people could get confused. It will also make the next generation of board names interesting as the two companies fight for numbering rights. GIGABYTE's Aorus Z270X Gaming 9 comes with an impressive price tag of $500, so it will be interesting to see if [H]ard|OCP finds the board's feature set worthy of that investment. The four x16 PCIe 3.0 slots will support four GPUs simultaneously, and there are a pair of M.2 slots as well as U.2 connectivity, to say nothing of the onboard Sound Blaster audio. Head on over to read through the full review.


"GIGABYTE’s Z270X Gaming 9 is one of the most feature rich and ultra-high end offerings you’ll see for the Z270 chipset this year. We were super fond of last year’s similar offering and as a result, the Z270X Gaming 9 has very large shoes to fill. With its massive feature set and overclocking prowess, it is poised to be one of the best motherboards of the year."


Source: [H]ard|OCP

The Microsoft Store's unintentional cash back offer

Subject: General Tech | February 28, 2017 - 03:48 PM |
Tagged: microsoft, oops, Lawsuit

If you purchased anything from the Microsoft Store between November 2013 and February 24 of this year and live in the USA, you could be eligible for up to $100 in cash damages. It seems that the receipts Microsoft provided displayed more than half of customers' credit card digits, in violation of a 2003 law which states that no more than the last five digits may be shown on a receipt. Now that the judgment against Microsoft is in, the proposed settlement would have Microsoft set aside $1,194,696 US for customers who were affected by this issue. The settlement still needs to be approved by the judge, so you cannot claim your money immediately; keep an eye out for more news. The Register have posted links to the original lawsuit as well as the judgment right here.
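
For reference, the rule at issue boils down to truncation: print no more than the last five digits of the card number. A minimal sketch of that kind of masking (a hypothetical helper, to be clear, not Microsoft's actual receipt code):

```python
# Mask a card number so that only the last few digits survive on a receipt
def mask_pan(pan: str, visible: int = 4) -> str:
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - visible) + "".join(digits[-visible:])

print(mask_pan("4111 1111 1111 1234"))  # ************1234
```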


"On Friday, the Redmond giant agreed to give up roughly seven minutes of its quarterly revenue to a gaggle of Microsoft Store customers who claimed that their receipts displayed more of their payment card numbers than legally allowed."


Source: The Register

SDXC SD cards come at a big premium; too bad we can't slide an M.2 SSD into our cameras

Subject: Storage | February 27, 2017 - 05:23 PM |
Tagged: sdxc, sd card, patriot, lx series

You may recall a while back Allyn put together an article detailing the new types of SD cards hitting the market which will support 4K recording in cameras. Modders Inc just wrapped up a review of one of these cards, Patriot's 256GB LX Series SDXC card, which includes an adapter for those who need it. The price certainly implies it is new technology ($200 for 256GB of storage is enough to make anyone pause), so the question becomes why one would pay such a premium. Their benchmarks offer insight into this, with 83MB/s writes and 96MB/s reads in both ATTO and CrystalDiskMark proving that this is a far cry from the performance of older SD cards and worthy of that brand new ultra-high-definition camera you just picked up. Let us hope the prices plummet as they did with previous generations of cards.
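
To put those numbers in perspective, a couple of quick calculations from the figures above:

```python
price_usd, capacity_gb, write_mb_s = 200, 256, 83
print(price_usd / capacity_gb)               # ~$0.78 per GB
print(capacity_gb * 1000 / write_mb_s / 60)  # ~51 minutes to fill the card
```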


"Much like Mary Poppins bag of wonders, Patriot too has a method of fitting a substantial amount of goodness in a small space with the release of their 256GB LX Series SDXC class 10 memory card. Featuring an impressive 256GB of storage and boasting this as an “ultra high speed” card for QHD video production and high resolution photos."


Source: Modders Inc

Futuremark at GDC and MWC

Subject: General Tech, Graphics Cards | February 27, 2017 - 03:39 PM |
Tagged: MWC, GDC, VRMark, Servermark, OptoFidelity, cyan room, benchmark

Futuremark are showing off new benchmarks at GDC and MWC, the two conferences both happening this week. We will have quite a bit of coverage as we try to keep up with the simultaneous news releases and presentations.


First up is a new benchmark in their recently released VRMark suite: the DX12-based Cyan Room, which sits between the two existing tests. The Orange Room tests whether your system is capable of providing an acceptable VR experience or falls somewhat short of the minimum requirements, while the Blue Room shows off what a system that exceeds the recommended specs can manage. The Cyan Room will be for those who know their system can handle most VR and need to dial in their settings. If you don't have the test suite, Humble Bundle has a great deal on it and several other tools, if you act quickly.


Next up is a new suite to test the performance and capabilities of Google Daydream, Google Cardboard, and Samsung Gear VR devices. There is more than just performance to test when you are using your phone to view VR content, such as avoiding setting your eyeholes on fire. The tests will help you determine just how long your device can run VR content before overheating becomes an issue and interferes with performance, as well as helping you gauge your battery life.


VR latency testing is next on the list of announcements, and it is very important when it comes to VR, as high or unstable latency is the reason some users need to add a bucket to their list of VR essentials. Futuremark have partnered with OptoFidelity to produce the VR Multimeter HMD for hardware-based testing. This allows you, and hopefully soon PCPer as well, to test motion-to-photon latency, display persistence, and frame jitter, as well as audio-to-video synchronization and motion-to-audio latency, all of which could lead to a bad time.


Last up is the brand new Servermark, which tests the performance you can expect out of virtual servers, media servers, and other common tasks. The VDI test lets you determine whether a virtual machine has been provisioned at a level commensurate with its assigned task, so you can adjust it as required. The Media Transcode portion lets you determine the maximum number of concurrent streams, as well as the maximum quality of those streams, which your server can handle; very nice for those hosting media for an audience.

Expect to hear more as we see the new benchmarks in action.

Source: Futuremark

If you can’t open it, you don’t own it - Macchina opens up your car's hardware

Subject: General Tech | February 27, 2017 - 12:56 PM |
Tagged: M2, Arduino Due, macchina, Kickstarter, open source, DIY

There is a Kickstarter out there for all you car enthusiasts and owners: the Arduino Due-based Macchina M2, which allows you to diagnose and change how your car functions. The developers originally built the device during a personal project to convert a Ford Contour into an electric car, which required serious reprogramming of sensors and other hardware in the car. They realized that their prototype could be enhanced to allow users to connect to the hardware of their own cars to monitor performance, diagnose issues, or even modify behaviour. Slashdot has the links and their trademarked reasonable discourse for those interested; if you already have the hardware you can get the M2 interface for $45, with $79 or more getting you the hardware and accessories.


"Challenging "the closed, unpublished nature of modern-day car computers," their M2 device ships with protocols and libraries "to work with any car that isn't older than Google." With catchy slogans like "root your ride" and "the future is open," they're hoping to build a car-hacking developer community, and they're already touting the involvement of Craig Smith, the author of the Car Hacker's Handbook from No Starch Press."


Source: Slashdot

Qualcomm Announces First 3GPP 5G NR Connection, X50 5G NR Modem

Subject: General Tech, Mobile | February 27, 2017 - 11:12 AM |
Tagged: x50, Sub-6 Ghz, qualcomm, OFDM, NR, New Radio, MWC, multi-mode, modem, mmWave, LTE, 5G, 3GPP

Qualcomm has announced their first successful 5G New Radio (NR) connection using their prototype sub-6 GHz system. This announcement was followed by today's news of Qualcomm's collaboration with Ericsson and Vodafone to trial 5G NR in the second half of 2017, as we approach the realization of 5G. New Radio is expected to become the standard for 5G going forward as 3GPP moves to finalize standards with Release 15.

"5G NR will make the best use of a wide range of spectrum bands, and utilizing spectrum bands below 6 GHz is critical for achieving ubiquitous coverage and capacity to address the large number of envisioned 5G use cases. Qualcomm Technologies’ sub-6 GHz 5G NR prototype, which was announced and first showcased in June 2016, consists of both base stations and user equipment (UE) and serves as a testbed for verifying 5G NR capabilities in bands below 6 GHz."


The Qualcomm Sub-6 GHz 5G NR prototype (Image credit: Qualcomm)

Qualcomm first showed their sub-6 GHz prototype this past summer, and it will be on display this week at MWC. The company states that the system is designed to demonstrate how 5G NR "can be utilized to efficiently achieve multi-gigabit-per-second data rates at significantly lower latency than today’s 4G LTE networks". New Radio, or NR, is a complex topic, as it relates to a new OFDM-based wireless standard. OFDM refers to "a digital multi-carrier modulation method" in which "a large number of closely spaced orthogonal sub-carrier signals are used to carry data on several parallel data streams or channels". With 3GPP adopting this standard going forward, the "NR" name could stick, just as "LTE" (Long Term Evolution) caught on to describe the 4G wireless standard.
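
To make that OFDM description concrete, here is a minimal numpy sketch of the core idea: data symbols are placed on orthogonal subcarriers with an inverse FFT at the transmitter and recovered with a forward FFT at the receiver. (A real 5G NR implementation adds cyclic prefixes, channel coding, and much more; this is just the modulation round trip.)

```python
import numpy as np

# Minimal OFDM round trip: QPSK symbols on 64 orthogonal subcarriers
n_subcarriers = 64
bits = np.random.randint(0, 2, (n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)  # QPSK mapping

tx = np.fft.ifft(symbols)  # transmitter: one OFDM symbol in the time domain
rx = np.fft.fft(tx)        # receiver: back to the subcarrier domain

recovered = np.stack([rx.real > 0, rx.imag > 0], axis=1).astype(int)
print(np.array_equal(bits, recovered))  # True: every bit recovered
```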

Along with this 5G NR news comes the announcement of the expansion of Qualcomm's X50 modem family, first announced in October, "to include 5G New Radio (NR) multi-mode chipset solutions compliant with the 3GPP-based 5G NR global system", according to Qualcomm. This 'multi-mode' solution provides full 4G/5G compatibility with "2G/3G/4G/5G functionality in a single chip", with the first commercial devices expected in 2019.


"The new members of the Snapdragon X50 5G modem family are designed to support multi-mode 2G/3G/4G/5G functionality in a single chip, providing simultaneous connectivity across both 4G and 5G networks for robust mobility performance. The single chip solution also supports integrated Gigabit LTE capability, which has been pioneered by Qualcomm Technologies, and is an essential pillar for the 5G mobile experience as the high-speed coverage layer that co-exists and interworks with nascent 5G networks. This set of advanced multimode capabilities is designed to provide seamless Gigabit connectivity – a key requirement for next generation, premium smartphones and mobile computing devices."


Source: Qualcomm

ZeniMax Seeks an Injunction Against Oculus VR

Subject: General Tech | February 27, 2017 - 07:01 AM |
Tagged: zenimax, Oculus

As far as I know, it’s fairly common to seek injunctions during legal fights over intellectual property, so I’m not sure how surprising this should be. Still, after the $500 million USD judgment against Oculus, ZeniMax has indeed filed for a court order to, according to UploadVR, block the usage of Oculus PC software, Oculus Mobile software, and the plug-ins for Unity and Unreal Engine. They also demand, as usual, that Oculus delete all copies of the infringing code, along with a few other stipulations.


I should stress that this is just a filing. It would need to be accepted for it to have any weight.

The timing is quite disruptive to Oculus, too, even if by total coincidence. Epic Games is about to release their flagship, Oculus-exclusive title, Robo Recall, which was intended to be released for free to those who have Oculus Touch controllers. If the injunction succeeds, and that’s way more "if" than "when" at this point, then that could sting for whoever gets stuck with the game’s invoice, which (I assume) would be Oculus.

Personally, I’m not quite sure how far this will go. Based on my memory of the jury decision, ZeniMax is entitled to $500 million USD for prior damages and nothing for ongoing damages. You would think that, if a jury ruled the infringement has no lasting effect, an injunction wouldn’t recover any of that non-existent value. On the other hand, I’m not a judge (or anyone else of legal relevance), so what I reason doesn’t really matter outside the confines of this website.

We’ll need to wait and see if this goes anywhere.

Source: UploadVR

AMD Supports CrossFire On B350 and X370 Chipsets, However SLI Limited to X370

Subject: Motherboards | February 26, 2017 - 01:29 AM |
Tagged: x370, sli, ryzen, PCI-E 3.0, gaming, crossfire, b350, amd

Computerbase.de recently published an update (translated) to an article outlining the differences between AMD’s AM4 motherboard chipsets. As it stands, X370 and B350 are set to be the most popular chipsets for desktop PCs (with X300 catering to the small-form-factor crowd), especially among enthusiasts. Initially, one key differentiator between the two chipsets was that only X370 supported multi-GPU configurations. Now that motherboards have been revealed and are up for pre-order, it turns out the multi-GPU lines have been blurred a bit: both B350 and X370 will support AMD’s CrossFire multi-GPU technology, while X370 alone will also support NVIDIA’s SLI.

AM4 motherboards with the B350 and X370 chipsets that feature two PCI-E x16 expansion slots will run each slot at x8 in a dual-GPU setup (in a single-GPU setup, the top slot can run at full x16 speed), which is to say that the slots behave the same across both chipsets. Where the chipsets differ is in support for specific multi-GPU technologies, with NVIDIA’s SLI locked to X370. TechPowerUp speculates that the decision to lock SLI to the top-end chipset is due, at least in part, to licensing costs. This is not a bad thing, as B350 was originally not going to support any dual-x16-slot multi-GPU configurations at all; now motherboard manufacturers are being allowed to enable it by including a second slot, and AMD will reportedly permit CrossFire usage (which costs AMD nothing in licensing). Meanwhile, the most expensive X370 chipset will support SLI for those serious gamers that demand and can afford it. Had B350 boards supported SLI and carried the SLI branding, they likely would have been ever so slightly more expensive than they are now. Of course, DirectX 12's multi-adapter will work on either chipset so long as the game supports it.

| | X370 | B350 | A320 | X300 / B300 / A300 | Ryzen CPU | Bristol Ridge APU |
|---|---|---|---|---|---|---|
| PCI-E 3.0 | 0 | 0 | 0 | 4 | 20 (18 w/ 2 SATA) | 10 |
| PCI-E 2.0 | 8 | 6 | 4 | 0 | 0 | 0 |
| USB 3.1 Gen 2 | 2 | 2 | 1 | 1 | 0 | 0 |
| USB 3.1 Gen 1 | 6 | 2 | 2 | 2 | 4 | 4 |
| USB 2.0 | 6 | 6 | 6 | 6 | 0 | 0 |
| SATA 6 Gbps | 4 | 2 | 2 | 2 | 2 | 2 |
| SATA RAID | 0/1/10 | 0/1/10 | 0/1/10 | 0/1 | - | - |
| Overclocking Capable? | Yes | Yes | No | Yes (X300 only) | - | - |
| SLI | Yes | No | No | No | - | - |
| CrossFire | Yes | Yes | No | No | - | - |

Multi-GPU is not the only differentiator, though. Moving up from B350 to X370 gets you 6 USB 3.1 Gen 1 (USB 3.0) ports versus 2 on B350/A320/X300, two more PCI-E 2.0 lanes (8 versus 6), and two more SATA ports from the chipset (4 versus 2, for 6 total usable).

Note that X370, B350, and X300 all support CPU overclocking. Hopefully this helps when you are trying to decide which AM4 motherboard to pair with your Ryzen CPU once the independent benchmarks are out. In short, if you must have SLI you are stuck ponying up for X370, but if you plan to only ever run a single GPU, or tend to stick with AMD GPUs and CrossFire, B350 gets you most of the way to an X370 for a lot less money! You do not even have to give up any USB 3.1 Gen 2 ports, though you do limit your SATA drive options (it’s all about M.2 these days anyway heh).

For those curious, looking around on Newegg I notice that most of the B350 motherboards have that second PCI-E 3.0 x16 slot and CrossFire support listed in their specifications, and they seem to average around $99. Meanwhile, X370 starts at $140 and rockets up from there (up to $299!) depending on how much bling you are looking for!

Are you going for a motherboard with the B350 or X370 chipset? Will you be rocking multiple graphics cards?


Valve Software Releases Steam Audio SDK on GitHub

Subject: General Tech | February 26, 2017 - 12:13 AM |
Tagged: valve, pc gaming

When VR started to take off, developers began to realize that audio is worth some attention. Historically, it’s been difficult to market, but that’s par for the course when it comes to VR technology, so I guess that’s no excuse to pass it up anymore. Now Valve, the owners of the leading VR platform on the PC, have just released an API for audio processing: the Steam Audio SDK.


Image Credit: Valve Software

First, I should mention that the SDK is not quite open. The GitHub page (and the source code ZIP in its releases tab) just contains the license (which is an EULA) and the readme. That said, Valve is under no obligation to open these sorts of technology (even though it would be nice), and they are maintaining builds for Windows, Mac, Linux, and Android. It is currently available as a C API and a plug-in for Unity. Unreal Engine 4, FMOD, and WWISE plug-ins are “coming soon”.

As for the technology itself, it has quite a few interesting features. As you might expect, it supports HRTF (head-related transfer function) processing out of the box, which modifies a sound so that it appears to come from a defined direction. The algorithm is based on experimental data rather than modeling the actual physical process.

More interesting are their sound propagation and occlusion calculations. They claim these can be raycast, and static scenes can bake some of the work ahead of time, which will reduce runtime overhead. Unlike VRWorks Audio or TrueAudio Next, it looks like they’re doing it on the CPU, though. I’m guessing this means it will mostly raycast to fade between versions of the audio, rather than summing up contributions from thousands of individual rays at runtime (or an equivalent algorithm, like voxel leakage).
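
To illustrate what that fading approach might look like (my speculation, to be clear, not Valve's actual implementation): cast a handful of rays toward the source and blend the dry and occluded mixes by the fraction that were blocked.

```python
# Hypothetical raycast-occlusion fade: blend dry/occluded gains by hit fraction
def occlusion_gain(rays_blocked: int, rays_total: int,
                   occluded_gain: float = 0.25) -> float:
    blocked_frac = rays_blocked / rays_total
    return (1.0 - blocked_frac) + blocked_frac * occluded_gain

print(occlusion_gain(3, 16))  # mostly clear line of sight: gain ~0.86
```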

Still, this is available now as a C API and a Unity Plug-in, because Valve really likes Unity lately.

Source: Valve

Gigabyte is Ryzen up to the challenge of their rivals

Subject: Motherboards | February 24, 2017 - 05:30 PM |
Tagged: aorus, gigabyte, ryzen, b350, x370

Gigabyte have led with five motherboards: two X370s under the Aorus brand and three B350s with Gigabyte branding. They all share some traits, such as RGB Fusion with 16.8 million colours to choose from and an application that allows you to customize the light show to your own specifications. It even supports control from your phone, if you are so addicted to the glow that you need to play with your system from across the room.


Smart Fan 5 indicates the presence of five headers for fans or pumps; they work with both PWM and standard voltage fans and can supply up to 2A at 12V each. The boards also have six temperature sensors to give you feedback on the effectiveness of your cooling, which you can adjust with the included application. Most models will offer Thunderbolt 3, Intel GbE NICs, and an ASMedia 2142 USB 3.1 controller which they claim can provide up to 16Gb/s. All will have high-end audio solutions, often featuring a headphone pre-amp and high-quality capacitors. There are a lot more features specific to each board, so make sure to click through to check out your favourites.


The Aorus boards, the GA-AX370-Gaming K7 and GA-AX370-Gaming 5, are very similar, but if you plan on playing with your BCLK it is the K7 that includes Gigabyte's Turbo B-Clock. The Gigabyte lineup includes the GA-AB350M, GA-AB350-Gaming, and GA-AB350-Gaming 3. The GA-AB350M is the only mATX Ryzen board of these five, for those looking to build a smaller system. For audiophiles, the full-size Gaming 3 includes an ALC1220 codec as opposed to the ALC887 used on the other two models.

You can expect to see reviews of these boards, with far more detail on performance and features, after they are released on March 2nd.

Source: Gigabyte