Manufacturer: NVIDIA

GP106 Specifications

Twelve days ago, NVIDIA announced its competitor to the AMD Radeon RX 480, the GeForce GTX 1060, based on a new Pascal GPU: GP106. Though that story was just a brief preview of the product, and a pictorial of the GTX 1060 Founders Edition card we were initially sent, it set the community ablaze with discussion around which mainstream enthusiast platform was going to be the best for gamers this summer.

Today we are allowed to show you our full review: benchmarks of the new GeForce GTX 1060 against the likes of the Radeon RX 480, the GTX 970 and GTX 980, and more. Starting at $250, the GTX 1060 has the potential to be the best bargain in the market today, though much of that will be decided based on product availability and our results on the following pages.

Does NVIDIA’s third consumer product based on Pascal make enough of an impact to dissuade gamers from buying into AMD Polaris?

01.jpg

All signs point to a bloody battle this July and August and the retail cards based on the GTX 1060 are making their way to our offices sooner than even those based around the RX 480. It is those cards, and not the reference/Founders Edition option, that will be the real competition that AMD has to go up against.

First, however, it’s important to find our baseline: where does the GeForce GTX 1060 find itself in the wide range of GPUs?

Continue reading our review of the GeForce GTX 1060 6GB graphics card!!

NVIDIA Announces GP102-based TITAN X with 3,584 CUDA cores

Subject: Graphics Cards | July 21, 2016 - 10:21 PM |
Tagged: titan x, titan, pascal, nvidia, gp102

Donning the leather jacket he goes very few places without, NVIDIA CEO Jen-Hsun Huang showed up at an AI meet-up at Stanford this evening to show, for the very first time, a graphics card based on a never before seen Pascal GP102 GPU. 

titanxpascal1.jpg

Source: Twitter (NVIDIA)

Rehashing an old name, NVIDIA will call this new graphics card the Titan X. You know, like the "new iPad," this is the "new Titan X." Here is the data we know about thus far:

|  | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury | R9 Nano | R9 390X |
| GPU | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro | Fiji XT | Hawaii XT |
| GPU Cores | 3584 | 2560 | 2816 | 3072 | 2048 | 4096 | 3584 | 4096 | 2816 |
| Rated Clock | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz |
| Texture Units | 224 (?) | 160 | 176 | 192 | 128 | 256 | 224 | 256 | 176 |
| ROP Units | 96 (?) | 64 | 96 | 96 | 64 | 64 | 64 | 64 | 64 |
| Memory | 12GB | 8GB | 6GB | 12GB | 4GB | 4GB | 4GB | 4GB | 8GB |
| Memory Clock | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 500 MHz | 6000 MHz |
| Memory Interface | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit |
| Memory Bandwidth | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 320 GB/s |
| TDP | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts | 175 watts | 275 watts |
| Peak Compute | 11.0 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 11.0B | 7.2B | 8.0B | 8.0B | 5.2B | 8.9B | 8.9B | 8.9B | 6.2B |
| Process Tech | 16nm | 16nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $1,200 | $599 | $649 | $999 | $499 | $649 | $549 | $499 | $329 |

Note: everything marked with a ? is an educated guess on our part.

Obviously there is a lot for us to still learn about this new GPU and graphics card, including why in the WORLD it is still being called Titan X, rather than...just about anything else. That aside, GP102 will feature 40% more CUDA cores than the GP104 at slightly lower clock speeds. The rated 11 TFLOPS of single precision compute of the new Titan X is 34% better than that of the GeForce GTX 1080 and I would expect gaming performance to scale in line with that difference.
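As a sanity check on those numbers, peak FP32 compute follows the standard formula of 2 FLOPs (one fused multiply-add) per CUDA core per clock. A quick sketch; note the ~1531 MHz boost figure below is our inference from the 11 TFLOPS claim, not an announced spec:

```javascript
// Peak FP32 throughput in TFLOPS: 2 FLOPs per CUDA core per clock (FMA).
function peakTflops(cores, clockMHz) {
  return (2 * cores * clockMHz) / 1e6;
}

// GTX 1080: 2560 cores at its 1607 MHz rated clock
console.log(peakTflops(2560, 1607).toFixed(2)); // "8.23" -- matches the 8.2 TFLOPS spec

// The new Titan X at its 1417 MHz rated clock falls short of 11 TFLOPS...
console.log(peakTflops(3584, 1417).toFixed(2)); // "10.16"

// ...so the 11.0 TFLOPS claim implies a boost clock around 1531 MHz.
console.log(peakTflops(3584, 1531).toFixed(2)); // "10.97"
```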

The new Titan X will feature 12GB of GDDR5X memory, not HBM as the GP100 chip has, so this is clearly a new chip with a new memory interface. NVIDIA claims it will have 480 GB/s of bandwidth, and I am guessing it is built on a 384-bit memory controller interface running at the same 10 Gbps as the GTX 1080. It's truly amazing hardware.
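That 384-bit guess is easy to check arithmetically; memory bandwidth is just bus width times per-pin data rate:

```javascript
// Bandwidth in GB/s: bus width (bits) x data rate (Gbps per pin) / 8 bits per byte
function bandwidthGBps(busWidthBits, gbpsPerPin) {
  return (busWidthBits * gbpsPerPin) / 8;
}

console.log(bandwidthGBps(384, 10)); // 480 -- matches NVIDIA's claimed 480 GB/s
console.log(bandwidthGBps(256, 10)); // 320 -- the GTX 1080's figure, for comparison
```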

titanxpascal2.jpg

What will you be asked to pay? $1200, going on sale on August 2nd, and only on NVIDIA.com, at least for now. Considering the prices of GeForce GTX 1080 cards with such limited availability, the $1200 price tag MIGHT NOT seem so insane. Still, that's higher than the $999 starting price of the Maxwell-based Titan X in March of 2015, so the claims that NVIDIA is artificially raising prices of cards in each segment will continue, it seems.

I am curious about the TDP on the new Titan X - will it hit the 250 watt mark of the previous version? Yes, apparently it will hit that 250 watt TDP - specs above updated. Does this also mean we'll see a GeForce GTX 1080 Ti that falls between the GTX 1080 and this new Titan X? Maybe, but we are likely looking at an $899 or higher SEP - so get those wallets ready. 

That's it for now; we'll have a briefing where we can get more details soon, and hopefully a review ready for you on August 2nd when the cards go on sale!

Source: NVIDIA
Manufacturer: Overclock.net

Yes, We're Writing About a Forum Post

Update - July 19th @ 7:15pm EDT: Well that was fast. Futuremark published their statement today. I haven't read it through yet, but there's no reason to wait to link it until I do.

Update 2 - July 20th @ 6:50pm EDT: We interviewed Jani Joki, Futuremark's Director of Engineering, on our YouTube page. The interview is embedded just below this update.

Original post below

The comments of a previous post notified us of an Overclock.net thread, whose author claims that 3DMark's implementation of asynchronous compute is designed to show NVIDIA in the best possible light. At the end of the linked post, they note that asynchronous compute is a general blanket term, and that we should better understand what is actually going on.

amd-mantle-queues.jpg

So, before we address the controversy, let's actually explain what asynchronous compute is. The main problem is that it actually is a broad term. Asynchronous compute could describe any optimization that allows tasks to execute when it is most convenient, rather than just blindly doing them in a row.

I will use JavaScript as a metaphor. In this language, you can assign tasks to be executed asynchronously by passing functions as parameters. This allows events to execute code when it is convenient. JavaScript, however, is still only single threaded (without Web Workers and newer technologies). It cannot run callbacks from multiple events simultaneously, even if you have an available core on your CPU. What it does, however, is allow the browser to manage its time better. Many events can be delayed until after the browser renders the page or performs other high-priority tasks, or until the asynchronous code has everything it needs, like assets that are loaded from the internet.
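A minimal sketch of that single-threaded behavior (the names here are purely illustrative): even a callback scheduled with a 0 ms delay cannot run until the currently executing code has finished.

```javascript
// Callbacks are queued by the event loop; they never run in parallel with other JS.
const order = [];

order.push("sync work 1");

// Scheduled with zero delay, but it still waits for the call stack to empty.
setTimeout(() => order.push("async callback"), 0);

order.push("sync work 2");

// At this point the callback has not run yet:
console.log(order); // ["sync work 1", "sync work 2"]
```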

mozilla-architecture.jpg

This is asynchronous computing.

However, if JavaScript were designed differently, it would have been possible to run callbacks on any available thread, not just the main thread when available. Again, JavaScript is not designed this way, but this is where I pull the analogy back to AMD's Asynchronous Compute Engines. In an ideal situation, a graphics driver will be able to see all the functionality that a task will require, and shove it down an at-work GPU, provided the specific resources that this task requires are not fully utilized by the existing work.

Read on to see how this is being implemented, and what the controversy is.

Manufacturer: AMD

Introduction: Rethinking the Stock Cooler

AMD's Wraith cooler was introduced at CES this January, and has been available with select processors from AMD for a few months. We've now had a chance to put one of these impressive-looking CPU coolers through its paces on the test bench to see how much it improves on the previous model, and whether aftermarket cooling is still necessary with AMD's flagship parts.

DSC_0411.jpg

While a switch in the bundled stock cooler might not seem very compelling, the fact that AMD has put effort into improving this aspect of their retail CPU offering is notable. AMD processors already present a great value relative to Intel's offerings for gaming and desktop productivity, but the stock coolers have to this point warranted a replacement.

Intel went the other direction with the current generation of enthusiast processors, as CPUs such as my Core i5-6600k no longer ship with a cooler of any kind. If AMD has upgraded the stock CPU cooler to the point that it now cools efficiently without significant noise, this will save buyers a little more cash when planning an upgrade, which is always a good thing.

DSC_0424.jpg

The previous AMD stock cooler (left) and the AMD Wraith cooler (right)

A quick search for "Wraith" on Amazon yields retail-box products like the A10-7890K APU and the FX-8370 CPU; options which have generally required an aftermarket cooler for the highest performance. In this review we’ll take a close look at the results with the previous cooler and the Wraith, and throw in results from the most popular aftermarket cooler of them all: the Cooler Master Hyper 212 EVO.

Continue reading our review of the AMD Wraith CPU Cooler!!

Killer Networks, Alienware and Logitech Summer Giveaway!

Subject: General Tech | July 20, 2016 - 11:36 AM |
Tagged: rivet, logitech g, logitech, killer networks, giveaway, contest, alienware

The temperature is heating up across the US and we're starting to lose our minds around here. As a result, we have convinced our friends at Killer Networks, Alienware and Logitech G to give some incredible hardware packages to our readers and fans!

How does an Alienware 15 Gaming Laptop with an MSRP of $1199 sound to you? Pretty nice, right? And if you aren't the lucky winner of that, how about one of five packages worth $390 each from Logitech that include a G633 headset, G810 keyboard and G502 mouse?

Winning is easy - you can enter through one or more methods, each of which is worth its own entry. We are open to anyone, anywhere in the world, so enter away! Entries close at midnight ET on July 31st, when we'll draw the winners at random.

Killer Networks, Alienware and Logitech Summer Giveaway!

A HUGE thank you goes out to our friends at Rivet/Killer, Alienware and Logitech for supplying the goods for this contest! Good luck!

Rumor: 16nm for NVIDIA's Volta Architecture

Subject: Graphics Cards | July 16, 2016 - 06:37 PM |
Tagged: Volta, pascal, nvidia, maxwell, 16nm

For the past few generations, NVIDIA has roughly tried to release a new architecture on a new process node, then release a refresh the following year. This plan ran into a hitch when Maxwell was delayed a year, apart from the GTX 750 Ti, and then pushed back to the same 28nm process that Kepler utilized. Pascal caught up with 16nm, although we know that some hard, physical limitations are right around the corner. The lattice spacing for silicon at room temperature is around ~0.5nm, so we're talking about features only in the low 30s of atoms in width.

nvidia-2016-gtc-pascal-fivemiracles.png

This rumor claims that NVIDIA is not trying to go with 10nm for Volta. Instead, it will take place on the same 16nm node that Pascal currently occupies. This is quite interesting, because GPUs scale quite well with added complexity; they are wide designs with many functional units running at relatively low clock rates. So the only real ways to increase performance are to make the existing architecture more efficient, or make a larger chip.

That said, GP100 leaves a lot of room on the table for an FP32-optimized, ~600mm2 part to crush its performance at the high end, similar to how GM200 replaced GK110. The rumored GP102, expected in the ~450mm2 range for Titan or GTX 1080 Ti-style parts, has some room to grow. Like GM200, however, it would also be unappealing to GPU compute users who need FP64. If this is what is going on, and we're totally just speculating at the moment, it would signal that enterprise customers should expect a new GPGPU card every second gaming generation.

That is, of course, unless NVIDIA recognized ways to make the Maxwell-based architecture significantly more die-space efficient in Volta. Clocks could get higher, or the circuits themselves could get simpler. You would think that, especially in the latter case, they would have integrated those ideas into Maxwell and Pascal, though; but, like HBM2 memory, there might have been a reason why they couldn't.

We'll need to wait and see. The entire rumor could be crap, who knows?

Source: Fudzilla

Asus ROG STRIX RX 480 Graphics Card Coming Next Month

Subject: Graphics Cards | July 16, 2016 - 11:03 PM |
Tagged: rx 480, ROG, Radeon RX 480, polaris 10 xt, polaris 10, DirectCU III, asus

Following its previous announcement, Asus has released more information on the Republic of Gamers STRIX RX 480 graphics card. Pricing is still a mystery but the factory overclocked card will be available in the middle of next month!

In my previous coverage, I detailed that the STRIX RX 480 would be using a custom PCB along with Asus' DirectCU III cooler and Aura RGB back lighting. Yesterday, Asus revealed that the card also has a custom VRM solution that, in an interesting twist, draws all of the graphics card's power from the two PCI-E power connectors and nothing from the PCI-E slot. This would explain the inclusion of both a 6-pin and 8-pin power connector on the card! I do think that it is a bit of an over-reaction to not draw anything from the slot, but it is an interesting take on powering a graphics card and I'm interested to see how it all works out once the reviews hit and overclockers get a hold of it!

Asus ROG Strix RX 480.jpg

The custom graphics card is assembled using Asus' custom "Auto Extreme" automated assembly process and uses "Super Alloy Power II" components (which is to say that Asus claims to be using high quality hardware and build quality). The DirectCU III cooler is similar to the one used on the STRIX GTX 1080 and features direct contact heatpipes, an aluminum fin stack, and three Wing Blade fans that can spin down to zero RPMs when the card is being used on the desktop or during "casual gaming." The fan shroud and backplate are both made of metal which is a nice touch. Asus claims that the cooler is 30% cooler and three times quieter than the RX 480 reference cooler.

Last but certainly not least, Asus revealed boost clock speeds! The STRIX RX 480 will clock up to 1,330 MHz in OC Mode and up to 1,310 MHz in Gaming Mode. Further, Asus has not touched the GDDR5 memory frequency, which stays at the reference 8 GHz. Asus did not reveal base (average) GPU clocks. I was somewhat surprised by the factory overclock as I did not expect much out of the box, but 1,330 MHz is fairly respectable. This card should have a lot more headroom beyond that though, and fortunately Asus provides software that will automatically overclock the card even further with one click (GPU Tweak II also lets advanced users manually overclock the card). Users should be able to hit at least 1,450 MHz assuming they do decently in the silicon lottery.

For reference, stock RX 480s are clocked at 1,120 MHz base and up to 1,266 MHz boost. Asus claims their factory overclock results in a 15% higher score in 3DMark Fire Strike and 19% more performance in DOOM and Hitman.
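It's worth noting how modest that factory overclock is relative to the claimed benchmark gains. A quick calculation (our arithmetic, not Asus' figures): the 15% Fire Strike claim is roughly triple the raw clock increase, which suggests much of the gain comes from the cooler sustaining boost clocks rather than the overclock itself.

```javascript
// Factory overclock vs. the reference RX 480's 1266 MHz boost clock
const refBoost = 1266;
const pctOver = (clock) => ((clock / refBoost - 1) * 100).toFixed(1);

console.log(pctOver(1330)); // "5.1" percent in OC Mode
console.log(pctOver(1310)); // "3.5" percent in Gaming Mode
```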

Other features of the STRIX RX 480 include FanConnect, two 4-pin fan headers that allow users to hook up two case fans and have them controlled by the GPU. Aura RGB LEDs on the shroud and backplate allow users to match their build aesthetics. Asus also includes XSplit GameCaster for game streaming with the card.

No word on pricing yet, but you will be able to get your hands on the card in the middle of next month (specifically "worldwide from mid-August")! 

This card is definitely one of the most interesting RX 480 designs so far and I am anxiously awaiting the full reviews!

How far do you think the triple fan cooler can push AMD's Polaris 10 XT GPU?

Source: Asus

The Red Team is the new black

Subject: General Tech | July 22, 2016 - 12:39 PM |
Tagged: amd, profits

It is reasonable to expect more in-depth analysis from Josh about AMD's earnings this quarter, but the news is too good not to mention briefly right away.  AMD brought in $1.027 billion in revenue this quarter, a cool $68.7 million higher than expected, mostly thanks to console sales, as these numbers do not include the new Polaris cards which are just being released.  This is very good news for everyone; having $69 million in profit will give AMD a bit of breathing room until Polaris can start selling and Zen arrives next year.  It also gives investors a boost of confidence in this beleaguered company, something that has not happened for quite a while.  Drop by The Register for more numbers and a link to the slides from AMD's financial meeting yesterday.

Capture.PNG

"AMD's share price is up more than seven per cent in after-hours trading to $5.60 at time of writing. That's agonizingly close to the magic six-buck mark for the troubled semiconductor giant that this time last year was struggling to look viable."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Honey, I Shrunk The NES

Subject: General Tech | July 17, 2016 - 01:07 AM |
Tagged: Nintendo, nes, gaming, !console

Fans of the 90s (and late 80s) will be happy to know that Nintendo is bringing back the Nintendo Entertainment System in the form of a modern, miniaturized package. The NES Classic Edition is small enough to fit in the palm of your hand and offers up 30 built-in classic NES games! It will be available for the holiday season at $59.99, sans old school RCA jacks and finicky cartridges!

nes-classic-edition.png

Nintendo has not provided details on the internals of the console, unfortunately, but it seems to be using a low power SoC that runs emulated versions of the games. That is to say, it is likely Nintendo is using modern components rather than the original hardware. One clue is that Nintendo states that gamers will be able to use multiple suspend points on each game and will not have to worry about using continue passwords each time they load up a game. A poster over at Ars Technica suggests that Nintendo may be using the guts of an existing or new 3DS handheld console to power the NES Classic Edition, but we'll have to wait for someone to get their hands on it to know for sure what is going on under the hood.

On the outside, the NES Classic Edition looks nearly identical to the NES many gamers (myself included) grew up with except for the controller ports being different and of course the physical size! There is even a cartridge slot cover though it is only there for aesthetics and does not actually open (it would have been awesome if it opened to reveal an SD card slot!). Around the back you will find the AC power input and an HDMI video output which is great to see in this age where hooking up an old school console can be a pain (or a chain of adapters heh). There is no word on what resolution the console will output at or if there will be any upscaling...

Speaking of controllers, Nintendo has brought back the old school rectangular gray controller from the original NES which it is calling the NES Classic Controller. This controller plugs into the NES Classic Edition console using the same proprietary port found on the bottom of Wii Remotes (because going with a USB port would have been too easy heh), and users can plug up to two NES Controllers into the console to play with a friend or plug the controller into a Wii Remote in order to play classic games found on the Wii and Wii U Virtual Consoles.

The NES Classic Edition comes with a single controller. Additional controllers will have an MSRP of $9.99. Alternatively, gamers can plug their Wii Classic Controller or Wii Classic Controller Pro game pads into the mini NES.

The bite-sized NES will come with 30 built in games. This number is sadly not expandable as there is no external memory or internet connection on the console (modders would have loved this thing...).

The list of games is as follows:

  • Balloon Fight
  • Bubble Bobble
  • Castlevania
  • Castlevania II: Simon’s Quest
  • Donkey Kong
  • Donkey Kong Jr.
  • Double Dragon II: The Revenge
  • Dr. Mario
  • Excitebike
  • Final Fantasy
  • Galaga
  • Ghosts N' Goblins
  • Gradius
  • Ice Climber
  • Kid Icarus
  • Kirby’s Adventure
  • Mario Bros.
  • Mega Man 2
  • Metroid
  • Ninja Gaiden
  • Pac-Man
  • Punch-Out!! Featuring Mr. Dream
  • StarTropics
  • Super C
  • Super Mario Bros.
  • Super Mario Bros. 2
  • Super Mario Bros. 3
  • Tecmo Bowl
  • The Legend of Zelda
  • Zelda II: The Adventure of Link

I am excited to see the Castlevania and Zelda games on here along with, of course, the Super Mario Bros. games. I do remember playing Dr. Mario and Ninja Gaiden as well, but there are several games that I have fond memories of playing that did not make the cut! For example, I remember playing a lot of Super Off Road, Duck Hunt (how do they not have this? I guess the old gun wouldn't work with new TVs, so they would have to figure something else out), RC Pro-Am which I loved, and a few others whose names I can't remember anymore.

I have no doubt that this is going to be an extremely popular seller and a great gift idea for the gamer in your life (or yourself! hehe). I wish that it had more games or at least ROM support so that it had a bit more life, but for what it is, it is not a bad deal. After all, the original NES launched at $199.99 in 1985, which would make it almost $450 in today's dollars! For those interested, it should be up for pre-order at some point, but for now it is still notify-only at Amazon US.

Are you excited for the tiny NES Classic Edition or is your trusty NES and cartridges collection still kicking? What were your favorite NES games growing up (if any)?

Source: Nintendo

NVIDIA Release 368.95 Hotfix Driver for DPC Latency

Subject: Graphics Cards | July 22, 2016 - 05:51 PM |
Tagged: pascal, nvidia, graphics drivers

Turns out the Pascal-based GPUs suffered from DPC latency issues, and there's been an ongoing discussion about it for a little over a month. This is not an area that I know a lot about, but DPC (Deferred Procedure Call) is a system that schedules workloads by priority, which provides regular windows of time for sound and video devices to update. It can be stalled by long-running driver code, though, which could manifest as stutter, audio hitches, and other performance issues. With a 10-series GeForce device installed, users have reported that this latency increases about 10-20x, from ~20us to ~300-400us. This can increase to 1000us or more under load. (8333us is ~1 whole frame at 120FPS.)
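To put those microsecond figures in context, here is the arithmetic behind that frame-time aside:

```javascript
// Frame budget in microseconds at a given frame rate
const frameBudgetUs = (fps) => 1e6 / fps;

console.log(Math.round(frameBudgetUs(120))); // 8333 us per frame at 120 FPS

// A 400 us DPC stall eats roughly 5% of that budget...
console.log((400 / frameBudgetUs(120) * 100).toFixed(1)); // "4.8"

// ...and a 1000 us stall takes 12% of it.
console.log((1000 / frameBudgetUs(120) * 100).toFixed(0)); // "12"
```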

nvidia-2015-bandaid.png

NVIDIA has acknowledged the issue and, just yesterday, released an optional hotfix. After installing the driver, the system felt a lot more responsive, though that could just be psychosomatic. I ran LatencyMon (DPCLat isn't compatible with Windows 8.x or Windows 10) before and after, and the latency measurement did drop significantly. It was consistently the largest source of latency, spiking in the thousands of microseconds, before the update. After the update, it was hidden by other drivers for the first night, although today it seems to have a few spikes again. That said, Microsoft's networking driver is also spiking in the ~200-300us range, so a good portion of it might be the sad state of my current OS install. I've been meaning to do a good system wipe for a while...

nvidia-2016-hotfix-pascaldpc.png

Measurement taken after the hotfix, while running Spotify.
That said, my computer's a mess right now.

That said, some of the post-hotfix driver spikes are reaching ~570us (mostly when I play music on Spotify through my Blue Yeti Pro). Also, Photoshop CC 2015 started complaining about graphics acceleration issues after installing the hotfix, so only install it if you're experiencing problems. About the latency, if it's not just my machine, NVIDIA might still have some work to do.

It does feel a lot better, though.

Source: NVIDIA

Video Perspective: EVGA DG-87 Case Preview

Subject: Cases and Cooling | July 22, 2016 - 05:50 PM |
Tagged: video, huge, evga, dg-87, dg-8, case

EVGA started showing off designs for a unique, and enormous, case in 2015. It has since been rebranded and has undergone some minor work at the plastic surgeon to emerge as the EVGA DG-8 series of chassis. EVGA sent me the flagship model, the DG-87, which features an integrated fan controller to operate intake and exhaust airflow individually. EVGA took some interesting chances with this design: it's bigger than just about anything we have ever used; it rotates the case orientation by 90 degrees, so that what was normally your side panel window now faces you; and it routes all of your cables and connections through a side section and out the back side of the case.

If you haven't seen it before, this video is worth a watch. Expect a full review sometime in August!

NVIDIA's New #OrderOf10 Origins Contest

Subject: Graphics Cards | July 19, 2016 - 01:07 AM |
Tagged: nvidia

Honestly, when I first received this news, I thought it was a mistaken re-announcement of the contest from a few months ago. The original Order of 10 challenge was made up of a series of puzzles, and the first handful of people to solve it received a GTX 10-Series graphics card. Turns out, NVIDIA is doing it again.

nvidia-2016-orderof10-july.png

For four weeks, starting on July 21st, NVIDIA will add four new challenges and, more importantly, 100 new “chances to win”. They did not announce what those prizes will be or whether all of them will be distributed to the first 25 complete entries of each challenge, though. Some high-profile YouTube personalities, such as some of the members of Rooster Teeth, were streaming their attempts the last time around, so there might be some of that again this time, too.

Source: NVIDIA

NVIDIA's GTX 1060, the newest in their Hari Seldon lineup of cards

Subject: Graphics Cards | July 19, 2016 - 01:54 PM |
Tagged: pascal, nvidia, gtx 1060, gp106, geforce, founders edition

The GTX 1060 Founders Edition has arrived, and it also happens to be our first look at the 16nm FinFET GP106 silicon; the GTX 1080 and 1070 used GP104.  This card features 10 SMs, 1280 CUDA cores, 48 ROPs and 80 texture units; in many ways it is half of a GTX 1080. The GPU is clocked at a base of 1506MHz with a boost of 1708MHz, and the 6GB of VRAM runs at 8GHz.  [H]ard|OCP took this card through its paces, contrasting it with the RX 480 and the GTX 980 at resolutions of 1440p as well as the more common 1080p.  As they do not use the frame rating tools which are the basis of our graphics testing of all cards, including the GTX 1060 of course, they included the new DOOM in their test suite.  Read on to see how they felt the card compared to the competition ... just don't expect to see a follow up article on SLI performance.

1468921254mrv4f5CHZE_1_14_l.jpg

"NVIDIA's GeForce GTX 1060 video card is launched today in the $249 and $299 price point for the Founders Edition. We will find out how it performs in comparison to AMD Radeon RX 480 in DOOM with the Vulkan API as well as DX12 and DX11 games. We'll also see how a GeForce GTX 980 compares in real world gaming."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Oh the things you see in VR presentations; the RX 460 for instance

Subject: General Tech | July 21, 2016 - 01:02 PM |
Tagged: rx 460, polaris 11, oculus rift, amd

TechARP spotted something unexpected at the Radeon RX 480 launch in Malaysia: a Radeon RX 460.  One suspects that the picture below does not represent its final form, but it does give you an idea of the dimensions and the outputs, which seem to include DVI, DP and HDMI.  TechARP were given some of the specs of this AMD Polaris 11 GPU based card: 14 Compute Units, 2 GB of GDDR5 memory on a 128-bit memory bus. 

The biggest takeaway is what AMD was doing with it: the card was powering an Oculus Rift VR demo, so it is safe to say it meets at least the minimum specs for the headset.  Drop by for more pictures and a video.

RX-460-First-Look-02.jpg

"We just stumbled upon an actual Radeon RX 460 graphics card. AMD was using it to power a virtual reality demo on an Oculus VR headset. That was our first encounter with the Radeon RX 460, so we had to take off the perspex cover to take a closer look!"

Here is some more Tech News from around the web:

Tech Talk

Report: ARM Holdings Purchased by SoftBank for $32 Billion

Subject: Processors, Mobile | July 18, 2016 - 12:03 AM |
Tagged: softbank, SoC, smartphones, mobile cpu, Cortex-A73, ARM Holdings, arm, acquisition

ARM Holdings is to be acquired by SoftBank for $32 billion USD. The report has been confirmed by the Wall Street Journal, which states that an official announcement of the deal is likely on Monday, as "both companies’ boards have agreed to the deal".

arm-holdings.jpg

(Image credit: director.co.uk)

"Japan’s SoftBank Group Corp. has reached a more than $32 billion deal to buy U.K.-based chip-designer ARM Holdings PLC, marking a significant push for the Japanese telecommunications giant into the mobile internet, according to a person familiar with the situation." - WSJ

ARM just announced their newest CPU core, the Cortex-A73, at the end of May, with performance and efficiency improvements over the current Cortex-A72 promised with the new architecture.

ARM_01.png

(Image credit: AnandTech)

We will have to wait and see if this acquisition will have any bearing on future product development, though it seems the acquisition targets the significant intellectual property value of ARM, whose designs can be found in most smartphones.

RetroArch Announces Vulkan API Support (& Async Compute)

Subject: General Tech | July 16, 2016 - 06:58 PM |
Tagged: n64, dolphin, libretro, retroarch, vulkan, async shaders, asynchronous compute, amd

While the Dolphin emulator has a lot of mind share, and recently announced DirectX 12 support, they have only just begun discussing work on the open alternative, Vulkan. It looks like the LibRetro developer community will beat them to it with an update to RetroArch and the LibRetro API. The page for RetroArch 1.3.5 exists as of (according to Google) yesterday, but 404s, so it should be coming soon. It is still experimental, but it's better than nothing.

retroarch-2016-plain-logo.png

Interestingly, they also claim that their Vulkan port of Angrylion makes use of asynchronous compute. It's unclear what it uses that for, but I'm sure it will make for interesting benchmarks.

Ya, so our IoT enabled toasters need patching ... oh, only around 5 million, why is that a problem?

Subject: General Tech | July 20, 2016 - 12:45 PM |
Tagged: iot, security, amazon, Intel

The Register brings up the issue of IoT security once again today, this time looking at the logistics of patching and updating a fleet of IoT devices.  Amazon is focusing on dumb devices with a smart core: the physical device has the required sensors and a connection to the net, sending all data to be processed in a large database, which would be much easier to maintain but presents other security issues.  Intel, on the other hand, unsurprisingly prefers end devices with some smarts, such as their Curie and Edison modules, with a smarter gateway device sitting between those end devices and the same sort of large server-based computing as Amazon. 

Intel's implementation may be more effective in certain environments than Amazon's, El Reg uses the example of an oil rig, but would be more expensive to purchase and maintain.  Take a look at the article for a deeper look, or just imagine the horrors of pushing out a critical patch to thousands of devices in an unknown state when you go live.

talkie-toaster.jpg

"Internet of Things (IoT) hype focuses on the riches that will rain from the sky once humanity connects the planet, but mostly ignores what it will take to build and operate fleets of things.

And the operational side of things could be hell."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Details Emerge On Intel's Upcoming Kaby Lake and Apollo Lake Powered NUCs

Subject: General Tech | July 16, 2016 - 05:07 PM |
Tagged: nuc, kaby lake, iris, Intel, baby canyon, arches canyon, apollo lake

According to Olivier over at FanlessTech, Intel will be launching two new small form factor NUC PCs later this year. The new NUCs are code named Baby Canyon and Arches Canyon and will be powered by Intel’s Kaby Lake-U and Apollo Lake processors respectively. Baby Canyon will occupy the high end while Arches Canyon is aimed at low power and budget markets.

Intel 2017 NUCs Baby Canyon and Arches Canyon.jpg

Left: Intel NUC Roadmap. Middle: Intel Baby Canyon NUC. Right: Intel Arches Canyon NUC.

First up is the “Baby Canyon” NUC which will come in five SKUs. Featuring aluminum enclosures, the Baby Canyon NUCs measure 115 x 111 x 51mm for models with a SATA drive (models without SATA drive support are shorter at 35mm tall). The PCs will be powered by Intel’s Kaby Lake-U processors, up to a 28W quad-core Core i7 chip with Iris graphics. There will also be 15W Core i5 and i3 models. Kaby Lake is the 14nm successor to Skylake and features native support for USB 3.1, HDCP 2.2, and HEVC. Further, Kaby Lake chips will reportedly utilize an improved graphics architecture. While Kaby Lake chips in general will be available with TDPs up to 95W, the models used in Baby Canyon NUCs are the Kaby Lake-U mobile variants and top out at 28W.

Baby Canyon NUCs will pair the Kaby Lake-U CPUs with dual channel DDR4 SODIMMs (up to 32GB), an M.2 SSD, and a SATA hard drive (on some models). Networking is handled by a soldered-down Intel Wireless-AC + Bluetooth 4.2 Wi-Fi NIC and an Intel Gigabit Ethernet NIC.

Connectivity includes two USB 3.0 ports (one charging), a Micro SDXC card slot, a 3.5mm audio jack, and an IR port on the front. Rear IO is made up of two more USB 3.0 ports, an HDMI 2.0 video output, a Gigabit Ethernet port, and a USB 3.1 (Gen1 5Gbps) Type-C port with support for DisplayPort 1.2 (DisplayPort Alt Mode). Finally, users can access two USB 2.0 ports via an internal header.

Arches Canyon will be the new budget NUC option in 2017 and will be powered by Intel’s Apollo Lake SoC. Arches Canyon is the same 115 x 111 x 51mm size as the higher end Baby Canyon NUC, but the reference Intel chassis will be primarily made of plastic to reduce cost. Moving to the lower end platform, users will lose out on the USB 3.1 Type-C port, M.2 slot, and DDR4 support. Instead, the Arches Canyon NUCs will use dual channel DDR3L (up to 8GB) and come in two models: one with 32GB of built-in eMMC storage and one without. Both models will support adding in a SATA SSD or hard drive though.

External IO includes four USB 3.0 ports (two front, two rear, one charging), two 3.5mm audio jacks (the rear port supports TOSLINK), one Micro SDXC slot, one HDMI 2.0 video output, a VGA video out, and a Gigabit Ethernet port.

Internally, Arches Canyon is powered by Celeron branded Apollo Lake SoCs which are the successor to Braswell and feature Goldmont CPU cores paired with Gen 9 HD Graphics. Intel has not announced the specific chip yet, but the chip used in these budget NUCs will allegedly be a quad core model with a 10W TDP. Apollo Lake in general is said to offer up to 30% more CPU and GPU performance along with 15% better battery life over current Braswell designs. The battery savings are not really relevant in a NUC, but the performance improvements should certainly help!

One interesting contradiction in these Intel slides is that the Baby Canyon slide mentions Thunderbolt 3 (40Gbps) and USB 3.1 Gen 2 (10Gbps) support for the USB Type-C connector, but the connectivity section limits the USB 3.1 Type-C port to Gen 1 (5Gbps) with no mention of Thunderbolt support at all. I guess we will just have to wait and see if TB3 ends up making the cut!

The new NUCs look promising: they should replace the older models at their current price points (for the most part) while offering better performance, which will be especially important for the low-end Arches Canyon SKUs! Being NUCs, users will be able to buy them as barebones kits or as systems pre-loaded with Windows 10.

If the chart is accurate, both Baby Canyon and Arches Canyon will be launched towards the end of the year with availability sometime in early to mid 2017. There is no word on exact pricing, naturally.

Are you still interested in Intel’s NUC platform? Stay tuned for more information as it comes in closer to launch!


Source: FanlessTech

Introduction and Features

Introduction

2-Banner.jpg

SFX form factor cases and power supplies continue to grow in popularity and market share. As one of the original manufacturers of SFX power supplies, SilverStone Technology Co. is meeting demand with new products, continuing to raise the bar in the SFX power supply arena with the introduction of their new SX700-LPT unit.

SX700-LPT
(SX=SFX Form Factor, 700=700W, L=Lengthened, PT=Platinum certified)

SilverStone has a long-standing reputation for providing a full line of high quality enclosures, power supplies, cooling components, and accessories for PC enthusiasts. With a continued focus on smaller physical size and support for small form-factor enthusiasts, SilverStone has added the new SX700-LPT to their SFX form factor series. There are now seven power supplies in the SFX Series, ranging in output capacity from 300W to 700W. The SX700-LPT is the second SFX unit to feature a lengthened chassis: its enclosure is 30mm (1.2”) longer than a standard SFX chassis, which allows the use of a quieter 120mm cooling fan rather than the 80mm fan typical of most SFX power supplies.

The new SX700-LPT power supply was designed for small form factor cases but it can also be used in place of a standard ATX power supply (in small cases) with an optional mounting bracket. In addition to its small size, the SX700-LPT features high efficiency (80 Plus Platinum certified), all modular flat ribbon-style cables, and provides up to 700W of continuous DC output (750W peak). The SX700-LPT also operates in semi-fanless mode and incorporates a very quiet 120mm cooling fan.

3-SX700-LPT-side.jpg

SilverStone SX700-LPT PSU Key Features:

•    Small Form Factor (SFX-L) design
•    700W continuous power output rated for 24/7 operation
•    Very quiet with semi-fanless operation
•    120mm cooling fan optimized for low noise
•    80 Plus Platinum certified for high efficiency
•    Powerful single +12V rail with 58.4A capacity
•    All-modular, flat ribbon-style cables
•    High quality construction with all Japanese capacitors
•    Strict ±3% voltage regulation and low AC ripple and noise
•    Support for high-end GPUs with four PCI-E 8/6-pin connectors
•    Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP

Please continue reading our review of the SilverStone SX700-LPT PSU!!!

Report: NVIDIA GeForce GTX 1070M and 1060M Specs Leaked

Subject: Graphics Cards | July 20, 2016 - 12:19 PM |
Tagged: VideoCardz, rumor, report, nvidia, GTX 1070M, GTX 1060M, GeForce GTX 1070, GeForce GTX 1060, 2048 CUDA Cores

Specifications for the upcoming mobile version of NVIDIA's GTX 1070 GPU may have leaked, and according to the report at VideoCardz.com this GTX 1070M will have 2048 CUDA cores, 128 more than the desktop version's 1920.

nvidia-geforce-gtx-1070-mobile-specs.jpg

Image credit: BenchLife via VideoCardz

The report comes via BenchLife, with the screenshot of GPU-Z showing the higher CUDA core count (though VideoCardz mentions the TMU count should be 128). The memory interface remains at 256-bit for the mobile version, with 8GB of GDDR5.

VideoCardz also reported a GPU-Z screenshot (via PurePC) of the mobile GTX 1060, which appears to offer the same specs as the desktop version, at a slightly lower clock speed.

nvidia-geforce-gtx-1060-mobile-specs.jpg

Image credit: PurePC via VideoCardz

Finally, this chart was provided for reference:

videocardz_chart.PNG

Image credit: VideoCardz

Note the absence of information about a mobile variant of the GTX 1080, details of which are still unknown (for now).

Source: VideoCardz