
NVIDIA Announces Q4 2017 Results

Subject: Editorial | February 9, 2017 - 11:59 PM |
Tagged: TSMC, Samsung, Results, quadro, Q4, nvidia, Intel, geforce, Drive PX2, amd, 2017, 2016

It is most definitely quarterly reports time for our favorite tech firms.  NVIDIA’s reporting is unique in that its fiscal year does not line up with the calendar year the way AMD’s and Intel’s reports do.  This dates back to NVIDIA’s initial public offering, when the company set its fiscal quarters ahead of the calendar by quite a few months.  So when NVIDIA announces Q4 of fiscal 2017, it is actually reflecting the final quarter of calendar 2016.  Clear as mud?

Semantics aside, NVIDIA had a record quarter.  Gross revenue was an impressive $2.173 billion US, up slightly more than $700 million from the previous Q4.  NVIDIA has shown amazing growth over that period, attributable to several factors.  Net income (GAAP) was $655 million, a tremendous amount of profit for a company that brought in just over $2 billion in revenue.  Compare this to AMD’s results from two weeks ago: $1.11 billion in revenue and a loss of $51 million for the quarter.  Consider that AMD provides CPUs, chipsets, and GPUs to the market and is the #2 x86 manufacturer in the world.

NVLogo_2D_H.jpg

The yearly results were just as impressive.  FY 2017 featured record revenue and net income.  Revenue was $6.91 billion as compared to $5 billion for FY 2016.  Net income for the year was $1.666 billion versus $614 million for FY 2016.  The growth for the entire year is astounding; the company has not seen an expansion like this since the early 2000s.

The core strength of the company continues to be gaming.  Gaming GPUs and products provided $1.348 billion in revenue by themselves.  Since the manufacturing industry was unable to provide a usable 20 nm planar process for large, complex ASICs, companies such as NVIDIA and AMD were forced to innovate in design to create new products with greater feature sets and performance, all the while still using the same 28 nm process as previous products.  Process shrinks typically accounted for the majority of generational improvements (more transistors packed into a smaller area, with corresponding switching speed increases).  Many users held on to cards that were several years old because there was no strong impetus to upgrade.  With the arrival of the 14 nm and 16 nm processes from Samsung and TSMC respectively, users suddenly had a very significant reason to upgrade.  NVIDIA was able to address the entire market from high to low with their latest GTX 10x0 series of products.  AMD, on the other hand, only had new products for the midrange and budget markets.

NV-Q4-2014.jpg

The next biggest area for NVIDIA is the datacenter.  This segment has seen tremendous growth compared to the other markets (except, of course, gaming) that NVIDIA covers.  It has gone from around $97 million in Q4 2016 up to $296 million this last quarter.  Tripling revenue in any one area is rare; gaming “only” about doubled during this same time period.  Deep learning and AI are two areas that require this type of compute power, and NVIDIA was able to deliver a comprehensive software stack, as well as strategic partnerships that provided turnkey solutions for end users.
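
For the curious, the growth multiples quoted above work out like this (simple arithmetic on the figures in this article, not numbers pulled from NVIDIA’s filing):

```python
# Rough growth math from the figures quoted in this article (not from NVIDIA's filing).

# Full-year results: revenue and GAAP net income, in billions of dollars
fy2016_revenue, fy2017_revenue = 5.0, 6.91
fy2016_income,  fy2017_income  = 0.614, 1.666

print(f"FY 2017 revenue growth:    {fy2017_revenue / fy2016_revenue - 1:.0%}")  # ~38%
print(f"FY 2017 net income growth: {fy2017_income / fy2016_income - 1:.0%}")    # ~171%

# Datacenter segment, Q4 year over year, in millions of dollars
dc_q4_prior, dc_q4_now = 97, 296
print(f"Datacenter Q4 growth: {dc_q4_now / dc_q4_prior:.2f}x")  # ~3.05x, i.e. roughly tripled
```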

After datacenter we still have the visualization market based on the Quadro products.  This area has not seen growth as dramatic as other parts of the company, but it remains a solid foundation and a good money maker for the firm.  The Quadro products continue to be improved upon, and software support continues to grow.

One area that promises to really explode in the next three to four years is the automotive sector.  The Drive PX2 system is being integrated into a variety of cars, and NVIDIA is focused on providing a solid, feature-packed solution for manufacturers.  Auto-pilot and “co-pilot” modes will become more and more important in upcoming models and should reach wide availability by 2020, if not a little sooner.  NVIDIA is working with some of the biggest names in the industry, from both automakers and parts suppliers.  BMW should release a fully automated driving system later this year with their i8 series.  Audi also has higher end cars in the works that will utilize NVIDIA hardware and fully automated operation.  If NVIDIA continues to expand here, it could eventually become as significant a source of income as gaming is today.

There was one bit of bad news from the company.  Their OEM & IP division has seen revenue drop over the past several quarters.  NVIDIA announced that the IP licensing agreement with Intel would be discontinued this quarter and would not be renewed.  We know that AMD has entered into an agreement with Intel to provide graphics IP to the company in future parts and to cover Intel in potential licensing litigation.  This was a fair amount of money per quarter for NVIDIA, but their other divisions more than made up for the loss of this particular income.

NVIDIA certainly seems to be hitting on all cylinders and is growing into markets that simply were not available to it five to ten years ago.  They are spreading out their financial base so as to avoid the boom and bust cycles of any one industry.  Next quarter NVIDIA expects revenue to be down seasonally, into the $1.9 billion range.  Even though that number is lower, it would still represent the third highest quarterly revenue in the company’s history.

Source: NVIDIA

Blender Foundation Releases 2.78b... for Performance!

Subject: General Tech | February 9, 2017 - 10:03 PM |
Tagged: Blender

It has been a few months since the release of 2.78, and the Blender Foundation has been sitting on a bunch of performance enhancements in that time. Since 2.79 is still a couple of months off, they decided to “cherry pick” a bunch of them back into the 2.78 branch and push out an update to it. Most of these updates are things like multi-threading the shader compiler for Cycles, speeding up motion blur in Cycles, and reducing “fireflies” in Cycles renders, which indirectly helps performance by requiring fewer light samples to average out the noise.

blender-2016-278logo.jpg

I tried running two frames from different scenes of my upcoming PC enthusiast explanation video. While they are fairly light motion graphics sequences, they both use a little bit of motion blur (~half of a 60 Hz frame of integration), and one of the two frames lands in the middle of three fast-moving objects with volumetric absorption.

0580.png

The "easier" scene to render.

With my GTX 670 disabled and only my GTX 1080 rendering, the easier scene went from 9.96s in 2.78a to 9.99s in 2.78b. The harsher scene, with volumetric absorption and a big streak of motion blur, went from 36.4s in 2.78a to 36.31s in 2.78b. My typical render settings include a fairly high sample count, though, so it’s possible that I could get away with fewer samples and save time that way.
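
If you want to run a similar before-and-after comparison yourself, here is a minimal sketch that renders one frame headlessly with each build and times it. The paths, scene file, and frame number are placeholders for your own setup, and the timing includes Blender startup and scene load, unlike the render-only times quoted above.

```python
import subprocess
import time

# Hypothetical paths -- point these at your own Blender builds and .blend file.
BLENDER_BUILDS = {
    "2.78a": r"C:\blender-2.78a\blender.exe",
    "2.78b": r"C:\blender-2.78b\blender.exe",
}
SCENE = r"C:\projects\enthusiast-video.blend"
FRAME = 580  # frame number to render

for version, blender_exe in BLENDER_BUILDS.items():
    start = time.perf_counter()
    # -b runs Blender headless; -f renders a single frame with the settings saved in the file
    subprocess.run([blender_exe, "-b", SCENE, "-f", str(FRAME)], check=True)
    elapsed = time.perf_counter() - start
    print(f"Blender {version}: frame {FRAME} finished in {elapsed:.2f} s (includes startup/load)")
```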

0605.png

The "harsher" scene to render.

Blender is currently working on Agent 327, an upcoming animated feature film. Typically, these movies guide development of the project, so it makes sense that my little one-person motion graphics project won’t have the complexity to show the huge optimizations that they’re targeting. Also, I had a lot of other programs running, which is known to make a significant difference in render time, although they were doing the same things between runs. No browser tabs were opened or closed, the same videos were running on other monitors while 2.78a and 2.78b were working, and so on. But yeah, it's not a bulletproof benchmark by any means.

Also, some of the optimizations solve bugs with Intel’s CPU implementation as well as increase the use of SSE 4.1+ and AVX2. Unfortunately for AMD, these were pushed up right before the launch of Ryzen, and Blender with Cycles has been one of their go-to benchmarks for multi-threaded performance. While this won’t hurt AMD any more than typical version-to-version variations, it should give a last-minute boost to their competitors on AMD’s home turf.

Blender 2.78b is available today, free as always, at their website.

New graphics drivers? Fine, back to benchmarking.

Subject: Graphics Cards | February 9, 2017 - 07:46 PM |
Tagged: amd, nvidia

New graphics drivers are a boon to everyone who isn't a hardware reviewer, especially one who has just wrapped up benchmarking a new card the same day one is released.  To address this issue and see what changes have been implemented by AMD and NVIDIA in their last few releases, [H]ard|OCP tested a slew of recent drivers from both companies.  The performance of AMD's past releases, up to and including the AMD Crimson ReLive Edition 17.1.1 Beta, can be found here.  For NVIDIA users, recent drivers covering up to the 378.57 Beta Hotfix are right here.  The tests show both companies generally improving performance with each release, though the changes are small enough that you are not going to notice a large difference.

0chained-to-office-man-desk-stick-figure-vector-43659486-958x1024.jpg

"We take the AMD Radeon R9 Fury X and AMD Radeon RX 480 for a ride in 11 games using drivers from the time of each video card’s launch date, to the latest AMD Radeon Software Crimson ReLive Edition 17.1.1 Beta driver. We will see how performance in old and newer games has changed over the course of 2015-2017 with new drivers. "

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Have you ever noticed how popular June 21, 2006 is?

Subject: General Tech | February 9, 2017 - 06:51 PM |
Tagged: workaround, microsoft

Have you ever noticed how many drivers on your system are dated June 21st, 2006?  If not, pop open Device Manager and take a look at some of your devices which don't use a driver directly from the manufacturer.  Slashdot posted a link to the inimitable Raymond Chen, who explains exactly why so many of your drivers bear that date.  The short version is that this is a workaround which prevents newer Microsoft drivers from overwriting manufacturers' drivers, by ensuring the date stamp on the Microsoft driver will never be more recent than the manufacturer's.  This is especially important for laptop users, as even the simple chipset drivers will be supplied by the manufacturer.  For instance, this processor is old, but not that old!

2006.PNG

"When the system looks for a driver to use for a particular piece of hardware, it ranks them according to various criteria. If a driver provides a perfect match to the hardware ID, then it becomes a top candidate. And if more than one driver provides a perfect match, then the one with the most recent timestamp is chosen."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot

Podcast #436 - ECS Mini-STX, NVIDIA Quadro, AMD Zen Arch, Optane, GDDR6 and more!

Subject: Editorial | February 9, 2017 - 03:50 PM |
Tagged: podcast, Zen, Windows 10 Game Mode, webcam, ryzen, quadro, Optane, nvidia, mini-stx, humble bundle, gddr6, evga, ECS, atom, amd, 4k

PC Perspective Podcast #436 - 02/09/17

Join us for ECS Mini-STX, NVIDIA Quadro, AMD Zen Arch, Optane, GDDR6 and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Allyn Malventano, Ken Addison, Josh Walrath, Jeremy Hellstrom

Program length: 1:32:21

Podcast topics of discussion:

  1. Week in Review:
  2. News items of interest:
    1. 1:14:00 Zen Price Points Leaked
  3. Hardware/Software Picks of the Week
  4. Closing/outro
 
 


WebKit Proposal for WebGPU

Subject: General Tech | February 9, 2017 - 03:46 AM |
Tagged: webkit, webgpu, metal, vulkan, webgl

Apple’s WebKit team has just announced their proposal for WebGPU, which competes with WebGL to provide graphics and GPU compute to websites. Being from Apple, it is based on the Metal API, so it has a lot of potential, especially as a Web graphics API.

Okay, so I have mixed feelings about this announcement.

apple-2017-webkit-logo.png

First, and most concerning, is that Apple has attempted to legally block open standards in the past. For instance, when The Khronos Group created WebCL based on OpenCL, which Apple owns the trademark and several patents to, Apple shut the door on extending their licensing agreement to the new standard. If the W3C considers Apple’s proposal, they should be really careful about what legal control they allow Apple to retain.

From a functionality standpoint, this is very interesting, though. With the aforementioned death of WebCL, as well as the sluggish progress of WebGL compute shaders, there’s a lot of room to use one (or more) GPUs in a system for high-end compute tasks. Even if you are not interested in gaming in a web browser (although many people are, especially if you count the market that Adobe Flash dominated for the last ten years), you might want to GPU-accelerate photo and video tasks. Having an API that allows for this would be very helpful going forward, although, as stated, others are working on it, like The Khronos Group with WebGL compute shaders. On the other-other hand, an API that allows explicit multi-GPU would be even more interesting.

Further, it sounds like they’re even intending to ingest byte-code, like what DirectX 12 and Vulkan are doing with DXIL and SPIR-V, respectively, but it currently accepts shader code as a string and compiles it in the driver. This is interesting from a security standpoint, because it obfuscates what GPU-executed code consists of, but that’s up to the graphics and browser vendors to figure out... for now.

So when will we see it? No idea! There’s an experimental WebKit patch, which requires the Metal API, and an API proposal... a couple blog posts... a tweet or two... and that’s about it.

So what do you think? Does the API sound interesting? Does Apple’s involvement scare you? Or does getting scared about Apple’s involvement annoy you? Comment away! : D

Source: WebKit

AMD Details Zen at ISSCC

Subject: Processors | February 9, 2017 - 02:38 AM |
Tagged: Zen, Skylake, Samsung, ryzen, kaby lake, ISSCC, Intel, GLOBALFOUNDRIES, amd, AM4, 14 nm FinFET

Yesterday EE Times posted some interesting information that they had gleaned at ISSCC.  AMD released a paper describing the design process and advances they were able to achieve with the Zen architecture manufactured on Samsung’s/GF’s 14 nm FinFET process.  AMD went over some of the basic measurements at the transistor scale and how they compare to what Intel currently has on their latest 14 nm process.

icon.jpg

The first thing that jumps out is that AMD claims their 4 core/8 thread x86 core complex is about 10% smaller than what Intel has in one of their latest CPUs.  We assume that means either Kaby Lake or Skylake.  AMD did not go over exactly what they were counting when sizing the cores, because there are some significant differences between the two architectures.  We are not sure if that 44 mm sq. figure includes the L3 cache or the L2 caches.  My guess is that it probably includes the L2 cache but not the L3.  I could easily be wrong here.

Going down the table, we see that AMD and Samsung/GF are able to get their cache arrays smaller than what Intel manages.  AMD has double the amount of L2 cache per core, yet it is only about 60% larger in area than Intel’s 256 KB L2.  AMD’s L3 is also smaller in area than Intel’s: both are 8 MB units, but AMD comes in at 16 mm sq. while Intel is at 19.1 mm sq.  There will be differences in how AMD and Intel set up these caches, and until we see L3 performance comparisons we cannot assume too much.

Zen-comparison.png

(Image courtesy of ISSCC)

In some of the basic measurements of the different processes we see that Intel has advantages throughout.  This is not surprising, as Intel is well known for pushing process technology beyond what others are able to do.  In theory their products will have denser logic throughout, including the SRAM cells.  Looking at this information, we wonder how AMD has been able to make their cores and caches smaller.  Part of that is likely due to how the caches are controlled and accessed.

One of the most likely culprits for this smaller size is the less advanced FPU/SSE/AVX units that AMD has in Zen.  Zen supports AVX-256, but those operations take double the cycles; it can do single cycle AVX-128, but Intel’s throughput is much higher than what AMD can achieve.  AVX is not the end-all, be-all, but it is gaining in importance in high performance computing and editing applications.  David Kanter, in his article covering the architecture, explicitly said that AMD made this decision to keep die size and power in check for this product.

Ryzen will undoubtedly be a pretty large chip overall once both modules and 16 MB of L3 cache are put together.  My guess would be in the 220 mm sq. range, but again that is only a guess once all is said and done (northbridge, southbridge, PCI-E controllers, etc.).  What is perhaps most interesting of all is that AMD has a part that on the surface is very close to the Broadwell-E based Intel i7 chips.  The i7-6900K runs at 3.2 to 3.7 GHz, features 8 cores and 16 threads, and around 20 MB of L2/L3 cache.  AMD’s top end looks to run at 3.6 GHz, features the same number of cores and threads, and has 20 MB of L2/L3 cache.  The Intel part is rated at 140 watts TDP while the AMD part will have a max of 95 watts TDP.
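
For what it is worth, here is the back-of-envelope math behind that ~220 mm sq. guess, assuming the 44 mm sq. CCX figure includes L2 but not L3 (as speculated above), two CCXes, and two 8 MB L3 slices at 16 mm sq. each; whatever is left over is uncore (memory controller, PCI-E, I/O, and so on). Rough numbers only.

```python
# Back-of-envelope Ryzen die area using the ISSCC figures quoted above.
ccx_area_mm2 = 44        # one 4 core / 8 thread CCX, assumed to include L2 but not L3
l3_slice_mm2 = 16        # one 8 MB L3 slice
num_ccx = 2

cores_plus_l3 = num_ccx * (ccx_area_mm2 + l3_slice_mm2)   # 2 * (44 + 16) = 120 mm^2
die_guess = 220                                            # the guess from the text above
uncore = die_guess - cores_plus_l3                         # ~100 mm^2 for everything else

print(f"Cores + L3: {cores_plus_l3} mm^2, leaving ~{uncore} mm^2 for the uncore")
```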

If Ryzen is truly competitive in this top end space (with a price to undercut Intel, yet not destroy their own margins) then AMD is going to be in a good position for the rest of this year.  We will find out exactly what is coming our way next month, but all indications point to Ryzen being competitive in overall performance while being able to undercut Intel in TDPs for comparable cores/threads.  We are counting down the days...

Source: AMD

ZTE Axon 7 Receives OTA Nougat Update

Subject: Mobile | February 9, 2017 - 12:26 AM |
Tagged: zte, axon 7, google, nougat, Android, android 7.0

Well that was quick. About two weeks ago, we reported on ZTE Mobile Deutschland’s Facebook post that said Android 7.0 would miss January, but arrive some time in Q1. For North America, that apparently means the second week of February, because my device was just notified, about an hour ago, that A2017UV1.1.0B15 was available for over-the-air update. It just finished installing.

zte-2017-axon7-nougatupdate.jpg

In my case, I needed to hold the power button a few times to get the phone to boot into the second stage of installation, but ZTE mentions this in the pre-install notes, so that’s good. Then, when the phone moved on to the new lock screen, my fingerprint reader didn’t work until after I typed in the lock screen password. I’m not sure why the phone wouldn’t accept a fingerprint until after I successfully logged in, especially since it kept the fingerprints on file from Android 6.0 and I didn’t need to set them up again, but it’s a small inconvenience. Just don’t perform the update if you can’t access your password manager and you don’t remember the unlock code off the top of your head.

While I don’t have a Daydream VR headset, I’ll probably pick one up soon and give it a test. The Daydream app is installed on the device, though, so you can finally enjoy Android-based VR content if you pick one up.

If your phone hasn’t alerted you yet, find your unlock password and check for updates in the settings app.

Source: ZTE

A peek at The Bard’s Tale 4

Subject: General Tech | February 8, 2017 - 07:24 PM |
Tagged: gaming, bards tale, inxile

inXile have been very busy recently, doing a stellar job of resurrecting Wasteland as a new, modern RPG which will soon see its third incarnation released.  The long anticipated Torment: Tides of Numenera arrives at the end of this month; the beta has been a tantalizing taste, as was the YouTube 'choose your own adventure' teaser.  There is another project they have been working on as well: bringing the old Bard's Tale games into the modern era.  A trailer showing in-game footage, including combat, has just been released, which you can see over at Rock, Paper, SHOTGUN.  It certainly doesn't look like the Bard's Tale of old!

08bardstale.jpg

"On the game HUD, you can see your party occupying 2 rows of 4 spaces each. Enemies will line up on the opposite grid with the same number of slots. The exact positioning of enemies, as well as your own party, will determine which attacks can land, and which will swing wild past their mark."

Here is some more Tech News from around the web:

Gaming

Jump into Kaby Lake naked

Subject: Processors | February 8, 2017 - 06:16 PM |
Tagged: kaby lake, i5-7600K, Intel

[H]ard|OCP followed up their series on replacing the TIM underneath the heatspreader on Kaby Lake processors with another series depicting the i5-7600K in the buff.  They removed the heatspreader completely and tried watercooling the die directly.  As you can see in the video, this requires more work than you might assume; it was not simply a matter of shimming, as some of the socket on the motherboard needed to be trimmed with a knife in order to get the waterblock to sit directly on the core.  In the end the results were somewhat depressing: the risks involved are high and the benefits almost non-existent.  If you are willing to risk it, replacing the TIM and reattaching the heatspreader is a far better choice.

getimage.jpg

"After our recent experiments with delidding and relidding our 7700K and 7600K to see if we could get better operating temperatures, we decided it was time to go topless! Popping the top on your CPU is one thing, and getting it to work in the current processor socket is another. Get out your pocket knife, we are going to have to make some cuts."

Here are some more Processor articles from around the web:

Processors

Source: [H]ard|OCP

FreeSync 2 - The Adaptification!

Subject: General Tech | February 8, 2017 - 05:44 PM |
Tagged: amd, FreeSync2, David Glen, Syed Hussain

TechARP published a video of their interview with AMD's David Glen and Syed Hussain in which they discussed what to expect from FreeSync 2.  They also listed some key points for those who do not wish to watch the full video; either can be found right here.  The question on most people's minds is answered immediately: this will not be a Vega-only feature, and if your GPU supports the current version of FreeSync it will support the sequel.  We will not see support until a new driver is released, but then again we are also waiting for new monitors to hit the market, so it is hard to be upset at AMD for the delay.

FreeSync-2-presentation-16.jpg

"While waiting for AMD to finalise Radeon FreeSync 2 and its certification program for their partners, let’s share with you our Q&A session with the two key AMD engineers in charge of the Radeon FreeSync 2 project – David Glen and Syed Athar Hussain."

Here is some more Tech News from around the web:

Tech Talk

Source: Techgage

ASRock Announces H110-STX MXM Mini-STX Motherboard with MXM GPU Support

Subject: Motherboards | February 8, 2017 - 03:15 PM |
Tagged: small form factor, SFF, PCI-E 3.0, MXM, motherboard, mobile gpu, mini-stx, H110-STX-MXM, asrock

ASRock has announced a new mini-STX motherboard with an interesting twist, as the H110-STX MXM motherboard offers support for current MXM (version 3.0b, up to 120W) mobile graphics cards.

H110_STX_MXM.jpg

Like the ECS H110 motherboard featured in our recent Mini-STX build, the ASRock H110-STX MXM is based on the LGA1151 socket (though the supported CPU TDP was not stated in the source post) and offers a pair of SODIMM slots for up to 32GB of DDR4 notebook memory. Storage support is excellent with dual SATA ports and M.2 SSD support. Importantly, this ASRock board uses PCI Express 3.0 on both the MXM (PCIe 3.0 x16) and M.2 (PCIe 3.0 x4) slots. Display output capability is excellent as well, quoting the TechPowerUp post:

"Display connectivity includes one HDMI port that's wired to the CPU's onboard graphics, a second HDMI port wired to the MXM slot, a full-size DisplayPort wired to the MXM, and a Thunderbolt port with mini-DisplayPort wiring to the MXM."

There are some roadblocks to building up a gaming system with this motherboard, not the least of which is cost. Consider that compatible MXM 3.0b options (with a recent GPU) are hundreds of dollars from a place like Eurocom (a GTX 980M is around $800, for example). Naturally, if you had a damaged gaming notebook with a usable MXM GPU, this board might be a nice option for re-purposing that graphics card. Cooling for the MXM card is another issue, though harvesting a card from a notebook could potentially allow you to reuse the laptop's existing thermal solution.

H110_STX_MXM_2.jpg

Look closely and you will see a Z270 product name in this ASRock photo

Update: We now have full specifications from ASRock's product page, which include:

  • Socket LGA1151 for Intel Core i7/i5/i3/Pentium/Celeron (Kabylake)
  • Supports MXM Graphics Card (Type-B , Up to 120W)
  • Supports DDR4 2400MHz, 2 x SO-DIMM, up to 32GB system memory
  • 1 x HDMI (4K@60Hz), 1x HDMI, 1x DisplayPort, 1x Mini-DisplayPort
  • 3x USB3.0 Type-A, 1x Thunderbolt 3 with USB 3.1 Type-C
  • 1x M.2 (Key E), 2x M.2 (Key M)
  • 1x Intel i219V Gigabit LAN
  • DC 19V / 220W power input

Of note, the chipset is listed as Z270, though the product name and primary motherboard photo suggest H110. The H110-STX MXM is part of ASRock's industrial motherboard offerings (with signage and gaming as the mentioned applications), and it includes a 220W power supply. Pricing and availability were not mentioned.

Source: TechPowerUp

Star Wars Humble Bundle III; better than the movie!

Subject: General Tech | February 7, 2017 - 08:01 PM |
Tagged: gaming, Star Wars, humble bundle

Do you like Star Wars games, PCPer, and Unicef?  If so, there is a Humble Bundle perfect for you running for the next two weeks.  Depending on how much you pay, you can get up to 15 games and an X-Wing versus TIE Fighter t-shirt, with a percentage of your purchase helping us continue to provide the content you love.  There is some overlap with previous bundles you may have picked up, but for those of you missing KOTOR 1 or 2, The Force Unleashed 1 or 2, Shadows of the Empire, or even the second Star Wars Battlefront game, it is well worth the cost.

68ceffd08ee3ec146869d8eda9767d32eefcb7fa.png

How can you resist that t-shirt?

 

The new ASUS Maximus IX Formula is put through its paces

Subject: Motherboards | February 7, 2017 - 07:35 PM |
Tagged: z270 express, Maximus IX Formula, intel z270, ASUS ROG, asus

ASUS' Maximus Formula series has become familiar to high end system builders, and the newest member looks to live up to our expectations.  The list of features is comprehensive, including two M.2 slots and a U.2 slot, two USB 3.1 ports (one of them Type-C), and an ASUS 2T2R dual band 802.11a/b/g/n/ac antenna.  [H]ard|OCP had mixed results when overclocking: some testers had a perfect experience while others ran into hurdles, which may be due to the processors they used, so do not immediately write this motherboard off.  Take a look at the full review before you decide one way or the other.

1486385016RAfE27Cjtg_1_8_l.jpg

"ASUS is nothing like Hollywood. ASUS can actually turn out sequels which not only match the originals, but surpass them. ASUS Republic of Gamers Maximus IX Formula is another sequel in the long line of Maximus motherboards. Can ASUS continue its long history of awesome sequels? One things for certain, it’s no Robocop 3."

Here are some more Motherboard articles from around the web:

Motherboards

 

Source: [H]ard|OCP

Intel's Atom C2xxx processors may just make like a banana and split

Subject: General Tech | February 7, 2017 - 06:31 PM |
Tagged: Intel, c2000, Avoton

"System May Experience Inability to Boot or May Cease Operation" is not the errata note you want to read, but for those running devices powered by an Intel Avoton C2xxx family Atom processor it is something to pay attention to.  The Low Pin Count bus clock may stop functioning permanently after the chip has been in service for a time, rendering the device non-functional.  Intel had little to say about the issue when approached by The Register but did state that there is a board level workaround available to resolve the issue.

The Avoton family of chips was released in 2013 and was designed to compete against ARM's new low powered server chips.  The flaw is likely responsible for the recently reported issues with Cisco routers; the chip can also be found in the Synology DS1815+ and some Dell server products.  It will be interesting to see how Intel responds to this issue, as they have a history of reluctance to discuss flaws in their products' architecture.

Avoton.png

"Intel's Atom C2000 processor family has a fault that effectively bricks devices, costing the company a significant amount of money to correct. But the semiconductor giant won't disclosed precisely how many chips are affected nor which products are at risk."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Logitech Announces BRIO Webcam: 4K and HDR

Subject: General Tech | February 7, 2017 - 09:31 AM |
Tagged: logitech, webcam, brio, 4k, hdr

Today’s announcement of the Logitech BRIO rolls in many features that have been lacking in webcams. With it, you can record in 720p30, 720p60, 1080p30, 1080p60, and, the big reveal, 4K30. It is also capable of shooting in HDR using RightLight 3, although they don’t specify color space formats, so it’s unclear what you will be able to capture with video recording software.

logitech-2017-brio-hero.png

On top of these interesting video modes, the camera also supports infrared for Windows Hello “or other facial recognition software”. Unlike Intel’s RealSense, the webcam claims support for the relatively ancient Core 2 and higher, which sounds promising for AMD users. I’m curious what open-source developers will be able to accomplish, especially if it’s general enough to do background rejection (and so forth). Obviously, this is just my speculation -- Logitech hasn’t even hinted at this in their documentation.

As you would expect for a 4K sensor, Logitech is also advertising quite a bit of digital zoom. They claim up to 5X zoom and a user-configurable field of view between 65 and 90 degrees.

Finally, the price is $199 USD / $249 CDN and it ships today.

Source: Logitech

Mozilla to Require Rust (and Dependencies) for Firefox

Subject: General Tech | February 7, 2017 - 07:47 AM |
Tagged: mozilla, firefox, web browser, Rust, llvm

Firefox 52 will be the company’s next Extended Support Release (ESR) branch of their popular web browser. After this release, Mozilla is planning a few changes that will break compatibility, especially if you’re building the browser from source. If you’re an end-user, the major one to look out for is Mozilla disabling NPAPI-based plugins (except Flash) unless you are using Firefox 52 ESR. This change will land in the consumer version of Firefox 52, though. It’s not really clear why they didn’t just wait until Firefox 53, rather than add a soft-kill in Firefox 52 and hard-code it in the next version, but that’s their decision. It really does not affect me in the slightest.

mozilla-rust.png

The more interesting change, however, is that Mozilla will begin requiring Rust (and LLVM) in an upcoming version. I’ve seen multiple sources claim Firefox 53, Firefox 54, and Firefox 55 as possible targets for this, but, at some point around those versions, critical components of the browser will be written in Rust. As more of the browser is migrated to this language, it should be progressively faster and more secure, as this language is designed to enforce memory safety and task concurrency.

Firefox 52 is expected in March.

If you were going to sell Mechanical Keyboards, what name would you choose?

Subject: General Tech | February 6, 2017 - 10:26 PM |
Tagged: MK Fission, mechanical keyboard, input, Cherry MX

If you wanted the name MechanicalKeyboards.com, then TechPowerUp has some bad news for you: it is already taken.  When not brainstorming with Captain Obvious, they are the North American retailer for Ducky Keyboards, a name you might just possibly have heard before.  Their MK Fission comes in 18 flavours; you can only choose black or white keycaps, but you have your choice of the full range of Cherry switches.  If you have lost track of the score, that includes Red, Brown, Blue, Black, Silent Red, Speed Silver, Green, Clear, and White.  The keyboard has blue backlighting, and the RGB disease has only infected the outer casing, giving it a look which might be familiar to anyone who knew someone in the '90s with questionable taste in car accessories.

mk-fission.jpg

"MechanicalKeyboards.com is a prominent retailer of mechanical keyboards, as the name would suggest, based in the USA. Today we get to take a look at their new MK Fission full size keyboard that comes in 18 possible options to choose from, Yes, there is RGB included but perhaps not the way you think."

Here is some more Tech News from around the web:

Tech Talk

 

Source: TechPowerUp

Lenovo Announces new ThinkPad P51s P51 and P71 Mobile Workstations

Subject: Systems, Mobile | February 6, 2017 - 08:37 PM |
Tagged: xeon, Thinkpad, quadro, P71, P51s, P51, nvidia, notebook, mobile workstation, Lenovo, kaby lake, core i7

Lenovo has announced a trio of new ThinkPad mobile workstations, featuring updated Intel 7th-generation Core (Kaby Lake) processors and NVIDIA Quadro graphics, and among these is the thinnest and lightest ThinkPad mobile workstation to date in the P51s.

P51s.jpg

"Engineered to deliver breakthrough levels of performance, reliability and long battery life, the ThinkPad P51s features a new chassis, designed to meet customer demands for a powerful but portable machine. Developed with engineers and professional designers in mind, this mobile workstation features Intel’s 7th generation Core i7 processors and the latest NVIDIA Quadro dedicated workstation graphics, as well as a 4K UHD IPS display with optional IR camera."

Lenovo says that the ThinkPad P51s is more than a half pound lighter than the previous generation (P50s), stating that "the P51s is the lightest and thinnest mobile workstation ever developed by ThinkPad" at 14.4 x 9.95 x 0.79 inches, with weight starting at 4.3 lbs.

Specs for the P51s include:

  • Up to a 7th Generation Intel Core i7 Processor
  • NVIDIA Quadro M520M Graphics
  • Choice of standard or touchscreen FHD (1920 x 1080) IPS, or 4K UHD (3840 x 2160) IPS display
  • Up to 32 GB DDR4 2133 RAM (2x SODIMM slots)
  • Storage options including up to 1 TB (5400 rpm) HDD and 1 TB NVMe PCIe SSDs
  • USB-C with Intel Thunderbolt 3
  • 802.11ac and LTE-A wireless connectivity

Lenovo also announced the ThinkPad P51, which is slightly larger than the P51s, but brings the option of Intel Xeon E3-v6 processors (in addition to Kaby Lake Core i7 CPUs), Quadro M2200M graphics, faster 2400 MHz memory up to 64 GB (4x SODIMM slots), and up to a 4K IPS display with X-Rite Pantone color calibration.

Thinkpad_P51.jpg

Finally there is the new VR-ready P71 mobile workstation, which offers up to an NVIDIA Quadro P5000M GPU along with Oculus and HTC VR certification.

"Lenovo is also bringing virtual reality to life with the new ThinkPad P71. One of the most talked about technologies today, VR has the ability to bring a new visual perspective and immersive experience to our customers’ workflow. In our new P71, the NVIDIA Pascal-based Quadro GPUs offer a stunning level of performance never before seen in a mobile workstation, and it comes equipped with full Oculus and HTC certifications, along with NVIDIA’s VR-ready certification."

Thinkpad_P71.jpg

Pricing and availability is as follows:

  • ThinkPad P51s, starting at $1049, March
  • ThinkPad P51, starting at $1399, April
  • ThinkPad P71, starting at $1849, April
Source: Lenovo
Author:
Manufacturer: NVIDIA

NVIDIA P100 comes to Quadro

At the start of the SOLIDWORKS World conference this week, NVIDIA took the cover off of a handful of new Quadro cards targeting professional graphics workloads. Though the bulk of NVIDIA’s discussion covered lower cost options like the Quadro P4000, P2000, and below, the most interesting product sits at the high end, the Quadro GP100.

As you might guess from the name alone, the Quadro GP100 is based on the GP100 GPU, the same silicon used on the Tesla P100 announced back in April of 2016. At the time, the GP100 GPU was specifically billed as an HPC accelerator for servers. It had a unique form factor with a passive cooler that required additional chassis fans. Just a couple of months later, a PCIe version of the GP100 was released under the Tesla GP100 brand with the same specifications.

quadro2017-2.jpg

Today that GPU hardware gets a third iteration as the Quadro GP100. Let’s take a look at the Quadro GP100 specifications and how it compares to some recent Quadro offerings.

|                        | Quadro GP100           | Quadro P6000          | Quadro M6000          | Full GP100      |
|------------------------|------------------------|-----------------------|-----------------------|-----------------|
| GPU                    | GP100                  | GP102                 | GM200                 | GP100 (Pascal)  |
| SMs                    | 56                     | 60                    | 48                    | 60              |
| TPCs                   | 28                     | 30                    | 24                    | (30?)           |
| FP32 CUDA Cores / SM   | 64                     | 64                    | 64                    | 64              |
| FP32 CUDA Cores / GPU  | 3584                   | 3840                  | 3072                  | 3840            |
| FP64 CUDA Cores / SM   | 32                     | 2                     | 2                     | 32              |
| FP64 CUDA Cores / GPU  | 1792                   | 120                   | 96                    | 1920            |
| Base Clock             | 1303 MHz               | 1417 MHz              | 1026 MHz              | TBD             |
| GPU Boost Clock        | 1442 MHz               | 1530 MHz              | 1152 MHz              | TBD             |
| FP32 TFLOPS (SP)       | 10.3                   | 12.0                  | 7.0                   | TBD             |
| FP64 TFLOPS (DP)       | 5.15                   | 0.375                 | 0.221                 | TBD             |
| Texture Units          | 224                    | 240                   | 192                   | 240             |
| ROPs                   | 128?                   | 96                    | 96                    | 128?            |
| Memory Interface       | 1.4 Gbps 4096-bit HBM2 | 9 Gbps 384-bit GDDR5X | 6.6 Gbps 384-bit GDDR5 | 4096-bit HBM2  |
| Memory Bandwidth       | 716 GB/s               | 432 GB/s              | 316.8 GB/s            | ?               |
| Memory Size            | 16 GB                  | 24 GB                 | 12 GB                 | 16 GB           |
| TDP                    | 235 W                  | 250 W                 | 250 W                 | TBD             |
| Transistors            | 15.3 billion           | 12 billion            | 8 billion             | 15.3 billion    |
| GPU Die Size           | 610 mm2                | 471 mm2               | 601 mm2               | 610 mm2         |
| Manufacturing Process  | 16nm                   | 16nm                  | 28nm                  | 16nm            |

There are some interesting stats here that may not be obvious at first glance. Most interesting is that, despite the pricing and segmentation, the GP100 is not automatically the fastest Quadro card from NVIDIA; it depends on your workload. With 3584 CUDA cores running at somewhere around 1400 MHz at Boost speeds, the single precision (32-bit) rating for GP100 is 10.3 TFLOPS, less than the recently released P6000 card. Based on GP102, the P6000 has 3840 CUDA cores running at something around 1500 MHz for a total of 12 TFLOPS.
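
Those peak ratings fall out of a simple formula: each FP32 CUDA core can retire one fused multiply-add (two floating point operations) per clock, so peak throughput is roughly cores × 2 × boost clock. A quick sanity check against the table (the small differences from the official figures come down to rounding and the exact clock assumed):

```python
# Peak throughput: CUDA cores * 2 ops per clock (FMA) * boost clock.
def peak_tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz * 1e6 / 1e12

print(f"Quadro GP100 FP32: {peak_tflops(3584, 1442):.1f} TFLOPS")   # ~10.3
print(f"Quadro GP100 FP64: {peak_tflops(1792, 1442):.2f} TFLOPS")   # ~5.2
print(f"Quadro P6000 FP32: {peak_tflops(3840, 1530):.1f} TFLOPS")   # ~11.8 (quoted as 12.0)
print(f"Quadro P6000 FP64: {peak_tflops(120, 1530):.3f} TFLOPS")    # ~0.367 (quoted as 0.375)
```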

gp102-blockdiagram.jpg

GP100 (full) Block Diagram

Clearly the placement for Quadro GP100 is based around its 64-bit, double precision performance, and its ability to offer real-time simulations on more complex workloads than other Pascal-based Quadro cards can handle. The Quadro GP100 offers a 1/2 DP compute rate, totaling 5.2 TFLOPS. The P6000, on the other hand, is only capable of 0.375 TFLOPS with the standard, consumer-level 1/32 DP rate. ECC memory support on the GP100 is also something no other recent Quadro card offers.

quadro2017-3.jpg

Raw graphics performance and throughput is an open question until someone does some testing, but it seems likely that the Quadro P6000 will still be the best solution for that by at least a slim margin. With a higher CUDA core count, higher clock speeds, and an equivalent architecture, the P6000 should run games, graphics rendering, and design applications very well.

There are other important differences offered by the GP100. The memory system is built around a 16GB HBM2 implementation, which means more total memory bandwidth but a lower capacity than the 24GB Quadro P6000. Offering 66% more memory bandwidth does give the GP100 an advantage in applications that are bound by pixel throughput, as long as the compute capability keeps up on the back end.
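
The bandwidth figures follow the same kind of simple math: bus width (in bytes) multiplied by the per-pin data rate. A quick check using the interface specs from the table above:

```python
# Memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(f"Quadro GP100 (4096-bit HBM2 @ 1.4 Gbps): {bandwidth_gbs(4096, 1.4):.0f} GB/s")  # ~717 (quoted as 716)
print(f"Quadro P6000 (384-bit GDDR5X @ 9 Gbps):  {bandwidth_gbs(384, 9.0):.0f} GB/s")   # 432
print(f"Quadro M6000 (384-bit GDDR5 @ 6.6 Gbps): {bandwidth_gbs(384, 6.6):.1f} GB/s")   # 316.8
```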

m.jpg

Continue reading our preview of the new Quadro GP100!