Subject: General Tech | April 1, 2014 - 03:17 PM | Jeremy Hellstrom
Nothing beats speculating on a product that hasn't been released yet; often it ends up being more fun than the release itself. Currently DX12 is providing great fodder for enthusiasts, especially when comparisons to Mantle come up in conversation. The Tech Report is passing some ammunition to online prognosticators by fleshing out the debate with some history and a review of what was announced and what has been stated since. One of their biggest secondary sources is Matt Sandy's blog; as a DirectX developer he is a knowledgeable source on the new API, insofar as he is allowed to speak about it. Check out the three-page post here for a good summary of what we know so far.
"We already covered the basics of DirectX 12 amid the GDC frenzy. Now that we've had time to study our notes from the show, we can delve into a little more detail about the new API's inception, the key ways in which it differs from DirectX 11, and what AMD and Nvidia think about it."
Here is some more Tech News from around the web:
- Nvidia expects better 2014, says company product marketing VP @ DigiTimes
- Intel Upgrades MinnowBoard: Baytrail CPU, Nearly Halves Price To $99 @ Slashdot
- Sticky Tahr-fy pudding: Ubuntu 14.04 slickest Linux desktop ever @ The Register
- How to Create and Manage Btrfs Snapshots and Rollbacks on Linux (part 2) @ Linux.com
- VMware reveals more VSAN nodes @ The Register
- Bitcoin Malware Infects Apple iAd @ TechARP
- Rollei Sunglasses Cam 100 @ NikKTech
Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM | Scott Michaud
Tagged: gdc 14, GDC, GCN, amd
While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors contain hundreds or even thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is compiling your scripts, known as shaders, into machine code for the hardware. If that machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.
Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.
AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), vectors are mostly treated as collections of scalars, and so forth. Tricks which attempt to combine instructions into vectors, such as using dot products, can impose unnecessary restrictions on the compiler and optimizer... as the hardware breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.
Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
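To illustrate the point about vector tricks, here is a toy sketch (plain Python, not GPU assembly) of how a scalar-oriented architecture like GCN would evaluate a dot product: the "vector" operation decomposes into one multiply-accumulate per component, which is exactly the fused multiply-add pattern the talk praises.

```python
def dot_scalarized(a, b):
    """Evaluate dot(a, b) the way a scalar-oriented GPU would:
    one multiply-accumulate per component, in sequence."""
    acc = 0.0
    for x, y in zip(a, b):
        # Each step maps to a single fused multiply-add instruction;
        # there is no monolithic "dot product" operation underneath.
        acc = x * y + acc
    return acc

# A float4 dot product is just four multiply-adds.
print(dot_scalarized([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]))  # 20.0
```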
I know I learned.
As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially alongside DirectX 12 and Mantle, which lighten CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that trends like this presentation will prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.
This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.
Subject: General Tech | March 28, 2014 - 07:28 PM | Jeremy Hellstrom
Tagged: audio, Creative, Sound Blaster AXX 200, wireless, speaker, microphone, battery charger
The Creative Sound Blaster AXX 200 is more than just a wireless speaker for your PC or smartphone; it is also a voice recorder, a hands-free microphone for your smartphone, and a battery charger. The Bluetooth speaker function can be set to stereo or 7.1 channel surround and will accept a signal from up to 10' away. The microphone feature has a similar range and can capture audio in a 360 degree area, and [H]ard|OCP was also able to make a hands-free call using only the AXX 200. Its USB ports turn it into a charging station as well, handy considering how integrated it is with your phone.
"With its unusual vertical, compact design, Creative's new flagship stereo speaker system features touch controls and a multitude of wired and wireless connectivity options for your mobile phone, tablet, Mac, and PC. Today, we will tell you if there is enough room in the "mix" for great sound as well."
Here is some more Tech News from around the web:
- Luxa2 GroovyW Wireless Speaker and Qi Charger Review @HiTech Legion
- Wavemaster Stax Second Generation Speakers @ Kitguru
- Tech-Life BeatBlock WET Weatherproof Bluetooth Speaker @ NikKTech
- Kingston HyperX Cloud gaming headset @ Kitguru
- TteSports Chao Dracco Captain Headset & Dracco Headphones @ eTeknix
- Cambridge Audio DacMagic XS @ techPowerUp
- Steelseries H-Wireless Gaming Headset @ Funky Kit
- SteelSeries H Wireless Gaming Headset @ NikKTech
- Tt eSPORTS Level 10 M Gaming Headset (Iron White) Review @ Madshrimps
- ROCCAT Kave XTD 5.1 Surround Sound Gaming Headphones Review @ Techgage
- itFenix FLO Headset Review @HiTech Legion
- CM Storm Pulse-R Aluminum Gaming Headset @ Funky Kit
- Turtle Beach Ear Force i60 Wireless Headset Review @ Legit Reviews
- Tritton Kama PlayStation 4 / Vita Gaming Headset @ eTeknix
Subject: General Tech, Displays | March 28, 2014 - 04:21 PM | Scott Michaud
Tagged: VR, valve, Oculus, facebook
Today, Oculus VR issued a statement which claims that Michael Abrash has joined their ranks as Chief Scientist. Abrash was hired by Valve in 2011, where he led, and apparently came up with the idea for, their wearable computing initiatives. For a time, he and Jeri Ellsworth were conducting similar projects until she, and many others, were forced out of the company for undisclosed reasons (she was allowed to take her project with her, which ultimately became CastAR). While I have yet to see an official announcement claim that Abrash has left Valve, I have serious doubts that he would be employed in both places for any reasonable period of time. With both gone, I wonder about Valve's wearable initiative going forward.
Abrash at Steam Dev Days
This press statement comes just three days after Facebook announced "definitive" plans to acquire Oculus VR for an equivalent of $2 billion USD (twice what Facebook paid for Instagram). Apparently, the financial stability of Facebook (... deep breath before continuing...) was the catalyst for this decision. VR research is expensive. Abrash is now comfortable working with them, gleefully expending R&D funds, advancing the project without sinking the ship.
And then there's Valve.
On last night's This Week in Computer Hardware (#260), Patrick Norton and I were discussing the Oculus VR acquisition. He claimed that he had serious doubts about whether Valve ever intended to ship a product. So far, the only product available that uses Valve's research is the Oculus Rift DK2. Honestly, while I have not really thought about it until now, it would not be surprising for Valve to contribute to the PC platform itself.
And, hey, at least someone is not afraid of Facebook's ownership.
Subject: General Tech | March 28, 2014 - 02:48 PM | Jeremy Hellstrom
Tagged: Samsung, galaxy s5
Some lucky Aussies at The Register sweet talked their way into a Samsung Galaxy S5 and have put together a brief preview for your reading pleasure. There are many new features you will someday be able to use, even if El Reg couldn't quite test them yet. There is a battery saving mode which should help road warriors and a fingerprint sensor which is touted to work with NFC to turn your S5 into a replacement for your credit cards so you don't have to carry them with you. There is more to see in the article, including the Galaxy Gear Neo smartwatch.
"This time around Samsung is keen on its battery-saving mode, IP67 rating and, once again, fitness features. Samsung Australia personnel swore blind all of those features were designed for an “Aussie lifestyle”. Because down here we all go to the beach every day, a supposition only slightly less believable than the notion that an S5 design meeting considered how to optimise sales in a nation of 23 million."
Here is some more Tech News from around the web:
- Trying Out & Benchmarking The DigitalOcean Cloud @ Phoronix
- HDD vendors promoting ultra-slim models @ DigiTimes
- Microsoft CEO Nadella launches Office for iPad, now live in the Apple App Store @ The Inquirer
- BlackBerry 10 given top-level clearance by Department of Defense @ The Inquirer
- Microsoft aims at global shipments of 25 million Windows tablets in 2014, say Taiwan makers @ DigiTimes
- PAPAGO! P2 Pro Dashcam @ eTeknix
- PAPAGO! P3 Dashcam @ Benchmark Reviews
- Terminator-maker 'Cyberdyne Inc' lists on Tokyo stock exchange @ The Register
Subject: General Tech | March 27, 2014 - 02:42 PM | Ken Addison
Tagged: W9100, video, titan z, poseidon 780, podcast, Oculus, nvidia, GTC, GDC
PC Perspective Podcast #293 - 03/27/2014
Join us this week as we discuss the NVIDIA Titan-Z, ASUS ROG Poseidon 780, News from OculusVR and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Week in Review:
0:37:07 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
News items of interest:
Hardware/Software Picks of the Week:
Josh: Certainly not a Skype Connection to the Studio
Allyn: Continuous ink conversions
Subject: General Tech | March 27, 2014 - 01:10 PM | Jeremy Hellstrom
Tagged: pascal, nvlink, nvidia, maxwell, jen-hsun huang, GTC
Before we get to see Volta in action, NVIDIA is taking a half step and releasing the Pascal architecture, which will use Maxwell-like Streaming Multiprocessors and introduce stacked 3D memory residing on the same substrate as the GPU. Jen-Hsun claimed this new type of memory will vastly increase available bandwidth, provide two and a half times the capacity, and be four times as energy efficient, all at the same time. Along with the 3D memory announcement came the reveal of NVLink, an alternative interconnect which he claims will offer 5-12 times the bandwidth of PCIe and will be utilized by HPC systems. He announced that NVLink will feature eight 20Gbps lanes per block, or "brick" as NVIDIA is calling them; from that figure The Tech Report made a quick calculation and came up with an aggregate bandwidth of around 20GB/s per brick. Read on to see what else was revealed.
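The Tech Report's back-of-the-envelope figure is easy to reproduce: eight lanes at 20Gbps each gives 160Gbps, or 20GB/s per brick (ignoring any link encoding overhead, which NVIDIA has not detailed).

```python
lanes_per_brick = 8
gbps_per_lane = 20                             # Gbps, per NVIDIA's announcement

total_gbps = lanes_per_brick * gbps_per_lane   # 160 Gbps aggregate per brick
gigabytes_per_second = total_gbps / 8          # 8 bits per byte

print(f"{gigabytes_per_second:.0f} GB/s per brick")  # 20 GB/s
```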
"Today during his opening keynote at the Nvidia GPU Technology Conference, CEO Jen-Hsun Huang offered an update to Nvidia's GPU roadmap. The big reveal was about a GPU code-named Pascal, which will be a generation beyond the still-being-introduced Maxwell architecture in the firm's plans."
Here is some more Tech News from around the web:
- Nvidia, VMware join to pipe high-quality 3D graphics from the cloud @ The Register
- Android has 97 Percent of Mobile Malware, But Nearly None in the U.S. @ DailyTech
- Amazon HALVES cloud storage prices after Google's shock slash @ The Register
- Bitcoin mining malware hits Android @ The Inquirer
- Facebook Oculus VR buy causes rift with developers and tech fans @ The Inquirer
- iSAW EXtreme Action Camera @ Kitguru
- Netgear VueZone VZSX2800 Wireless Surveillance Camera Kit @ eTeknix
Subject: General Tech | March 27, 2014 - 12:11 AM | Morry Teitelman
Tagged: hashing benchmarks, GPGPU performance, FinalWire, aida64
Courtesy of FinalWire
Today, FinalWire Ltd. announced the release of version 4.30 of their diagnostic and benchmarking tool, AIDA64. This new version updates their Extreme Edition and Business Edition of the software.
The latest version of AIDA64 has been updated to work with the latest versions of the Windows Desktop and Server-based OSes, Windows 8.1 Update 1 and Windows Server 2012 R2 Update 1. Further, FinalWire integrated support for AMD's Mantle technology as well as support for Advanced Vector Extensions 2 (AVX2), Fused Multiply-Add (FMA) instructions, and AES-NI hardware acceleration integrated into the upcoming Intel Broadwell-based processor series.
New features include:
- Microsoft Windows 8.1 Update 1 and Windows Server 2012 R2 Update 1 support
- OpenCL GPGPU SHA-1 hash benchmark
- CUDA 6.0 support
- Socket AM1 motherboards support
- Improved support for Intel “Broadwell” CPU
- Preliminary support for AMD “Carrizo” and “Toronto” APUs
- Preliminary support for Intel “Skylake”, “Cherry Trail”, “Denverton” CPUs
- Crucial M550 and Intel 730 SSD support
- GPU details for AMD Radeon R7 265
- GPU details for nVIDIA GeForce GTX 745, GeForce 800 Series
Software updates new to this release (since AIDA64 v4.00):
- OpenCL GPGPU Benchmark Suite
- AMD Mantle graphics accelerator diagnostics
- Multi-threaded memory stress test with SSE, SSE2, AVX, AVX2, FMA, BMI and BMI2 acceleration
- Optimized 64-bit benchmarks for AMD “Kaveri”, “Bald Eagle”, “Mullins”, “Beema” APUs
- Optimized 64-bit benchmarks for Intel Atom C2000 “Avoton” and “Rangeley” SoC
- Optimized 64-bit benchmarks for Intel “Bay Trail” desktop, mobile and tablet SoC
- Full support for the upcoming Intel “Haswell Refresh” platform with Intel “Wildcat Point” PCH
- Razer SwitchBlade LCD support
- Preliminary support for Intel Quark X1000 “Clanton” SoC
- Improved support for OpenCL 2.0
- Support for VirtualBox v4.3 and VMware Workstation v10
- OCZ Vector 150, OCZ Vertex 460, Samsung XP941 SSD support
- GPU details for AMD Radeon R5, R7, R9 Series
- GPU details for nVIDIA GeForce 700 Series
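For context on what the new OpenCL GPGPU SHA-1 hash benchmark measures, here is a rough CPU-side analogue using Python's hashlib: hash a fixed buffer repeatedly and report throughput. AIDA64 runs this kind of repeated hashing on the GPU via OpenCL; the buffer size and loop count below are arbitrary choices for illustration, not AIDA64's actual parameters.

```python
import hashlib
import time

def sha1_throughput(block_size=1 << 20, iterations=64):
    """Hash `iterations` blocks of `block_size` bytes and return MB/s."""
    data = b"\x00" * block_size
    start = time.perf_counter()
    for _ in range(iterations):
        hashlib.sha1(data).digest()
    elapsed = time.perf_counter() - start
    return (block_size * iterations) / elapsed / 1e6  # MB/s

print(f"SHA-1 throughput: {sha1_throughput():.0f} MB/s")
```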
Subject: General Tech | March 26, 2014 - 08:49 PM | Tim Verry
Tagged: remote graphics, nvidia, GTC 2014, gpgpu, emerging companies summit, ecs 2014, cloud computing
NVIDIA started the Emerging Companies Summit six years ago, and since then the event has grown in size and scope to identify and support those technology companies that leverage (or plan to leverage) GPGPU computing to deliver innovative products. The ECS continues to be a platform for new startups to showcase their work at the annual GPU Technology Conference. NVIDIA provides support in the form of legal, developmental, and co-marketing assistance to the companies featured at ECS.
There was an interesting twist this year though in the form of the Early Start Challenge. This is a new aspect of ECS in addition to the ‘One to Watch’ award. I attended the Emerging Companies Summit again this year and managed to snag some photos and participate in the Early Start Challenge (disclosure: I voted for Audiostream TV).
The 12 Early Start Challenge contestants take the stage at once to await the vote tally.
During the challenge, 12 selected startup companies were each given eight minutes on stage to pitch their company and why their innovations were deserving of the $100,000 grand prize. The on stage time was divided into a four minute presentation and a four minute Q&A session with the panel of judges (this year the audience was not part of the Q&A session at ECS unlike last year due to time constraints).
After all 12 companies had their chance on stage, the panel of judges and the audience submitted their votes for the most innovative startup. The panel of judges included:
- Scott Budman Business & Technology Reporter, NBC
- Jeff Herbst Vice President of Business Development, NVIDIA
- Jens Horstmann Executive Producer & Managing Partner, Crestlight Venture Productions
- Pat Moorhead President & Principal Analyst, Moor Insights & Strategy
- Bill Reichert Managing Director, Garage Technology Ventures
The companies participating in the challenge include Okam Studio, MyCloud3D, Global Valuation, Brytlyt, Clarifai, Aerys, oMobio, ShiVa Technologies, IGI Technologies, Map-D, Scalable Graphics, and AudioStream TV. The companies are involved in machine learning, deep neural networks, computer vision, remote graphics, real time visualization, gaming, and big data analytics.
After all the votes were tallied, Map-D was revealed to be the winner and received a check for $100,000 from NVIDIA Vice President of Business Development Jeff Herbst.
Jeff Herbst awarding Map-D's CEO with the Early Start Challenge grand prize check. From left to right: Scott Budman, Jeff Herbst, and Thomas Graham.
Map-D is a company that specializes in a scalable in-memory GPU database that promises millisecond queries directly from GPU memory (with GPU memory bandwidth being the bottleneck) and very fast database inserts. The company is working with Facebook and PayPal to analyze data. In the case of Facebook, Map-D is being used to analyze status updates in real time to identify malicious behavior. The software can be scaled across eight NVIDIA Tesla cards to analyze a billion Twitter tweets in real time.
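As a rough illustration of the kind of operation Map-D accelerates, here is a toy columnar scan in plain Python (the column names and predicate are hypothetical, not Map-D's schema): an in-memory GPU database lays each column out contiguously and runs this filter across thousands of cores at memory-bandwidth speed, rather than in a CPU interpreter loop.

```python
from array import array

# Toy columnar store: each column is a contiguous typed array,
# mirroring how an in-memory database lays out data for fast scans.
timestamps = array("d", [1.0, 2.0, 3.0, 4.0, 5.0])
scores     = array("d", [0.1, 0.9, 0.95, 0.2, 0.99])

def scan_suspicious(threshold):
    """Return row indices whose score exceeds threshold --
    the sequential-scan pattern a GPU runs massively in parallel."""
    return [i for i, s in enumerate(scores) if s > threshold]

print(scan_suspicious(0.9))  # rows flagged: [2, 4]
```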
It is specialized software, but extremely useful within its niche. Hopefully the company puts the prize money to good use in furthering its GPGPU endeavors. Although there was only a single grand prize winner, I found all the presentations interesting and look forward to seeing where they go from here.
Subject: General Tech, Graphics Cards | March 26, 2014 - 05:43 PM | Scott Michaud
Tagged: amd, firepro, W9100
The AMD FirePro W9100 has been announced, bringing the Hawaii architecture to non-gaming markets. First seen in the Radeon R9 series of graphics cards, it has the capacity for 5 TeraFLOPs of single-precision (32-bit) performance and 2 TeraFLOPs of double-precision (64-bit). The card also has 16GB of GDDR5 memory to support it. From the raw numbers, this is slightly more capacity than either the Titan Black or Quadro K6000 in all categories. It will also support six 4K monitors (or three at 60Hz), per card. AMD supports up to four W9100 cards in a single system.
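Those peak figures follow from simple arithmetic. Assuming the W9100 uses the full Hawaii configuration from the R9 290X (2816 stream processors) and taking a ~930 MHz engine clock as a placeholder (AMD did not quote a clock here), peak single precision is cores × clock × 2, since one fused multiply-add counts as two FLOPs, with double precision at half that rate on this part; the numbers above appear to be rounded down from these.

```python
stream_processors = 2816   # assumption: full Hawaii, same shader count as R9 290X
clock_ghz = 0.930          # placeholder engine clock; not confirmed by AMD here

sp_tflops = stream_processors * clock_ghz * 2 / 1000  # FMA = 2 FLOPs per clock per core
dp_tflops = sp_tflops / 2                             # FirePro Hawaii runs DP at half rate

print(f"single precision: {sp_tflops:.2f} TFLOPS")    # ~5.24
print(f"double precision: {dp_tflops:.2f} TFLOPS")    # ~2.62
```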
Professional users can be looking for several things in their graphics cards: compute performance (either directly or through licensed software such as Photoshop, Premiere, Blender, Maya, and so forth), several high-resolution monitors (or digital signage units), and/or a lot of graphics performance. The W9100 is basically the top of the stack which covers all three of these requirements.
AMD also announced a system branding initiative called "AMD FirePro Ultra Workstation". They currently have five launch partners, Supermicro, Boxx, Tarox, Silverdraft, and Versatile Distribution Services, which will have workstations available under this program. The list of components for a "Recommended" certification is: two eight-core 2.6 GHz CPUs, 32GB of RAM, four PCIe 3.0 x16 slots, a 1500W Platinum PSU, and a case with nine expansion slots (to allow four W9100 GPUs along with one SSD or SDI interface card).
Also, while the company has heavily discussed OpenCL in their slide deck, they have not mentioned specific versions. As such, I will assume that the FirePro W9100 supports OpenCL 1.2, like the R9-series, and not OpenCL 2.0 which was ratified back in November. This is still a higher conformance level than NVIDIA, which is at OpenCL 1.1.
Currently no word about pricing or availability.