Subject: General Tech, Storage | April 2, 2014 - 02:27 AM | Scott Michaud
Tagged: Seagate, NAS
Seagate released a network-attached storage (NAS) device intended for businesses with "up to 50 employees": the Seagate Business 4-Bay 16TB NAS. Dominic Sharoo of NitroWare reviewed one and, naturally, gave his opinion in the process. In short, while he liked the connectivity options, he shies away from a recommendation without a price cut and a firmware update (its built-in software is not compatible with Windows 8).
As for what it did well, he was pleased by its relatively compact chassis, USB 3.0 support, and the inclusion of dual gigabit Ethernet LAN ports. It is configurable in RAID 0, 1, 5, 10, or "JBOD" (just a bunch of disks). He also liked that, in his testing, the unit did not seem to require drives from a specific vendor. If you buy the unit already loaded with drives, they come formatted in RAID 5; for a four-bay NAS, that seems like a good default. It also uses a standard laptop power supply, which should make finding a replacement (or a spare) easy.
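That RAID 5 default trades one drive's worth of capacity for single-drive fault tolerance. A quick sketch of the usable space under each supported mode (assuming four 4TB drives fill out the 16TB of raw capacity, which the review does not spell out):

```python
# Rough usable capacity per RAID mode for a 4-bay, 16TB NAS
# (assumes four 4TB drives; the actual drive mix is not stated in the review).
drives, size_tb = 4, 4

jbod   = drives * size_tb        # 16 TB: drives concatenated, no redundancy
raid0  = drives * size_tb        # 16 TB: striped, no redundancy
raid10 = drives * size_tb // 2   # 8 TB: striped mirrors, half the raw space
raid5  = (drives - 1) * size_tb  # 12 TB: one drive's worth of space holds parity

print(f"RAID 5 default: {raid5} TB usable of {jbod} TB raw")
```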
While the device is a mixed bag, check out his review if you are interested.
Subject: General Tech, Graphics Cards | April 1, 2014 - 04:42 PM | Tim Verry
Tagged: VCA, nvidia, GTC 2014
NVIDIA launched a new visual computing appliance, the Iray VCA, at the GPU Technology Conference last week. This new piece of enterprise hardware uses full GK110 graphics cards to accelerate the company’s Iray renderer, which is used to create photorealistic models in various design programs.
The Iray VCA is a licensed appliance that combines NVIDIA hardware and software. On the hardware side of things, the Iray VCA is powered by eight graphics cards, dual processors (unspecified, but likely Intel Xeons based on their usage in last year’s GRID VCA), 256GB of system RAM, and a 2TB SSD. Networking hardware includes two 10GbE NICs, two 1GbE NICs, and one InfiniBand connection. In total, the Iray VCA features 20 CPU cores and 23,040 CUDA cores. The GPUs are based on the full GK110 die and are paired with 12GB of memory each.
Even better, it is a scalable solution: companies can add additional Iray VCAs to the network, and the appliances reportedly accelerate the Iray renders done on designers’ workstations transparently. NVIDIA reports that an Iray VCA is approximately 60 times faster than a Quadro K5000-powered workstation. Further, according to NVIDIA, 19 Iray VCAs working together amount to 1 PetaFLOP of compute performance, which is enough to render photorealistic simulations using 1 billion rays with up to hundreds of thousands of bounces.
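Those scaling claims can be sanity-checked with a little arithmetic, using only the figures NVIDIA quoted (a sketch, not official per-board specs):

```python
# 19 Iray VCAs ~= 1 PetaFLOP, per NVIDIA's GTC keynote figures
cluster_flops = 1e15
vcas = 19
gpus_per_vca = 8

per_vca = cluster_flops / vcas    # ~52.6 TFLOPS per appliance
per_gpu = per_vca / gpus_per_vca  # ~6.6 TFLOPS per GK110 board

print(f"~{per_vca / 1e12:.1f} TFLOPS per VCA, ~{per_gpu / 1e12:.1f} TFLOPS per GPU")
```

Roughly 6.6 single-precision TFLOPS per board is in the right neighborhood for a fully enabled GK110, so the 1 PetaFLOP figure holds together.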
The Iray VCA enables some rather impressive real-time renders of 3D models with realistic physical properties and lighting. The models are light simulations that use ray tracing, global illumination, and other techniques to show photorealistic models using up to billions of rays of light. NVIDIA is positioning the Iray VCA as an alternative to physical prototyping, allowing designers to put together virtual prototypes that can be iterated and changed at significantly lower cost and in less time.
Iray itself is NVIDIA’s GPU-accelerated photorealistic renderer, and the technology is used in a number of design software packages. The Iray VCA is meant to further accelerate that renderer by throwing massive amounts of parallel processing hardware at the resource-intensive problem over the network (the Iray VCAs can be installed at a data center or kept on site). Initially, the Iray VCA will support 3ds Max, Catia, Bunkspeed, and Maya, but NVIDIA is working on supporting all Iray-accelerated software with the VCA hardware.
The virtual prototypes can be sliced and examined, and can even be placed in real-world environments by importing HDR photos. Jen-Hsun Huang demonstrated this by (virtually) placing Honda’s vehicle model on the GTC stage.
In fact, one of NVIDIA’s initial partners with the Iray VCA is Honda. Honda is currently beta testing a cluster of 25 Iray VCAs to refine styling designs for cars and their interiors based on initial artistic work. Honda Research and Development System Engineer Daisuke Ide was quoted by NVIDIA as stating that “Our TOPS tool, which uses NVIDIA Iray on our NVIDIA GPU cluster, enables us to evaluate our original design data as if it were real. This allows us to explore more designs so we can create better designs faster and more affordably.”
The Iray VCA will be available this summer for $50,000. The sticker price includes the hardware, an Iray license, and the first year of updates and maintenance. This is far from consumer technology, but it is interesting technology that may be used in the design process of your next car or other major purchase.
What do you think about the Iray VCA and NVIDIA's licensed hardware model?
Subject: General Tech | April 1, 2014 - 03:17 PM | Jeremy Hellstrom
Nothing beats speculating on a product that hasn't been released yet; often it ends up being more fun than the release itself. Currently, DX12 is providing great fodder for enthusiasts, especially when the comparison to Mantle is broached in conversation. The Tech Report is looking to pass some ammunition on to online prognosticators by fleshing out the debate with some history and a review of what was announced and what has been stated since. One of their biggest secondary sources is Matt Sandy's blog; as a DirectX developer, he is a knowledgeable source on the new API, insofar as he is allowed to speak about it. Check out the three-page post here for a good resource on what we know so far.
"We already covered the basics of DirectX 12 amid the GDC frenzy. Now that we've had time to study our notes from the show, we can delve into a little more detail about the new API's inception, the key ways in which it differs from DirectX 11, and what AMD and Nvidia think about it."
Here is some more Tech News from around the web:
- Nvidia expects better 2014, says company product marketing VP @ DigiTimes
- Intel Upgrades MinnowBoard: Baytrail CPU, Nearly Halves Price To $99 @ Slashdot
- Sticky Tahr-fy pudding: Ubuntu 14.04 slickest Linux desktop ever @ The Register
- How to Create and Manage Btrfs Snapshots and Rollbacks on Linux (part 2) @ Linux.com
- VMware reveals more VSAN nodes @ The Register
- Bitcoin Malware Infects Apple iAd @ TechARP
- Rollei Sunglasses Cam 100 @ NikKTech
Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM | Scott Michaud
Tagged: gdc 14, GDC, GCN, amd
While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors contain up to thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for their hardware. If this machine code is efficient, it could mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.
Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.
AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars. Tricks that attempt to combine instructions together into vectors, such as using dot products, can simply place irrelevant restrictions on the compiler and optimizer, because the hardware breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.
Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It covers the execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
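To illustrate the vector-to-scalar point (a plain-code sketch, not material from the talk): what reads as a single dot product in shader source is issued on GCN as a scalar multiply followed by multiply-accumulates, one component at a time, which is exactly where those fused multiply-add units earn their keep.

```python
def dot3(a, b):
    # One "vector" dot product, decomposed the way GCN actually executes it:
    acc = a[0] * b[0]    # scalar multiply (v_mul_f32)
    acc += a[1] * b[1]   # multiply-accumulate (v_mac_f32)
    acc += a[2] * b[2]   # multiply-accumulate (v_mac_f32)
    return acc

print(dot3((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))  # 32.0
```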
I know I learned.
As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs. AMD, it might help). But honestly, I believe that trends like this presentation will prove more significant... even if behind the scenes. Of course, developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.
This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.
Subject: General Tech | March 28, 2014 - 07:28 PM | Jeremy Hellstrom
Tagged: audio, Creative, Sound Blaster AXX 200, wireless, speaker, microphone, battery charger
The Creative Sound Blaster AXX 200 is more than just a wireless speaker for your PC or smartphone; it is also a voice recorder, a hands-free microphone for your smartphone, and a battery charger. The Bluetooth speaker function can be set to stereo or 7.1-channel surround and will accept a signal from up to 10' away. The microphone feature has a similar range and can capture audio in a 360-degree area; [H]ard|OCP were even able to make a hands-free call using only the AXX 200. The USB plugs turn it into a charging station as well, handy considering how integrated it is with your phone.
"With its unusual vertical, compact design, Creative's new flagship stereo speaker system features touch controls and a multitude of wired and wireless connectivity options for your mobile phone, tablet, Mac, and PC. Today, we will tell you if there is enough room in the "mix" for great sound as well."
Here is some more Tech News from around the web:
- Luxa2 GroovyW Wireless Speaker and Qi Charger Review @ HiTech Legion
- Wavemaster Stax Second Generation Speakers @ Kitguru
- Tech-Life BeatBlock WET Weatherproof Bluetooth Speaker @ NikKTech
- Kingston HyperX Cloud gaming headset @ Kitguru
- TteSports Chao Dracco Captain Headset & Dracco Headphones @ eTeknix
- Cambridge Audio DacMagic XS @ techPowerUp
- Steelseries H-Wireless Gaming Headset @ Funky Kit
- SteelSeries H Wireless Gaming Headset @ NikKTech
- Tt eSPORTS Level 10 M Gaming Headset (Iron White) Review @ Madshrimps
- ROCCAT Kave XTD 5.1 Surround Sound Gaming Headphones Review @ Techgage
- BitFenix FLO Headset Review @ HiTech Legion
- CM Storm Pulse-R Aluminum Gaming Headset @ Funky Kit
- Turtle Beach Ear Force i60 Wireless Headset Review @ Legit Reviews
- Tritton Kama PlayStation 4 / Vita Gaming Headset @ eTeknix
Subject: General Tech, Displays | March 28, 2014 - 04:21 PM | Scott Michaud
Tagged: VR, valve, Oculus, facebook
Today, Oculus VR issued a statement claiming that Michael Abrash has joined their ranks as Chief Scientist. Abrash was hired by Valve in 2011, where he led, and apparently came up with the idea for, their wearable computing initiatives. For a time, he and Jeri Ellsworth were conducting similar projects until she, and many others, were forced out of the company for undisclosed reasons (she was allowed to take her project with her, which ultimately became CastAR). While I have yet to see an official announcement that Abrash has left Valve, I have serious doubts that he would be employed in both places for any reasonable period of time. With both gone, I wonder about Valve's wearable initiative going forward.
Abrash at Steam Dev Days
This press statement comes just three days after Facebook announced "definitive" plans to acquire Oculus VR for an equivalent of $2 billion USD (twice what Facebook paid for Instagram). Apparently, the financial stability of Facebook (... deep breath before continuing...) was the catalyst for this decision. VR research is expensive. Abrash is now comfortable working with them, gleefully expending R&D funds, advancing the project without sinking the ship.
And then there's Valve.
On last night's This Week in Computer Hardware (#260), Patrick Norton and I were discussing the Oculus VR acquisition. He claimed that he had serious doubts about whether Valve ever intended to ship a product. So far, the only product available that uses Valve's research is the Oculus Rift DK2. Honestly, while I have not really thought about it until now, it would not be surprising for Valve to contribute to the PC platform itself.
And, hey, at least someone is not afraid of Facebook's ownership.
Subject: General Tech | March 28, 2014 - 02:48 PM | Jeremy Hellstrom
Tagged: Samsung, galaxy s5
Some lucky Aussies at The Register sweet talked their way into a Samsung Galaxy S5 and have put together a brief preview for your reading pleasure. There are many new features you will someday be able to use, even if El Reg couldn't quite test them yet. There is a battery saving mode which should help road warriors and a fingerprint sensor which is touted to work with NFC to turn your S5 into a replacement for your credit cards so you don't have to carry them with you. There is more to see in the article, including the Galaxy Gear Neo smartwatch.
"This time around Samsung is keen on its battery-saving mode, IP67 rating and, once again, fitness features. Samsung Australia personnel swore blind all of those features were designed for an “Aussie lifestyle”. Because down here we all go to the beach every day, a supposition only slightly less believable than the notion that an S5 design meeting considered how to optimise sales in a nation of 23 million."
Here is some more Tech News from around the web:
- Trying Out & Benchmarking The DigitalOcean Cloud @ Phoronix
- HDD vendors promoting ultra-slim models @ DigiTimes
- Microsoft CEO Nadella launches Office for iPad, now live in the Apple App Store @ The Inquirer
- BlackBerry 10 given top-level clearance by Department of Defense @ The Inquirer
- Microsoft aims at global shipments of 25 million Windows tablets in 2014, say Taiwan makers @ DigiTimes
- PAPAGO! P2 Pro Dashcam @ eTeknix
- PAPAGO! P3 Dashcam @ Benchmark Reviews
- Terminator-maker 'Cyberdyne Inc' lists on Tokyo stock exchange @ The Register
Subject: General Tech | March 27, 2014 - 02:42 PM | Ken Addison
Tagged: W9100, video, titan z, poseidon 780, podcast, Oculus, nvidia, GTC, GDC
PC Perspective Podcast #293 - 03/27/2014
Join us this week as we discuss the NVIDIA Titan-Z, ASUS ROG Poseidon 780, News from OculusVR and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Week in Review:
0:37:07 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
News items of interest:
Hardware/Software Picks of the Week:
Josh: Certainly not a Skype Connection to the Studio
Allyn: Continuous ink conversions
Subject: General Tech | March 27, 2014 - 01:10 PM | Jeremy Hellstrom
Tagged: pascal, nvlink, nvidia, maxwell, jen-hsun huang, GTC
Before we get to see Volta in action, NVIDIA is taking a half step and releasing the Pascal architecture, which will use Maxwell-like Streaming Multiprocessors and will introduce stacked, or 3D, memory residing on the same substrate as the GPU. Jen-Hsun claimed this new type of memory will vastly increase the available bandwidth, provide two and a half times the capacity, and be four times as energy efficient, all at the same time. Alongside the 3D memory announcement came the reveal of NVLink, an alternative interconnect which he claims will offer 5-12 times the bandwidth of PCIe and will be utilized by HPC systems. He announced that NVLink will feature eight 20Gbps lanes per block - or "brick", as NVIDIA is calling them - from which The Tech Report made a quick calculation and arrived at an aggregate bandwidth per brick of around 20GB/s. Read on to see what else was revealed.
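The Tech Report's figure falls straight out of the numbers NVIDIA announced:

```python
# One NVLink "brick": eight lanes at 20 Gbps each, per the GTC announcement
lanes = 8
lane_rate_gbps = 20

brick_gbps = lanes * lane_rate_gbps  # 160 Gbps per brick
brick_GBps = brick_gbps / 8          # 8 bits per byte -> 20 GB/s per brick

print(f"{brick_GBps:.0f} GB/s aggregate per brick")
```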
"Today during his opening keynote at the Nvidia GPU Technology Conference, CEO Jen-Hsun Huang offered an update to Nvidia's GPU roadmap. The big reveal was about a GPU code-named Pascal, which will be a generation beyond the still-being-introduced Maxwell architecture in the firm's plans."
Here is some more Tech News from around the web:
- Nvidia, VMware join to pipe high-quality 3D graphics from the cloud @ The Register
- Android has 97 Percent of Mobile Malware, But Nearly None in the U.S. @ DailyTech
- Amazon HALVES cloud storage prices after Google's shock slash @ The Register
- Bitcoin mining malware hits Android @ The Inquirer
- Facebook Oculus VR buy causes rift with developers and tech fans @ The Inquirer
- iSAW EXtreme Action Camera @ Kitguru
- Netgear VueZone VZSX2800 Wireless Surveillance Camera Kit @ eTeknix
Subject: General Tech | March 27, 2014 - 12:11 AM | Morry Teitelman
Tagged: hashing benchmarks, GPGPU performance, FinalWire, aida64
Courtesy of FinalWire
Today, FinalWire Ltd. announced the release of version 4.30 of their diagnostic and benchmarking tool, AIDA64. This new version updates their Extreme Edition and Business Edition of the software.
The latest version of AIDA64 has been updated to work with the latest versions of the Windows desktop and server OSes, Windows 8.1 Update 1 and Windows Server 2012 R2 Update 1. Further, FinalWire integrated support for AMD's Mantle technology, as well as for the Advanced Vector Extensions 2 (AVX2), Fused Multiply-Add (FMA) instructions, and AES-NI hardware acceleration integrated into the upcoming Intel Broadwell-based processor series.
New features include:
- Microsoft Windows 8.1 Update 1 and Windows Server 2012 R2 Update 1 support
- OpenCL GPGPU SHA-1 hash benchmark
- CUDA 6.0 support
- Socket AM1 motherboards support
- Improved support for Intel “Broadwell” CPU
- Preliminary support for AMD “Carrizo” and “Toronto” APUs
- Preliminary support for Intel “Skylake”, “Cherry Trail”, “Denverton” CPUs
- Crucial M550 and Intel 730 SSD support
- GPU details for AMD Radeon R7 265
- GPU details for nVIDIA GeForce GTX 745, GeForce 800 Series
Software updates new to this release (since AIDA64 v4.00):
- OpenCL GPGPU Benchmark Suite
- AMD Mantle graphics accelerator diagnostics
- Multi-threaded memory stress test with SSE, SSE2, AVX, AVX2, FMA, BMI and BMI2 acceleration
- Optimized 64-bit benchmarks for AMD “Kaveri”, “Bald Eagle”, “Mullins”, “Beema” APUs
- Optimized 64-bit benchmarks for Intel Atom C2000 “Avoton” and “Rangeley” SoC
- Optimized 64-bit benchmarks for Intel “Bay Trail” desktop, mobile and tablet SoC
- Full support for the upcoming Intel “Haswell Refresh” platform with Intel “Wildcat Point” PCH
- Razer SwitchBlade LCD support
- Preliminary support for Intel Quark X1000 “Clanton” SoC
- Improved support for OpenCL 2.0
- Support for VirtualBox v4.3 and VMware Workstation v10
- OCZ Vector 150, OCZ Vertex 460, Samsung XP941 SSD support
- GPU details for AMD Radeon R5, R7, R9 Series
- GPU details for nVIDIA GeForce 700 Series
Subject: General Tech | March 26, 2014 - 08:49 PM | Tim Verry
Tagged: remote graphics, nvidia, GTC 2014, gpgpu, emerging companies summit, ecs 2014, cloud computing
NVIDIA started the Emerging Companies Summit six years ago, and since then the event has grown in size and scope to identify and support technology companies that leverage (or plan to leverage) GPGPU computing to deliver innovative products. The ECS continues to be a platform for new startups to showcase their work at the annual GPU Technology Conference, and NVIDIA provides legal, development, and co-marketing support to the companies featured at ECS.
There was an interesting twist this year, though, in the form of the Early Start Challenge, a new aspect of ECS in addition to the ‘One to Watch’ award. I attended the Emerging Companies Summit again this year and managed to snag some photos and participate in the Early Start Challenge (disclosure: I voted for Audiostream TV).
The 12 Early Start Challenge contestants take the stage at once to await the vote tally.
During the challenge, 12 selected startup companies were each given eight minutes on stage to pitch their company and explain why their innovations were deserving of the $100,000 grand prize. The on-stage time was divided into a four-minute presentation and a four-minute Q&A session with the panel of judges (unlike last year, the audience was not part of the Q&A session at this year's ECS due to time constraints).
After all 12 companies had their chance on stage, the panel of judges and the audience submitted their votes for the most innovative startup. The panel of judges included:
- Scott Budman Business & Technology Reporter, NBC
- Jeff Herbst Vice President of Business Development, NVIDIA
- Jens Hortsmann Executive Producer & Managing Partner, Crestlight Venture Productions
- Pat Moorhead President & Principal Analyst, Moor Insights & Strategy
- Bill Reichert Managing Director, Garage Technology Ventures
The companies participating in the challenge include Okam Studio, MyCloud3D, Global Valuation, Brytlyt, Clarifai, Aerys, oMobio, ShiVa Technologies, IGI Technologies, Map-D, Scalable Graphics, and AudioStream TV. The companies are involved in machine learning, deep neural networks, computer vision, remote graphics, real time visualization, gaming, and big data analytics.
After all the votes were tallied, Map-D was revealed to be the winner and received a check for $100,000 from NVIDIA Vice President of Business Development Jeff Herbst.
Jeff Herbst awarding Map-D's CEO with the Early Start Challenge grand prize check. From left to right: Scott Budman, Jeff Herbst, and Thomas Graham.
Map-D is a company that specializes in a scalable in-memory GPU database that promises millisecond queries directly from GPU memory (with GPU memory bandwidth being the bottleneck) and very fast database inserts. The company is working with Facebook and PayPal to analyze data. In the case of Facebook, Map-D is being used to analyze status updates in real time to identify malicious behavior. The software can be scaled across eight NVIDIA Tesla cards to analyze a billion tweets in real time.
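Some back-of-the-envelope arithmetic shows why keeping the data resident in GPU memory makes millisecond-class queries plausible (the per-card bandwidth and record size below are my own ballpark assumptions for illustration, not Map-D's figures):

```python
# Hypothetical figures: how fast can 8 GPUs sweep a billion small records
# if the scan is purely memory-bandwidth-bound?
cards = 8
bw_per_card = 250e9   # bytes/s, ballpark for a Tesla-class card (assumed)
records = 1e9
bytes_per_record = 64  # assumed average record size in GPU memory

working_set = records * bytes_per_record             # 64 GB, split across cards
scan_ms = working_set / (cards * bw_per_card) * 1e3  # bandwidth-bound full scan

print(f"~{scan_ms:.0f} ms to sweep a billion records")
```

Tens of milliseconds for a brute-force sweep of the entire dataset is a regime no disk-backed system can touch, which is the whole pitch.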
It is specialized software, but extremely useful within its niche. Hopefully the company puts the prize money to good use in furthering its GPGPU endeavors. Although there was only a single grand prize winner, I found all the presentations interesting and look forward to seeing where they go from here.
Subject: General Tech, Graphics Cards | March 26, 2014 - 05:43 PM | Scott Michaud
Tagged: amd, firepro, W9100
The AMD FirePro W9100 has been announced, bringing the Hawaii architecture to non-gaming markets. First seen in the Radeon R9 series of graphics cards, it has the capacity for 5 TeraFLOPS of single-precision (32-bit) performance and 2 TeraFLOPS of double-precision (64-bit). The card also has 16GB of GDDR5 memory to support it. From the raw numbers, this is slightly more capacity than either the Titan Black or Quadro K6000 in all categories. It will also support six 4K monitors (or three at 60Hz) per card, and AMD supports up to four W9100 cards in a single system.
Professional users can be looking for several things in their graphics cards: compute performance (either directly or through licensed software such as Photoshop, Premiere, Blender, Maya, and so forth), several high-resolution monitors (or digital signage units), and/or a lot of graphics performance. The W9100 is basically the top of the stack which covers all three of these requirements.
AMD also announced a system branding initiative called "AMD FirePro Ultra Workstation". They currently have five launch partners - Supermicro, Boxx, Tarox, Silverdraft, and Versatile Distribution Services - which will have workstations available under this program. The list of components for a "Recommend" certification is: two eight-core 2.6 GHz CPUs, 32GB of RAM, four PCIe 3.0 x16 slots, a 1500W Platinum PSU, and a case with nine expansion slots (to allow four W9100 GPUs along with one SSD or SDI interface card).
Also, while the company has heavily discussed OpenCL in their slide deck, they have not mentioned specific versions. As such, I will assume that the FirePro W9100 supports OpenCL 1.2, like the R9-series, and not OpenCL 2.0 which was ratified back in November. This is still a higher conformance level than NVIDIA, which is at OpenCL 1.1.
Currently no word about pricing or availability.
Subject: General Tech | March 26, 2014 - 02:48 PM | Jeremy Hellstrom
Tagged: oculus rift, Kickstarter, john carmack, facebook
You've heard by now that Facebook has purchased Oculus, and you likely have an opinion on the matter. The sale raises quite a few issues for the technologically inclined. Kickstarter backers face the question of the propriety of Vulture Capitalists benefiting monetarily from a project which began, in part, because of their donations on Kickstarter (which did still net them a device). Others hoped that Oculus would remain a project designed and led by Palmer Luckey, involving John Carmack, with little oversight or pressure from a company that wants an immediate return on its investment. And for some, the simple involvement of Facebook is enough to sour the entire deal, regardless of any other factors.
KitGuru offers some possible benefits that could come of this deal: Facebook cannot afford to slow development, as competitors such as castAR will soon arrive, nor can they really push Carmack around without risking his involvement. Before you start screaming, take a moment to think about everything this deal involves and then express your opinion ... after all, you don't get reality much more virtual than Facebook.
"I know guys. I know. I’m mad too. I’m sad, disappointed, even betrayed, but these are all things I’m feeling and I bet you are too. We’re having an emotional reaction to two companies worth multiple billions of dollars doing a business deal and though I can’t help but wish it hadn’t happened, I know that if I look at it logically, it makes sense for everyone."
Here is some more Tech News from around the web:
- Nvidia takes on Raspberry Pi with the Jetson TK1 mini supercomputer @ The Inquirer
- GNOME 3.12 Seeded by GNOME OS Projects @ Linux.com
- Meet Microsoft's latest Windows Server reseller – come on down, Google @ The Register
- SSD penetration rate bound to rise in 2014 @ DigiTimes
- Rosewill RGS-108P POE Gigabit Network Switch @ Modders-Inc
- Windows 8 BREAKS ITSELF after system restores @ The Register
Subject: General Tech, Mobile | March 25, 2014 - 09:34 PM | Tim Verry
Tagged: GTC 2014, tegra k1, nvidia, CUDA, kepler, jetson tk1, development
NVIDIA recently unified its desktop and mobile GPU lineups by moving to a Kepler-based GPU in its latest Tegra K1 mobile SoC. The move to the Kepler architecture has simplified development and enabled the CUDA programming model to run on mobile devices. One of the main points of the opening keynote earlier today was ‘CUDA everywhere,’ and NVIDIA has officially accomplished that goal by having CUDA compatible hardware from servers to desktops to tablets and embedded devices.
Speaking of embedded devices, NVIDIA showed off a new development board called the Jetson TK1. This tiny new board features a NVIDIA Tegra K1 SoC at its heart along with 2GB RAM and 16GB eMMC storage. The Jetson TK1 supports a plethora of IO options including an internal expansion port (GPIO compatible), SATA, one half-mini PCI-e slot, serial, USB 3.0, micro USB, Gigabit Ethernet, analog audio, and HDMI video outputs.
Of course, the Tegra K1 pairs a quad-core (4+1) ARM CPU with a Kepler-based GPU sporting 192 CUDA cores. The SoC is rated at 326 GFLOPS, which enables some interesting compute workloads, including machine vision.
In fact, Audi has been utilizing the Jetson TK1 development board to power its self-driving prototype car (more on that soon). Other intended uses for the new development board include robotics, medical devices, security systems, and perhaps low-power compute clusters (such as an improved Pedraforca system). It can also be used as a simple desktop platform for testing and developing mobile applications for other Tegra K1-powered devices, of course.
Beyond the hardware, the Jetson TK1 comes with the CUDA toolkit, OpenGL 4.4 driver, and NVIDIA VisionWorks SDK which includes programming libraries and sample code for getting machine vision applications running on the Tegra K1 SoC.
The Jetson TK1 is available for pre-order now at $192 and is slated to begin shipping in April. Interested developers can find more information on the NVIDIA developer website.
Subject: General Tech | March 25, 2014 - 05:46 PM | Tim Verry
Tagged: gtx titan z, gtx titan, GTC 2014, CUDA
During the opening keynote, NVIDIA showed off several pieces of hardware that will be available soon. On the desktop and workstation side of things, researchers (and consumers chasing the ultra high end) have the new GTX Titan Z to look forward to. This new graphics card is a dual GK110 GPU monster that offers up 8 TeraFLOPS of number crunching performance for an equally impressive $2,999 price tag.
Specifically, the GTX TITAN Z is a triple-slot graphics card that marries two full GK110 (big Kepler) GPUs for a total of 5,760 CUDA cores, 480 TMUs, and 96 ROPs with 12GB of GDDR5 memory (6GB on a 384-bit bus per GPU). NVIDIA has yet to release clock speeds, but the two GPUs will run at the same clocks thanks to a dynamic power balancing feature. For the truly adventurous, it appears possible to SLI two GTX Titan Z cards using the single SLI connector. Display outputs include two DVI, one HDMI, and one DisplayPort connector.
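Even with clock speeds unannounced, the quoted figures roughly pin them down: Kepler's fused multiply-add units retire two FLOPs per core per clock, so (a sketch using only NVIDIA's published numbers):

```python
# Back out the implied clock from NVIDIA's quoted Titan Z figures
total_flops = 8e12        # 8 TFLOPS single-precision, per NVIDIA
cuda_cores = 5760         # two full GK110 GPUs
flops_per_core_clock = 2  # one fused multiply-add counts as two FLOPs

implied_clock_mhz = total_flops / (cuda_cores * flops_per_core_clock) / 1e6
print(f"~{implied_clock_mhz:.0f} MHz implied clock")
```

Roughly 700 MHz is well below the clocks of single-GPU GK110 cards, which fits the dynamic power balancing story.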
NVIDIA is cooling the card using a single fan and two vapor chambers. Air is drawn inwards and exhausted out of the front exhaust vents.
In short, the GTX Titan Z is NVIDIA's new number crunching king and should find its way into servers and workstations running big data analytics and simulations. Personally, I'm looking forward to seeing someone slap two of them into a gaming PC and watching the screen catch on fire (not really).
What do you think about the newest dual GPU flagship?
Stay tuned to PC Perspective for further GTC 2014 coverage!
Subject: General Tech, Graphics Cards, Mobile | March 25, 2014 - 03:01 PM | Scott Michaud
Tagged: shield, nvidia
The SHIELD from NVIDIA is getting a software update which advances GameStream and TegraZone, and brings the Android OS itself up to KitKat. Personally, the GameStream enhancements seem most notable, as users can now access their home PC's gaming content outside of the home, as if it were a cloud server (but some other parts were interesting, too). Also, from now until the end of April, NVIDIA has temporarily cut the price down to $199.
Going into more detail: GameStream, now out of Beta, will stream games which are rendered on your gaming PC to your SHIELD. Typically, we have seen this through "cloud" services, such as OnLive and GaiKai, which allow access to a set of games that run on their servers (with varying license models). The fear with these services is the lack of ownership, but the advantage is that the slave device just needs enough power to decode an HD video stream.
In NVIDIA's case, the user owns both the server (their standard NVIDIA-powered gaming PC, which can now be a laptop) and the target device (the SHIELD). This technology was once limited to your own network (which definitely has its uses, especially for the SHIELD as a home theater device) but can now also be exposed over the internet. For this technology, NVIDIA recommends 5 megabit upload and download speeds, which is still a lot of upload bandwidth, even for 2014. In terms of performance, NVIDIA believes that it should live up to expectations set by their GRID servers. I do not have any experience with this, but others on the conference call took it as good news.
As for content, NVIDIA has expanded the number of supported titles to over a hundred, including new entries: Assassin's Creed IV, Batman: Arkham Origins, Battlefield 4, Call of Duty: Ghosts, Daylight, Titanfall, and Dark Souls II. They also claim that users can add other, unofficially supported applications for streaming; Halo 2: Vista was mentioned as an example. Frame rate and bitrate can now be set by the user, and a Bluetooth mouse and keyboard can be paired to the SHIELD for that input type through GameStream.
Yeah, I don't like checkbox comparisons either. It's just a summary.
A new TegraZone was also briefly mentioned; its main upgrade was apparently its library interface. There have also been a number of PC titles ported to Android recently, such as Mount and Blade: Warband.
The update is available now and the $199 promotion will last until the end of April.
Subject: General Tech | March 25, 2014 - 02:33 PM | Tim Verry
Tagged: Portal, GTC 2014, gaming, nvidia
During the opening keynote of NVIDIA's GTC 2014 conference, company CEO Jen-Hsun Huang announced that Valve had ported the ever-popular "Portal" game to the NVIDIA SHIELD handheld gaming platform.
The game appeared to run smoothly on the portable device, and is a worthy addition to the catalog of local games that can be run on the SHIELD.
Additionally, while the cake may still be a lie, portable gaming systems apparently are not, as Jen-Hsun Huang revealed that all GTC attendees would be getting a free SHIELD.
Stay tuned to PC Perspective for more information on all the opening keynote announcements and their implications for the future of computing!
Subject: General Tech | March 25, 2014 - 12:59 PM | Jeremy Hellstrom
Tagged: rtf, microsoft, outlook, word, fud
Users of Microsoft Word, from the 2003 release through the current version on PC, as well as the 2011 version for Mac, may want to avoid RTF attachments for the next while; this includes any version of Outlook or any other Microsoft application in which Word is the default text editor. There is an exploit in the wild which could allow a nefariously modified RTF file to give an attacker access to the machine on which it was opened, at the same privilege level as the user. Those who follow the advice of most Windows admins and do not log in to an administrator-level account for day-to-day work need not worry overly, but those who ignore that advice may find themselves compromised. As The Register points out, just previewing the attachment in Outlook is enough to trigger a possible infection.
"Microsoft has warned its Word software is vulnerable to a newly discovered dangerous bug – which is being exploited right now in "limited, targeted attacks" in the wild. There is no patch available at this time."
Here is some more Tech News from around the web:
- Hey, Glasshole: That cool app? It has turned you into a SPY DRONE @ The Register
- Remote ATM Attack Uses SMS To Dispense Cash @ Slashdot
- Brain structure inspires FinFET @ Nanotechweb
- Ubuntu 14.04: Intel's Haswell Linux Driver Comes Up Short Of Windows @ Phoronix
- How to Manage Btrfs Storage Pools, Subvolumes And Snapshots on Linux (part 1) @ Linux.com
- Intel desktop Haswell Refresh processors to be available in April @ DigiTimes
Subject: General Tech | March 24, 2014 - 01:22 PM | Jeremy Hellstrom
Tagged: input, ducky, Cherry MX
If you are not satisfied with a plain keyboard that doesn't stand out in a crowd, and you also care about the quality of the board, then the Ducky Shine 3 is a keyboard you should be aware of. Your choice of Cherry MX switches ensures a proper mechanical feel to your key presses, and an array of LED lights will make this keyboard stand out from across the room. As you can see from the picture, this isn't just backlit keys; a glowing snake on the space bar and lights on every key make this board rather unique. If flashy keyboards are your thing, check out Benchmark Reviews' article here.
"The Ducky Shine Series, arguably one of the best mechanical keyboards on the market, has released the Ducky Shine 3 DK9008S3. Often referred to as the YOTS or “Year of the Snake”, the 2013 Shine 3 is the offshoot descendant of the 2012 Year of the Dragon Shine 2 DK9087 (a tenkeyless version in the shine series). This model, like it’s predecessor, comes with a wide array of switch options including Cherry MX Black, Blue, Brown, and Red, and a wide array of LED color options including: Blue, Red, Green, White, Magenta, and Orange."
Here is some more Tech News from around the web:
- Razer Blackwidow Ultimate 2014 (Razer Green Switches) @ Custom PC Review
- Corsair Raptor K40 Gaming Keyboard @ Benchmark Reviews
- Ducky Shine 3 DK-9008 Tuhaojin Gold (Cherry Green switches) @ Kitguru
- Speedlink Strike FX-6 Bluetooth PS3 Gamepad @ eTeknix
- Speedlink Xeox Pro Analogue Wireless PlayStation 3 & PC Gamepad @ eTeknix
- GAMDIAS NYX Speed Gaming Mouse Pad Review @HiTech Legion
- Mionix AVIOR 7000 gaming mouse @ Kitguru
- Genius GX Gaming Gila Mouse Review @ Modders-Inc
- Mionix Avior 7000 and Naos 7000 Review - Same, But Different @ Techgage
- Corsair Raptor M45 Gaming Mouse @ Benchmark Reviews
- Steelseries Rival Gaming Mouse AND AVEXIR Blitz 1.1 Memory @ Funky Kit
- Func MS-3 R2 Gaming Mouse and 1030 R2 Gaming Surface Review @HiTech Legion
- Roccat Kone Pure Mouse @ Benchmark Reviews
Subject: General Tech | March 24, 2014 - 12:26 PM | Jeremy Hellstrom
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel
DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an odd trio of companies was also pushing a different API. OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again. Representatives from Intel, AMD, and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked implementation of OpenGL, developers could expect to see performance increases of 7 to 15 times. The Inquirer has embedded an hour-long video in their story; check it out to learn more.
"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."
Here is some more Tech News from around the web:
- The TR Podcast 152: Intel's new desktop mojo, DX12, and TR does subscriptions
- DirectX 12 will also add new features for next-gen GPUs @ The Tech Report
- Malwarebytes offers Windows XP security support before Microsoft's April deadline @ The Inquirer
- Slow SSD Transition and The Consumer Mindset – Learning to Run With Flash @ SSD Review
- AMD Is Exploring A Very Interesting, More-Open Linux Driver Strategy @ Phoronix
- AT&T and Netflix get into very public spat over net neutrality @ The Register