Subject: General Tech | March 24, 2014 - 12:26 PM | Jeremy Hellstrom
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel
DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an odd trio of companies was also pushing a different API. OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again. Representatives from Intel, AMD and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked implementation of OpenGL, developers could expect to see performance increases of between 7 and 15 times. The Inquirer has embedded an hour-long video in their story; check it out to learn more.
"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."
Here is some more Tech News from around the web:
- The TR Podcast 152: Intel's new desktop mojo, DX12, and TR does subscriptions
- DirectX 12 will also add new features for next-gen GPUs @ The Tech Report
- Malwarebytes offers Windows XP security support before Microsoft's April deadline @ The Inquirer
- Slow SSD Transition and The Consumer Mindset – Learning to Run With Flash @ SSD Review
- AMD Is Exploring A Very Interesting, More-Open Linux Driver Strategy @ Phoronix
- AT&T and Netflix get into very public spat over net neutrality @ The Register
Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM | Scott Michaud
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd
So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions (OpenGL is all about its extensions), a handful of known tricks can reduce driver overhead by up to ten-fold and increase performance by up to fifteen-fold. The talk is very graphics developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 promise.
The 130-slide presentation is broken into a few sections, with each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a given function call. The main point that they wanted to drive home (they clearly repeated the slide three times with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
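For a taste of what those tricks look like in practice, here is a minimal sketch of two of the frequently cited techniques: persistent-mapped buffers and batched indirect draws. This is my own illustration rather than code from the presentation, and it assumes a context exposing ARB_buffer_storage and ARB_multi_draw_indirect plus a loader such as GLEW:

```cpp
#include <GL/glew.h>

// Persistent mapping: allocate immutable storage that stays mapped for the
// buffer's lifetime, so per-frame updates skip the driver's map/unmap
// synchronization entirely.
void createPersistentBuffer(GLuint *buf, void **ptr, GLsizeiptr size) {
    glGenBuffers(1, buf);
    glBindBuffer(GL_ARRAY_BUFFER, *buf);
    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);
    *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    // ... write vertex data through *ptr each frame, guarded by glFenceSync ...
}

// Batched indirect draw: enter the driver once for many objects instead of
// paying validation costs on a separate glDraw* call per object.
void drawBatch(GLuint indirectBuf, GLsizei drawCount) {
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, drawCount, 0);
}
```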
The page also hosts a keynote from the recent Steam Dev Days.
That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they can be.
Either way, the performance figures do not lie.
Subject: General Tech, Displays, Shows and Expos | March 22, 2014 - 01:04 AM | Scott Michaud
Tagged: oculus rift, Oculus, gdc 14, GDC
Last month, we published a news piece stating that Oculus Rift production had been suspended because "certain components" were unavailable. At the time, the company said it was looking for alternate suppliers but did not know how long that would take. The speculation was that the company was simply readying a new version and did not want to cannibalize its sales.
This week, they announced a new version which is available for pre-order and expected to ship in July.
DK2, as it is called, integrates a pair of 960x1080 OLED displays (correction, March 22nd @ 3:15pm: It is technically a single 1080p display that is divided per eye) for higher resolution and lower persistence. Citing Valve's VR research, they claim that the low persistence will reduce motion blur as your eye blends neighboring frames together. In this design, the display flashes each image for a short period before going black, and does this at a high enough rate to keep your eye fed with light. The higher resolution also reduces the "screen door effect" complained about in the first release. Like their "Crystal Cove" prototype, it also uses an external camera to reduce latency in detecting your movement. All of these should combine to reduce motion sickness.
I would expect that VR has a long road ahead of it before it becomes a commercial product for the general population, though. There are many legitimate concerns about leaving your users trapped in a sensory deprivation apparatus when Kinect could not even go a couple of days without someone pretending to play volleyball and wrecking their TV with ceiling fan fragments. Still, this company seems to be doing it intelligently: keep afloat on developers and lead users as you work through your prototypes. It is cool, even if it will get significantly better, and people will support its research while getting the best at the time.
DK2 is available for pre-order for $350 and is expected to ship in July.
Subject: General Tech, Graphics Cards | March 22, 2014 - 12:43 AM | Scott Michaud
NVIDIA's Release 340.xx GPU drivers for Windows will be the last to contain "enhancements and optimizations" for video cards based on architectures before Fermi. While NVIDIA will provide some extended support for 340.xx (and earlier) drivers until April 1st, 2016, those cards will not be able to install Release 343.xx (or later) drivers. Release 343 will only support Fermi, Kepler, and Maxwell-based GPUs.
The company has a large table on their CustHelp website filled with product models that are pining for the fjords. In short, if the model is 400-series or higher (except the GeForce 405) then it is still fully supported. If you do have the GeForce 405, or anything 300-series and prior, then GeForce Release 340.xx drivers will be the end of the line for you.
As for speculation, Fermi was the first modern GPU architecture for NVIDIA. It transitioned to standards-based (IEEE 754, etc.) data structures, introduced L1 and L2 cache, and so forth. From our DirectX 12 live blog, we also noticed that the new graphics API will, likewise, begin support at Fermi. It feels to me that NVIDIA, like Microsoft, wants to shed the transition period and work on developing a platform built around that baseline.
Subject: General Tech | March 21, 2014 - 01:32 PM | Jeremy Hellstrom
Tagged: internet of things
Call it ubiquitous computing, the internet of things, or whichever moniker you prefer; what you are describing remains the same: the network of internet-enabled sensors, identifiers, and broadcasters which now surrounds us. For many this is not something they think about, but for the technologically inclined it is a topic of much interest, with many concerns to address, chief among them privacy. The Inquirer has been hosting discussions on this topic and the possibilities created by the massive growth of the internet of things. Check out the debate on how this will impact privacy here.
"THE INTERNET OF THINGS is a term that has been around for about 15 years, with its origins in barcodes and radio frequency identity (RFID) tags, and evolving via near-field communications (NFC) and QR codes. But it's the rise of smart devices and wearable technology, which has only started to take off in the past few years, that will see the Internet of Things come into its own."
Here is some more Tech News from around the web:
- A sysadmin always comes prepared: Grasp those essential tools @ The Register
- Don’t Just Go Sticking That Anywhere: Protect the Precious With a USB Wrapper @ Hack a Day
- Linux May Succeed Windows XP As OS of Choice For ATMs @ Slashdot
- Netgear Nighthawk R7000 AC1900 Wireless Router Review @ Legit Reviews
- Intel ready to launch 4 new desktop CPUs in Mid 2014 @ Madshrimps
- EA games web server was hosting PHISHING SITE – securobod @ The Register
- 2014 Game Developers Conference @ Benchmark Reviews
Subject: General Tech | March 20, 2014 - 04:05 PM | Ken Addison
Tagged: podcast, video, gdc14, haswell, Haswell-E, Broadwell, devil's canyon, Intel, amd, Mantle, dx12, nvidia, gtx 750ti, evga, pny, galaxy
PC Perspective Podcast #292 - 03/20/2014
Join us this week as we discuss Haswell-E, Iris Pro in Broadwell, our 750 Ti Roundup and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
We have our winners! Win a GeForce GTX 750 Ti by Showing Off Your Upgrade-Worthy Rig!
Week in Review:
0:34:44 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
News items of interest:
0:40:25 An SSD Supercomputer?
0:57:00 Busy week to be a GPU-accelerated software developer
Hardware/Software Picks of the Week:
Jeremy: Windows Mobile SSLChainSaver
Subject: General Tech | March 20, 2014 - 03:01 PM | Jeremy Hellstrom
Tagged: bitcoin, dogecoin, internet of things, cryptocurrency
Not content with ruining the hopes of gamers wanting to upgrade to a Hawaii-based AMD GPU, cryptocurrency miners are now press-ganging your smart devices into service. Everything from TVs and fridges through printers, routers, and security cameras can be infected with the Linux.Darlloz worm, which will then begin mining for its author. The worm will even block other infections and has been observed patching certain holes in routers to prevent anything else from infecting the device and slowing down the mining computations. The Inquirer does have some humorous news about this worm: there are 31,716 separate IP addresses infected, but this has managed to raise a mere $196.00 so far for the author.
"A WORM that leverages the Internet of Things to mine cryptocurrencies has been found to have infected around 31,000 devices."
Here is some more Tech News from around the web:
- MSI Shows Off New Gaming Notebooks @ Kitguru
- Exclusive Interview with Richard Huddy from Intel at GDC @ Kitguru
- Top 10 Google Chromecast apps you should install @ The Inquirer
- Azure promises to guard virtual machines against migration dangers @ The Register
- 'Software amplifier' boosts quantum signals @ The Register
- Intel Desktop Roadmap Update - Devil's Canyon, Broadwell @ Legit Reviews
- Intel Talks Haswell-E, Broadwell, Devil's Canyon & More @ Hardware Canucks
- Intel to renew commitment to desktop PCs with a slew of new CPUs
- We're being royalty screwed! Pandora blames price rise on musos wanting money @ The Register
Subject: General Tech, Graphics Cards | March 19, 2014 - 08:26 PM | Ryan Shrout
Tagged: live blog, gdc 14, dx12, DirectX 12, DirectX
UPDATE: If you are looking for the live blog information including commentary and photos, we have placed it in archive format right over here. Thanks!!
It is nearly time for Microsoft to reveal the secrets behind DirectX 12 and what it will offer PC gaming going forward. I will be in San Francisco for the session and will be live blogging from it as networking allows. We'll have a couple of other PC Perspective staffers chiming in as well, so it should be an interesting event for sure! We don't know how much detail Microsoft is going to get into, but we will all know soon.
Microsoft DirectX 12 Session Live Blog
Thursday, March 20th, 10am PDT
You can sign up for a reminder using the CoverItLive interface below or you can sign up for the PC Perspective Live mailing list. See you Thursday!
Subject: General Tech, Shows and Expos | March 19, 2014 - 08:15 PM | Scott Michaud
Tagged: unreal engine 4, gdc 14, GDC, epic games
Game developers, from indies to giants, can now access Unreal Engine 4 with a $19/month subscription (plus 5% of revenue from resulting sales). This is a much different model from UDK, which was free for developing games with its precompiled builds until commercial release, at which point an upfront fee and a 25% royalty applied. For Unreal Engine 4, however, this $19 monthly fee also gives you full C++ source code access (which I have wondered about since the announcement that UnrealScript no longer exists).
Of course, the Unreal Engine 3-based UDK is still available (and just recently updated).
This is definitely interesting and, I believe, a response to publishers doubling-down on developing their own engines. EA has basically sworn off engines outside of their own Frostbite and Ignite technologies. Ubisoft has only announced or released three games based on Unreal Engine since 2011; Activision has announced or released seven in that time, three of which were in that first year. Epic Games has always been very friendly to smaller developers and, with the rise of the internet, it is becoming much easier for indie developers to release content through Steam or even their own websites. These developers now have a "AAA" engine, which I think almost anyone would agree that Unreal Engine 4 is, with an affordable license (and full source access).
Speaking of full source access, licensees can access the engine at Epic's GitHub. While a top-five publisher might hesitate to share fixes and patches, the army of smaller developers might share and share-alike. This could lead to Unreal Engine 4 acquiring its own features rapidly. Epic highlights their Oculus VR, Linux and Steam OS, and native HTML5 initiatives but, given community support, there could be pushes into unofficial support for Mantle, TrueAudio, or other technologies. Who knows?
A sister announcement, albeit a much smaller one, is that Unreal Engine 4 is now part of NVIDIA's GameWorks initiative. This integrates various NVIDIA SDKs, such as PhysX, into the engine. The press release quote from Tim Sweeney is as follows:
Epic developed Unreal Engine 4 on NVIDIA hardware, and it looks and runs best on GeForce.
Another brief mention is that Unreal Engine 4 will have expanded support for Android.
Subject: General Tech, Shows and Expos | March 19, 2014 - 05:00 PM | Scott Michaud
Tagged: Mantle, gdc 14, GDC, crytek, CRYENGINE
While I do not have too many details otherwise, Crytek and AMD have announced that mainline CRYENGINE will support the Mantle graphics API. CRYENGINE, by Crytek, now sits alongside Frostbite, by Dice, and Nitrous, by Oxide Games, as engines which support that alternative to DirectX and OpenGL. This comes little more than a week after their announcement of native Linux support with their popular engine.
The tape has separate draw calls!
Crytek has been "evaluating" the API for quite some time now, showing interest back at the AMD Developer Summit. Since then, they have apparently made a clear decision on it. It is also not the first time that CRYENGINE has been publicly introduced to Mantle: Chris Roberts' Star Citizen, also powered by the 4th Generation CRYENGINE, has already announced support for the graphics API. Of course, there is a large gap between having a licensee do the legwork to include an API and having the engine developer provide you supported builds (that would be like saying Unreal Engine 3 supports the original Wii).
Hopefully we will learn more as GDC continues.
Editor's (Ryan) Take:
As the week at GDC has gone on, AMD continues to push forward with Mantle and calls Crytek's implementation of the low-level API "a huge endorsement" of the company's direction and vision for the future. Many, including myself, have considered that the pending announcement of DX12 would be a major setback for Mantle, but AMD claims that view is "short-sighted" and that, as more developers come into the Mantle ecosystem, it is proof AMD is doing the "right thing."
Here at GDC, AMD told us they have expanded the number of beta Mantle members dramatically with plenty more applications (dozens) in waiting. Obviously this could put a lot of strain on AMD for Mantle support and maintenance but representatives assure us that the major work of building out documentation and development tools is nearly 100% behind them.
If stories like this one over at SemiAccurate are true, and Microsoft's DirectX 12 really will be nearly identical to AMD Mantle, then it makes sense that developers serious about new gaming engines can get a leg up on projects by learning Mantle today. Applying that knowledge to the DX12 API upon its release could speed up development and improve implementation efficiency. From what I am hearing from the few developers willing to even mention DX12, Mantle is much further along in its release (late beta) than DX12 is (early alpha).
AMD was indeed talking with Microsoft and sharing the development of Mantle "every step of the way," and AMD has stated on several occasions that there were two possible outcomes for Mantle: it either becomes, or inspires, a new industry standard in game development. Even if DX12 is more or less a carbon copy of Mantle, forcing NVIDIA to implement that API style with DX12's release, AMD could potentially have an advantage in gaming performance and support between now and Microsoft's DirectX release. That could be as much as a full calendar year from the reports we are getting at GDC.
Subject: General Tech | March 19, 2014 - 04:15 PM | Jeremy Hellstrom
Tagged: Homeworld, Homeworld 2, kick ass, Homeworld Remastered, gearbox
If you never had the chance to play Homeworld and Homeworld 2 then there is some good news you should hear; if you did play them then the news is even better! Gearbox is not just re-releasing the two games: they have redone the cinematics, models, textures, and effects, and added support for up to 4K resolution. The news at Rock, Paper SHOTGUN makes it sound like some content from mods might be included, as well as revamped multiplayer. Maybe they will up the quality to the point where some later levels chug just as hard as they did when you first sat down in front of your mothership.
"We learned last year that Gearbox were planning to re-release the enormously loved Homeworld games. Having plucked the license from the THQ jumble sale, apparently because their Chief Creative Officer Brian Martel has maximum love for the series, they made clear their intentions to release an HD version of the first two games. They’re upping their terminology now, from “HD” to “Remastered“."
Here is some more Tech News from around the web:
- Nostalgiablivion: Morrowind's First Quests In Skywind @ Rock, Paper, SHOTGUN
- Frog Fractions 2′s Kickstarter Exists, Is Utterly Insane @ Rock, Paper, SHOTGUN
- Boooooo! - The Witcher 3 Delayed To 2015 @ Rock, Paper, SHOTGUN
- Titanfall @ The Inquirer
- AMD Mantle & TrueAudio Patch for THIEF Tested @ Legit Reviews
- Sony announces Project Morpheus virtual reality headset @ Hexus
Subject: General Tech, Shows and Expos | March 19, 2014 - 01:20 PM | Jeremy Hellstrom
Tagged: Imagination Technologies, gdc 14, wizard, ray tracing
The Tech Report visited Imagination Technologies' booth at GDC, where they were showing off a new processor, the Wizard GPU. It is based on the PowerVR Series6XT Rogue graphics processor and is specifically designed to accelerate ray tracing, a topic we haven't heard much about lately. They describe it as capable of processing 300 million rays and 100 million dynamic triangles per second, which translates to 7 to 10 rays per pixel at 720p and 30Hz, or 3 to 5 rays per pixel at 1080p and 30Hz. That is not bad, though Imagination Technologies estimates that movies render at a rate of 16 to 32 rays per pixel, so it may be a while before we see a Ray Tracing slider under Advanced Graphics Options.
"When we visited Imagination Technologies at CES, they were showing off some intriguing hardware that augments their GPUs in order to accelerate ray-traced rendering. Ray tracing is a well-known and high-quality form of rendering that relies on the physical simulation of light rays bouncing around in a scene. Although it's been used in movies and in static scene creation, ray tracing has generally been too computationally intensive to be practical for real-time graphics and gaming. However, Imagination Tech is looking to bring ray-tracing to real-time graphics—in the mobile GPU space, no less—with its new family of Wizard GPUs."
Here is some more Tech News from around the web:
- MoOx contacts make p-type transistor @ Nanotechweb
- Surrender your crypto keys or you're off to chokey, says Australia @ The Register
- Win XP holdouts storm eBay and licence brokers, hiss: Give us all your Windows 7 @ The Register
- Ubuntu Now Runs Well On The MacBook Air, Beats OS X In Graphics @ Phoronix
- Hidden 'Windigo' UNIX ZOMBIES are EVERYWHERE @ The Register
- Xbox boss Marc Whitten leaves Microsoft for Sonos as PS4 leads console sales @ The Inquirer
- Big Brother China Censors WeChat... Again @ TechARP
- Ergotech Freedom Quad 1-over-3 Desk Stand Review @ Techgage
- 10 Old Sprint Phones Can Now Get Totally Free Voice, Texts, and Data @ Gizmodo
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:03 AM | Scott Michaud
Tagged: WebCL, gdc 14, GDC
The Khronos Group has just ratified the standard for WebCL 1.0. The API is expected to provide a massive performance boost to web applications that are dominated by expensive functions which can be offloaded to parallel processors, such as GPUs and multi-core CPUs. The specification also allows WebCL to share buffers with WebGL via an extension.
Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target it for workloads. I have personally executed tasks on an NVIDIA GeForce GTX 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with CrossFire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
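WebCL itself is a JavaScript binding, but its platform/device model is a direct mirror of OpenCL's. As a rough sketch of the enumeration step described above, here is the equivalent flow using the standard OpenCL C++ bindings (illustrative only; the WebCL calls have the same shape, just in JavaScript):

```cpp
#include <CL/cl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Enumerate every OpenCL platform (vendor driver) on the system...
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    for (auto &p : platforms) {
        // ...and every compute device it exposes: GPUs, CPUs, accelerators.
        std::vector<cl::Device> devices;
        p.getDevices(CL_DEVICE_TYPE_ALL, &devices);
        for (auto &d : devices)
            std::cout << d.getInfo<CL_DEVICE_NAME>() << "\n";
        // A WebCL app picks devices from this list, then builds a program and
        // enqueues kernels on a per-device command queue, so two jobs can run
        // on two different GPUs at once.
    }
}
```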
Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft. This leaves both Apple and Microsoft notably absent. Even then, I am not sure how much interest exists within Google, Mozilla, and Opera to take it from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than WebCL.
Of course, that can change with just a single "killer app", library, or middleware.
I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android, in favor of their "Renderscript" abstraction. The performance of a graphics processor is also significant leverage for a native app. There is little, otherwise, that cannot be accomplished with Web standards except a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, however, there is always hope.
The specification is available at the Khronos website.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM | Scott Michaud
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL
The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, interoperability between APIs, and support for many operating systems, including Android and 64-bit Linux.
The headline on the list of changes is the move that EGLImage objects make, from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is the increased security around creating a graphics context, primarily designed for WebGL applications, which any arbitrary website can be. Further down the list is the EGLSync object, which allows further partnership between OpenGL (and OpenGL ES) and OpenCL: the GPU may not need CPU involvement when scheduling between tasks on both APIs.
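To make the EGLImage promotion concrete, here is a rough sketch of wrapping an existing OpenGL texture so another context (or API) on the same display can bind it without a copy. The signatures are EGL 1.5's; the helper name is mine, and it assumes the display, context, and texture already exist:

```cpp
#include <EGL/egl.h>
#include <cstdint>

// Wrap a GL 2D texture in an EGLImage (core in EGL 1.5, no extension needed).
EGLImage shareTexture(EGLDisplay dpy, EGLContext glCtx, unsigned int texId) {
    const EGLAttrib attribs[] = { EGL_NONE };
    return eglCreateImage(dpy, glCtx, EGL_GL_TEXTURE_2D,
                          (EGLClientBuffer)(std::uintptr_t)texId, attribs);
}
```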
During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.
The EGL 1.5 spec is available at the Khronos website.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:01 AM | Scott Michaud
Tagged: SYCL, opencl, gdc 14, GDC
To gather community feedback, the provisional specification for SYCL 1.2 has been released by The Khronos Group. SYCL builds upon OpenCL with the C++11 standard. This technology is built on another Khronos platform, SPIR, which allows the OpenCL C programming language to be mapped onto LLVM, with its hundreds of compatible languages (and Khronos is careful to note that they intend for anyone to be able to make their own compatible alternative language).
In short, SPIR allows many languages which can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers through SPIR.
As stated earlier, Khronos wants anyone to make their own compatible language:
While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.
SYCL 1.2 supports OpenCL 1.2 and they intend to develop it alongside OpenCL. Future releases are expected to support the latest OpenCL 2.0 specification and keep up with future developments.
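For the flavor of it, here is a minimal single-source sketch in the SYCL style: the lambda at the bottom is what gets compiled through SPIR for the OpenCL device, while everything around it is plain C++11. This is based on the provisional 1.2 spec, so exact headers and namespaces may differ in shipping implementations:

```cpp
#include <CL/sycl.hpp>
#include <vector>

int main() {
    const size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N);
    {
        namespace sycl = cl::sycl;
        sycl::queue q;  // default-selected OpenCL device
        sycl::buffer<float, 1> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> bufC(c.data(), sycl::range<1>(N));
        q.submit([&](sycl::handler &cgh) {
            auto ka = bufA.get_access<sycl::access::mode::read>(cgh);
            auto kb = bufB.get_access<sycl::access::mode::read>(cgh);
            auto kc = bufC.get_access<sycl::access::mode::write>(cgh);
            // This lambda is the device kernel; no separate OpenCL C source.
            cgh.parallel_for<class vadd>(sycl::range<1>(N),
                [=](sycl::id<1> i) { kc[i] = ka[i] + kb[i]; });
        });
    }  // buffers go out of scope here and write results back into c
}
```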
The SYCL 1.2 provisional spec is available at the Khronos website.
Subject: General Tech, Storage | March 18, 2014 - 06:58 PM | Ryan Shrout
Tagged: ssd, Samsung, ocz, Intel, corsair
Back in January I wrote a short editorial that asked the question: "Is now the time to buy an SSD?" At that time we were looking at a combination of price drops and a lack of upcoming hardware releases. Since that was published we have seen the release of the Intel 730 Series SSDs and even the new Crucial M550. While those are interesting drives to be sure (review pending on the M550), they aren't changing our opinions on the currently available, and incredibly cheap, solid state options.
While looking for some new hardware for the office, I found that the 1TB Samsung 840 EVO is now at an all-time low of $469! That is one of the faster SSDs on the market, and one of Allyn's favorites, for $0.469/GB!! I have included an updated table below with some of the most popular SSDs and their prices.
| Drive | Capacity | Price per GB | Price |
| --- | --- | --- | --- |
| Samsung 840 EVO | 120 GB | $0.69/GB | $83 - Amazon |
| Samsung 840 EVO | 250 GB | $0.55/GB | $139 - Amazon |
| Samsung 840 EVO | 500 GB | $0.51/GB | $259 - Amazon |
| Samsung 840 EVO | 750 GB | $0.51/GB | $388 - Amazon |
| Samsung 840 EVO | 1000 GB | $0.46/GB | $469 - Amazon |
| Samsung 840 Pro | 128 GB | $0.92/GB | $119 - Amazon |
| Samsung 840 Pro | 256 GB | $0.77/GB | $199 - Amazon |
| Samsung 840 Pro | 512 GB | $0.74/GB | $413 - Amazon |
| Intel 530 Series | 120 GB | $0.91/GB | $89 - Amazon |
| Intel 530 Series | 180 GB | $0.80/GB | $144 - Amazon |
| Intel 530 Series | 240 GB | $0.62/GB | $149 - Amazon |
| Intel 530 Series | 480 GB | $0.87/GB | $419 - Amazon |
| Crucial M500 Series | 120 GB | $0.57/GB | $69 - Amazon |
| Crucial M500 Series | 240 GB | $0.49/GB | $119 - Amazon |
| Crucial M500 Series | 480 GB | $0.47/GB | $229 - Amazon |
| Crucial M500 Series | 960 GB | $0.45/GB | $439 - Amazon |
The biggest price drops were seen in the higher-capacity drives, including the Samsung 840 EVO 1TB and 750GB models, the Intel 530 Series 480GB drive, and even the Crucial M500 960GB and 480GB drives. Numerically, the best value is the 960GB Crucial M500 at $0.45/GB, but it is followed very closely by that 1TB Samsung 840 EVO.
As of now, the Intel 730 Series of SSDs is available for sale on Amazon.com, but its price per GB doesn't really match that of the EVO or M500. They are great drives, just read Allyn's review to see the proof of that, but they are targeted at the very performance-conscious. The Crucial M550 is brand new and looks interesting; expect us to dive more into that line very soon.
For me personally, grabbing a 750GB SSD is incredibly enticing and I think I'll find a handful in my cart to update our older 180GB SSD test beds.
Subject: General Tech, Shows and Expos | March 18, 2014 - 12:38 PM | Jeremy Hellstrom
Tagged: gdc 14, amd, ocz, Vector 150
If you make it to the Game Developers Conference this year, make sure to pay a visit to the AMD booth, where you can get a look at OCZ's Vector 150 drives in action. They aim to show that these drives are not only good for the gamer, they are good for the game designer as well.
OCZ Vector 150 SSDs on Display at AMD Booth #1024, March 17-21 in San Francisco, CA
SAN JOSE, CA - March 17, 2014 - OCZ Storage Solutions - a Toshiba Group Company and leading provider of high-performance solid state drives (SSDs) for computing devices and systems, today announced its partnership with AMD to showcase the power of high performance technology at the Game Developer Conference (GDC) March 17-21 at the Moscone Center in San Francisco, CA. AMD's demo systems will feature best-in-class Vector 150 Series solid state drives demonstrating how developers can enhance productivity and efficiency in their work.
"We are excited to partner with AMD for the upcoming Game Developers Conference to support the fast growing interactive game development industry," said Alex Mei, CMO for OCZ Storage Solutions. "OCZ is dedicated to delivering premium solid state storage solutions that are not only a useful tool for developers, but also meet the unique demands of enthusiasts and gamers on all levels."
"Our presence at the 2014 Game Developer Conference will feature a number of high-performance gaming systems running 24/7 in harsh conditions," said Darren McPhee, director of product marketing, Graphics Business Unit, AMD. "We knew that OCZ Vector SSDs were uniquely ready to meet the reliability requirements of our gaming installations. Between the high performance graphics of AMD Radeon™ GPUs and the fast load times of OCZ Vector SSDs, visitors to AMD's booth in the South Hall are in for a great gaming experience!"
GDC is the world's largest game industry event, attracting over 23,000 professionals including programmers, artists, producers, designers, audio professionals, business decision-makers, and other digital gaming industry authorities. OCZ's premium Vector 150 Series, designed for workstation users along with bleeding-edge enthusiasts, will be in AMD systems that promote improved CPU and GPU performance, enhanced rendering, speed, and overall system performance. Professional developer applications demand peak transfer speeds and ultra-high performance; OCZ SSDs offer 100 times faster access to data, quicker boot ups, faster file transfers, and a more responsive computing experience than hard drives.
GDC enables OCZ to team up with valued industry partners like AMD to reaffirm the Company's commitment to the gaming segment, and promote the use of flash storage for both developers and the gamers themselves.
Subject: General Tech | March 17, 2014 - 02:33 PM | Jeremy Hellstrom
Tagged: Internet, icann, iana, ntia
The root DNS of the internet has been run by the National Telecommunications and Information Administration, a division of the US Government, though in truth it contracted out this responsibility to the non-profit Internet Corporation for Assigned Names and Numbers and its Internet Assigned Numbers Authority. This will not really change much in the way the root DNS is handled; it is more a handover of official responsibility as the US Government gives up its past role as overseer of the internet. If ICANN and IANA are unfamiliar to you, pop over to The Register for a very brief overview of what they actually do.
"The US Department of Commerce is ready to leave the keys to the internet's worldwide DNS system in the hands of non-profit net overseer ICANN."
Here is some more Tech News from around the web:
- What could you do with 264 Sandy Bridge cores and 6TB of RAM? @ The Register
- Is no browser safe? Security bods poke holes in Chrome, Safari, IE, Firefox and earn $1m @ The Register
- Digitimes Research: Insufficient panel supplies to limit 2K-resolution smartphone penetration in 2014
- Origami Microscope for Just 50 Cents @ MAKE:Blog
- Intel to increase focus on desktop computers @ Kitguru
- CeBIT 2014: A Quick Tour @ Madshrimps
- CeBIT 2014: Fractal Shows off Node 804 and Watercooling Solution @ Madshrimps
- Office 2013 Crashes on Start Up - Solved - PCSTATS TechTip
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM | Scott Michaud
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC
Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.
The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add features, as available, to their existing apps. On the device side, most functionality is expected to arrive as a driver update.
OpenGL ES, which stands for OpenGL for Embedded Systems (though it is rarely branded as such), delivers what the group considers the most important features of the graphics library to the majority of devices. The Khronos Group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands the feature-space it occupies.
OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0 although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, which this announcement puts us closer to.
The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a program before they are used. I inquired about the possibility that compute devices could be targeted (on devices with two GPUs) and possibly load-balanced, in a similar manner to WebCL, but no confirmation or denial was provided (although the representative did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).
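Putting those two pieces together, here is a sketch of that particle scenario (my own illustration; it assumes the compute and draw programs and the buffers were built elsewhere with the usual GLES boilerplate):

```cpp
#include <GLES3/gl31.h>

// ESSL 3.10 compute shader (the source 'computeProg' below was built from):
// nudges particle positions stored in a shader storage buffer.
static const char *kComputeSrc =
    "#version 310 es\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Particles { vec4 pos[]; };\n"
    "void main() {\n"
    "    pos[gl_GlobalInvocationID.x].y += 0.01;\n"
    "}\n";

void stepAndDraw(GLuint computeProg, GLuint drawProg,
                 GLuint indirectBuf, GLuint workGroups) {
    glUseProgram(computeProg);
    glDispatchCompute(workGroups, 1, 1);
    // Make the SSBO writes visible to the commands that consume them.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT | GL_COMMAND_BARRIER_BIT);

    glUseProgram(drawProg);
    // The draw's parameters live in a GPU-side buffer, so the CPU never needs
    // to read back how many particles the compute pass produced.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glDrawArraysIndirect(GL_POINTS, nullptr);
}
```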
The OpenGL ES 3.1 spec is available at the Khronos website.
Subject: General Tech | March 16, 2014 - 10:27 PM | Sebastian Peak
Tagged: supercomputer, solid state drive, NSF, flash memory
We know that SSDs help any system perform better by reducing the storage bottlenecks we all experienced with hard disk drives. But how far can flash storage go in increasing performance if money is no object? Enter the multi-million dollar world of supercomputers. Historically, supercomputers have relied on the addition of more CPU cores to increase performance, but two new system projects funded by the National Science Foundation (NSF) will try a different approach: obscene amounts of high-speed flash storage!
The news comes as the NSF is requesting a cool $7 billion in research money for 2015, and construction has apparently already begun on two new storage-centered supercomputers. Memory and high-speed flash storage arrays will be loaded on the Wrangler supercomputer at Texas Advanced Computing Center (TACC), and the Comet supercomputer at the San Diego Supercomputer Center (SDSC).
Check out the crazy numbers from TACC's Wrangler: a combination of 120 servers, each with Haswell-based Xeon CPUs, and a total of 10 petabytes (10,000TB!) of high-performance flash data storage. The NSF says the supercomputer will have 3,000 processing cores dedicated to data analysis, with flash storage layers for analytics. The Wrangler supercomputer's bandwidth is said to be 1TB/s, with 275 million IOPS! By comparison, the Comet supercomputer will have "only" 1,024 Xeon CPU cores, with a 7 petabyte high-speed flash storage array. (Come on, guys... That's like, wayyy less bytes.)
Supercomputer under construction…probably (Image credit CBS/Paramount)
The supercomputers are part of the NSF's "Extreme Digital" (XD) research program, and their current priorities are "relevant to the problems faced in computing today". Hmm, kind of makes you want to run a big multi-SSD deathwish RAID, huh?