Subject: General Tech, Cases and Cooling | March 21, 2013 - 01:52 PM | Tim Verry
Tagged: galileo, newton, akasa, nuc, case, thin mini-itx
FanlessTech recently spotted two new fanless small form factor cases from Akasa ahead of their official launch. The Akasa Galileo and Akasa Newton are compatible with thin Mini-ITX and Intel's Next Unit of Computing (NUC) motherboards, respectively.
Both cases are constructed of aluminum, have VESA mounting holes, and double as fanless heatsinks for your components. The Galileo is 37mm thick and can cool processors rated up to a 35W TDP. The Newton is a small case with fins around the sides that increase surface area for better cooling and add some aesthetic flair.
According to FanlessTech, the two PC cases will be officially unveiled at Computex in Taipei, Taiwan this summer. There is no word yet on pricing or availability, however.
Subject: General Tech | March 21, 2013 - 12:49 PM | Jeremy Hellstrom
Tagged: Vengeance K70, corsair, cherry mx red
FREMONT, California — March 21, 2013 — Corsair, a worldwide designer of high-performance PC gaming peripherals, today announced the Vengeance K70 fully mechanical gaming keyboard.
The new Vengeance K70 gaming keyboard is built on a rugged brushed-aluminum chassis and features highly responsive Cherry MX Red mechanical switches under every key. The high-performance switches, combined with the keyboard's 100% anti-ghosting matrix, 20-key rollover, and 1000Hz reporting rate, provide fast, accurate input for gaming.
The Vengeance K70 gaming keyboards are available in two color schemes: silver aluminum with blue backlighting, and anodized black with deep red backlighting. Overall backlighting can be adjusted to four levels of intensity, and each key is individually backlit, so the lighting for any key can be independently enabled or disabled. This key-by-key customization allows users to highlight just the keys they need to emphasize and then save the setting directly to the K70's onboard memory. In addition, the Vengeance K70 comes with alternate colored, contoured keycaps for the WASD and 1-6 keys to allow additional customization.
"When we launched the Vengeance K60, customers loved the look and quality, but some wanted a backlit version," said Ruben Mookerjee, VP and General Manager of the Peripherals Business Unit at Corsair. "In typical Corsair fashion, we over-delivered and created Vengeance K70 with key-by-key backlighting, mechanical switches on every key, and two color schemes."
The Vengeance K70 also features dedicated multimedia controls that allow users to play, stop, pause, skip tracks, and adjust volume. An extra USB connector is provided for attaching USB devices such as a Vengeance gaming mouse or headset. A removable soft-touch wrist rest provides comfort for long gaming or typing sessions.
See Vengeance K70 at PAX East from March 22-24
The Vengeance K70 keyboard will make its public debut at Corsair's booth at PAX East in Boston from March 22-24. Corsair is located in booth 1062.
Pricing and Availability
The Corsair Vengeance K70 will be available in April at suggested price of $129.99.
Subject: General Tech | March 21, 2013 - 12:35 PM | Jeremy Hellstrom
Tagged: amd, price cuts, Richland, a4-4000
It might be worth waiting until next month if you are going to be building a low-cost AMD-based system, as prices on the A8-5600K, FX-8320, FX-6300, and FX-4300 will be dropping in April. You can also expect to see the A4-4000 hit stores the following month at a very low price, according to DigiTimes, though they have nothing new to report about the release date of the Richland chips. It is nice to have low-cost CPUs available on the market, but it seems odd when even the lowest-priced motherboard will run you about twice as much as the CPU you will be putting on it.
"AMD plans to cut some of its APU prices at the end of April to welcome its next-generation Richland APUs, set for launch in early June, according to sources from PC players.
Prices of AMD's APUs including A8-5600K, FX-8320, FX-6300 and FX-4300 will see 8-15% drops and the CPU maker will also start shipping its A4-4000 in mid-April for sales in mid-May priced at US$40."
Here is some more Tech News from around the web:
- Poking Holes In Samsung's Android Security @ Slashdot
- Nvidia and ARM: It's a parallel, parallel, parallel world @ The Register
- Skype tips up for Blackberry 10 as it hits 100,000 apps @ The Inquirer
- Google’s Note-Taking Service Keep Is Live, And It’s Wonderful @ Gizmodo
- Windows 8 Outperforming Ubuntu Linux With Intel OpenGL Graphics @ Phoronix
- Unigine Valley 1.0 Benchmark Tool Walk Through @ OCC
- Team AU Overclocking in Perth with Liquid Nitrogen - Deanzo's Thoughts @ Tweaktown
GTC 2013: Cortexica Vision Systems Talks About the Future of Image Recognition During the Emerging Companies Summit
Subject: General Tech, Graphics Cards | March 20, 2013 - 09:44 PM | Tim Verry
Tagged: video fingerprinting, image recognition, GTC 2013, gpgpu, cortexica, cloud computing
The Emerging Companies Summit is a series of sessions at NVIDIA's GPU Technology Conference (GTC) that gives the floor to CEOs from several up-and-coming technology startups. Earlier today, the CEO of Cortexica Vision Systems took the stage to talk briefly about the company's products and future direction, and to answer questions from a panel of industry experts.
If you tuned into NVIDIA's keynote presentation yesterday, you may have noticed the company showing off a new image recognition technology. That technology is being developed by a company called Cortexica Vision Systems. While it cannot perform facial recognition, it is capable of identifying everything else, according to the company's CEO, Ian McCready. Currently, Cortexica employs a cluster of approximately 70 NVIDIA graphics cards, but the system is capable of scaling beyond that. McCready estimates that about 100 GPUs and a CPU would be required by a company like eBay, should it want to implement Cortexica's image recognition technology in-house.
The Cortexica technology uses images captured by a camera (such as the one in your smartphone), which are then sent to Cortexica's servers for processing. The GPUs in the Cortexica cluster handle the fingerprint creation task while the CPU does the actual lookup in the database of known fingerprints, either finding an exact match or returning similar image results. According to Cortexica, fingerprint creation takes only 100ms, and as more powerful GPUs make it into mobile devices, it may become possible to do the fingerprint creation on the device itself, reducing the time between taking a photo and getting relevant results back.
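Cortexica has not published its algorithms, but the split described above (GPU builds a fingerprint, CPU looks it up) can be sketched in miniature. In this toy Python sketch, the function names and the hash-based "fingerprint" are hypothetical stand-ins; a real perceptual fingerprint is similarity-preserving, which a cryptographic hash is not:

```python
import hashlib

def create_fingerprint(image_bytes):
    # Stand-in for the GPU stage: reduce an image to a compact,
    # comparable signature (Cortexica quotes ~100ms for this step).
    return hashlib.sha256(image_bytes).hexdigest()[:16]

def lookup(fingerprint, database):
    # Stand-in for the CPU stage: exact match first, otherwise
    # return entries whose signatures are "close" (here, a shared prefix).
    if fingerprint in database:
        return [database[fingerprint]]
    return [name for fp, name in database.items() if fp[:4] == fingerprint[:4]]

db = {create_fingerprint(b"red dress photo"): "red dress"}
print(lookup(create_fingerprint(b"red dress photo"), db))  # ['red dress']
```

The interesting engineering is all hidden inside the two stand-ins: the GPU stage must produce signatures that stay similar when the image is rotated, rescaled, or re-lit, which is why Cortexica leans on a large GPU cluster rather than a simple hash.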
The image recognition technology is currently being used by eBay Motors in the US, UK, and Germany. Cortexica hopes to find a home with many of the fashion companies that would use the technology to allow people to identify, and ultimately purchase, clothing they take photos of on television or in public. The technology can also perform 360-degree object recognition, identify logos as small as 0.4% of the screen, and identify videos. In the future, Cortexica hopes to reduce latency, improve recognition accuracy, and add more search categories. Cortexica is also working on enabling an "always on" mobile device that will constantly be identifying everything around it, which is both cool and a bit creepy. With mobile chips like Logan and Parker coming in the future, Cortexica hopes to be able to do on-device image recognition, which would greatly reduce latency and allow the use of the recognition technology while not connected to the internet.
The number of photos taken is growing rapidly: as many as 10% of all photos stored "in the cloud" were taken last year alone. Even Facebook, with its massive data centers, is moving to a cold-storage approach to save on the electricity costs of storing and serving up those photos. And while some of these photos have relevant metadata, the majority do not; Cortexica claims its technology can get around that issue by identifying photos, as well as finding similar photos, using its algorithms.
Stay tuned to PC Perspective for more GTC coverage!
Additional slides are available after the break:
Subject: Editorial, General Tech, Processors, Shows and Expos | March 20, 2013 - 06:26 PM | Scott Michaud
Tagged: windows rt, nvidia, GTC 2013
NVIDIA develops processors, but without an x86 license it can only power ARM-based operating systems. When it comes to Windows, that means Windows Phone or Windows RT. The latter segment of the market has seen disappointing sales according to multiple OEMs, a shortfall Microsoft blames on the OEMs themselves, but the jolly green GPU company is not crying doomsday.
NVIDIA just skimming the Surface RT, they hope.
As reported by The Verge, NVIDIA CEO Jen-Hsun Huang was optimistic that Microsoft would eventually let Windows RT blossom. He noted how Microsoft very often "gets it right" at some point when they push an initiative. And it is true, Microsoft has a history of turning around perceived disasters across a variety of devices.
They also have a history of, as they call it, "knifing the baby."
I think there is a very real fear for some that Microsoft could consider Intel's latest offerings good enough to stop pursuing ARM. Of course, the more they pursue ARM, the more their business model will rely upon the-interface-formerly-known-as-Metro and likely all of its certification politics. As such, I think it is safe to say that I am watching the industry teeter on a fence with a bear on one side and a pack of rabid dogs on the other. On the one hand, Microsoft jumping back to Intel would allow them to perpetuate the desktop and all of the openness it provides. On the other hand, even if they stick with Intel, they will likely just kill the desktop anyway, to avoid user confusion and for the security benefits of certification. We might just have fewer processor manufacturers when they do that.
So it could be that NVIDIA is confident that Microsoft will push Windows RT, or it could be that NVIDIA is pushing Microsoft to continue to develop Windows RT. Frankly, I do not know which would be better... or more accurately, worse.
Subject: General Tech | March 20, 2013 - 02:28 PM | Jeremy Hellstrom
HP Wireless Audio for $54.99 with Free Shipping (normally $100 - use coupon code: 15LOGICBUY).
15.6" Dell Inspiron 15R Special Edition Core i5 + 2GB Radeon HD 7730M Laptop w/Backlit Keyboard, 8GB RAM for $550 with free shipping (normally $650 - use coupon code: 2Q?XNXR2DXQ13G).
60" Sharp LC-60E69U 1080p 120Hz LCD HDTV for $698.00 with free shipping (normally $1,100).
20" HP Pavilion 20xi 1600 x 900 IPS LED-backlit LCD Monitor for $105 with free shipping (normally $130 - use coupon code: 15LOGICBUY).
HP bd335i Blu-ray Burner (Retail) for $105.00 with free shipping (normally $130 - use coupon code: 15LOGICBUY).
Subject: General Tech, Graphics Cards | March 20, 2013 - 01:47 PM | Tim Verry
Tagged: tesla, tegra 3, supercomputer, pedraforca, nvidia, GTC 2013, GTC, graphics cards, data centers
There is a lot of talk about heterogeneous computing at GTC, in the sense of adding graphics cards to servers. If you have HPC workloads that can benefit from GPU parallelism, adding GPUs gives you computing performance in less physical space, and at lower power, than a CPU-only cluster (for equivalent TFLOPS).
However, there was a session at GTC that actually took things to the opposite extreme. Instead of a CPU-only cluster or a mixed cluster, Alex Ramirez (leader of the Heterogeneous Architectures Group at the Barcelona Supercomputing Center) is proposing a homogeneous GPU cluster called Pedraforca.
Pedraforca V2 combines NVIDIA Tesla GPUs with low power ARM processors. Each node is comprised of the following components:
- 1 x Mini-ITX carrier board
- 1 x Q7 module (which hosts the ARM SoC and memory)
  - Current config: one Tegra 3 @ 1.3GHz with 2GB DDR2
- 1 x NVIDIA Tesla K20 accelerator card (1170 GFLOPS)
- 1 x InfiniBand 40Gb/s card (Mellanox ConnectX-3)
- 1 x 2.5" SSD (SATA 3 MLC, 250GB)
The ARM processor is used solely for booting the system and facilitating GPU communication between nodes; it is not intended for computation. According to Dr. Ramirez, in situations where running code on a CPU would be faster, it would be best to have a small number of Intel Xeon-powered nodes do the CPU-favorable computing and then offload the parallel workloads to the GPU cluster over the InfiniBand connection (though this is less than ideal; Pedraforca is most efficient with data sets that can be processed solely on the Tesla cards).
While Pedraforca is not necessarily locked to NVIDIA's Tegra hardware, it is currently the only SoC that meets their needs. The system requires the ARM chip to have PCI-E support. The Tegra 3 SoC has four PCI-E lanes, so the carrier board is using two PLX chips to allow the Tesla and InfiniBand cards to both be connected.
The researcher stated that he is also looking forward to using NVIDIA's upcoming Logan processor in the Pedraforca cluster. It will reportedly be possible to upgrade existing Pedraforca clusters with the new chips by replacing the existing (Tegra 3) Q7 module with one that has the Logan SoC when it is released.
Pedraforca V2 has an initial cluster size of 64 nodes. While the speaker was reluctant to provide TFLOPS performance numbers, as they would depend on the workload, with 64 Tesla K20 cards it should provide respectable performance. The intent of the cluster is to save power by using a low-power CPU. If your server kernel and applications can run on GPUs alone, there are noticeable power savings to be had by switching from a ~100W Intel Xeon chip to a lower-power (approximately 2-3W) Tegra 3 processor. If you have a kernel that needs to run on a CPU, it is recommended to run the OS on an Intel server and transfer just the GPU work to the Pedraforca cluster. Each Pedraforca node is reportedly under 300W, with the Tesla card accounting for the majority of that figure. Despite the limitations, and the niche nature of the workloads and software necessary to get the full power-saving benefits, Pedraforca is certainly an interesting take on a homogeneous server cluster!
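A rough back-of-the-envelope using the figures above (all round, approximate numbers from the session, not measurements) shows where the savings come from, and why the Tesla card still dominates each node's power budget:

```python
NODES = 64
XEON_W = 100   # approximate host-CPU power of a Xeon-based node
TEGRA_W = 3    # approximate Tegra 3 power, per the session

# Swapping the host CPU saves roughly this much across the cluster.
cpu_savings_w = NODES * (XEON_W - TEGRA_W)
print(f"Host-CPU power saved across the cluster: {cpu_savings_w} W")  # 6208 W

# Each node is quoted at under ~300 W, mostly the Tesla K20, so the
# host-side saving is roughly a third of a node's power budget.
share = (XEON_W - TEGRA_W) / 300
print(f"~{share:.0%} of a 300 W node")  # ~32% of a 300 W node
```

In other words, the design only pays off if the Tesla cards are doing useful work; idle GPUs would quickly erase the host-side savings.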
In another session on the path to exascale computing, power use in data centers was listed as one of the biggest hurdles to reaching exaflop levels of performance. While Pedraforca is not the answer to exascale, it should at least be a useful learning experience in wringing the most parallelism out of code and pushing GPGPU to its limits, and that research will help other clusters use GPUs more efficiently as researchers explore the future of computing.
The Pedraforca project built upon research conducted on Tibidabo, a multi-core ARM CPU cluster, and CARMA (CUDA on ARM development kit) which is a Tegra SoC paired with an NVIDIA Quadro card. The two slides below show CARMA benchmarks and a Tibidabo cluster (click on image for larger version).
Stay tuned to PC Perspective for more GTC 2013 coverage!
Subject: General Tech | March 20, 2013 - 01:21 PM | Jeremy Hellstrom
Tagged: tomb raider, tressfx, gaming
Tomb Raider is as divisive a game as Halo: either you love the series or you can't understand why people are interested in it at all. As usual, [H]ard|OCP put the gaming considerations aside to look at the technology showcased in the game and to find the settings that provide the best gaming experience on several different single- and dual-GPU systems. Those who want to experience AMD's new TressFX feature will be glad to hear that you can enable that setting even on a GTX 660 Ti. As for general performance, high-end card owners will be able to use Super Sample AA while others will have to content themselves with FXAA. For resolutions over 1080p you are going to want a pair of GPUs, as single-GPU solutions struggled even at 1080p with High or Ultimate settings. Read on to see how your system will perform and discover which side of the fence [H] is on when it comes to Lara Croft.
"Tomb Raider is the first game to sport AMD's new TressFX feature. This DX11 effect creates a new sense of realism in-game with each strand of Lara's hair reacting to her movement and environmental features like wind and rain. Crystal Dynamics has worked hard to advance our expectations as gamers and enthusiasts alike!"
Here is some more Tech News from around the web:
- Waaagh-Face: Slitherine Announce Turn-Based 40K Game @ Rock, Paper, SHOTGUN
- StarCraft II Heart Of The Swarm @ Kitguru
- 150 Mods at Once (And a $1,500 PC) Give Skyrim a Next-Gen Makeover @ Wired
- Firaxis Talk Us Through Civilization V’s Brave New World @ Rock, Paper, SHOTGUN
- Crysis 3 @ eTeknix
- SimCity Tested, Benchmarked @ Techspot
- Battlefield 4 unveiling teased for 27th March @ HEXUS
- Duke Is Out: Duke Nukem 3D: Megaton Edition @ Rock, Paper, SHOTGUN
- Gears of War: Judgement @ The Inquirer
Subject: General Tech | March 20, 2013 - 01:02 PM | Jeremy Hellstrom
Tagged: DRAM, micron, ssd, Samsung, Hynix
It is perhaps not obvious to many, because of the huge number of DRAM resellers, but there are only three major manufacturers of DRAM left at this point. Apart from Micron, which claims top spot in this article on The Register, Samsung and Hynix are the only other big players left supplying DRAM. Considering the instability of memory and SSD pricing, it seems odd that a component would have only three possible sources; the instability could stem from the fact that many of the mergers are still rather recent or, in the case of Elpida, not quite complete yet. One very interesting comment from Kipp Bedard, Micron's investor relations VP, might also explain the volatility of flash: "there simply isn't enough NAND fab capacity to store even 20 per cent of the data people are generating." If demand outstrips supply by that order of magnitude, you can dictate almost any price you wish.
"When I first started at Micron, there were about 40 to 50 DRAM companies in the space," said Bedard. "And we spent most of the '80s with the Japanese deciding they wanted to own the DRAM space which they went from 10 per cent market share to about 90 per cent, [and] took all of the US companies out except for two, us and Texas Instruments."
Here is some more Tech News from around the web:
- Fusion-io gobbles Brit Linux SCSI gurus ID7 @ The Register
- Report: BlackBerry BYOD-ware doesn't pass UK.gov security test @ The Register
- Netatmo review: weather station with app @ Hardware.info
Subject: General Tech | March 20, 2013 - 12:05 AM | Tim Verry
Tagged: noctua, lga 1150, hsf, heatsink, haswell, cpu cooler
Noctua recently announced that it is providing free mounting kits to owners of existing coolers to make them compatible with Intel's upcoming LGA 1150 (Haswell) motherboards. The new NM-i115x mounting kit will allow enthusiasts to reuse their older Noctua coolers with the new platform without issue. The kit includes a new backplate with fixed struts and the necessary fasteners (screws, springs, and so on) to make alignment and mounting easier than on previous setups.
Because the LGA 1150 socket keeps the same mounting hole spacing as the current LGA 1156 and LGA 1155 sockets, many newer Noctua coolers will not need the mounting kit upgrade and can simply be installed in a Haswell machine as-is. In other words, if the heatsink worked with your Lynnfield, Sandy Bridge, or Ivy Bridge-based system, it will work in a Haswell system as well. According to Noctua, the following coolers are already compatible with Haswell:
NH-C14, NH-D14, NH-C12P SE14, NH-L12, NH-L9i, NH-U12P SE2, NH-U9B SE2
If your cooler was released prior to LGA 1156, you will need to grab the NM-i115x mounting kit upgrade by filling out this form. Noctua will make the kit available on its website as well as in retail stores (for a minimal charge, though the company did not provide specific pricing). You will need to provide proof of purchase for your existing cooler by sending Noctua a scan or screenshot of your invoice or receipt.
For more information on the NM-i115x, head over to the Noctua product page.
It is nice to see Noctua standing behind its products like this, even if it only affects the small number of users that will be making the jump from older platforms such as LGA 775 to LGA 1150.
Subject: General Tech, Graphics Cards | March 19, 2013 - 06:52 PM | Tim Verry
Tagged: GTC 2013, tyan, HPC, servers, tesla, kepler, nvidia
Server platform manufacturer TYAN is showing off several of its latest servers aimed at the high performance computing (HPC) market. The new servers range in size from 2U to 4U chassis and hold up to eight Kepler-based Tesla accelerator cards. The new product lineup consists of two motherboards and three barebones systems: the S7055 and S7056 are the motherboards, while the FT77-B7059, TA77-B7061, and FT48-B7055 are the barebones systems.
The TA77-B7061 is the smallest system, with support for two Intel Xeon E5-2600 processors and four Kepler-based Tesla accelerator cards. The FT48-B7055 has similar specifications but is housed in a 4U chassis. Finally, the FT77-B7059 is a 4U system with support for two Intel Xeon E5-2600 processors and up to eight Tesla accelerator cards. The S7055 supports a maximum of four GPUs while the S7056 can support two Tesla cards, though these are bare boards, so you will have to supply your own cards, processors, and RAM (of course).
According to TYAN, the new Kepler-based HPC systems will be available in Q2 2013, though there is no word on pricing yet.
Stay tuned to PC Perspective for further GTC 2013 Coverage!
Subject: General Tech | March 19, 2013 - 06:25 PM | Jeremy Hellstrom
Tagged: input, roccat, apuri hybrid, gadget
The Apuri Hybrid USB Hub & Mouse Bungie looks a little familiar, though it offers more functionality than the Cooler Master version as it is also a 4-port powered USB 2.0 hub with an LED. Not only will your mouse tail look snazzy hanging from a scorpion-like device, but you can also keep a variety of USB devices close to hand. Neoseeker was a little disappointed at the $40 price tag, rather high for a non-USB 3.0 hub, but if you are looking to get your cords out of the way the added functionality is a nice feature.
"If you have a very particular need for a USB hub that also serves to reduce mouse cable clutter and keep your work area in order, ROCCAT just might have you covered with the Apuri. This unique peripheral is both mouse cable bungie and 4-port USB 2.0 hub in one scorpion-shaped package, so hit our review to see whether the Apuri delivers good value for its proposed convenience."
Here is some more Tech News from around the web:
- Razer DeathAdder 2013 (4G) Gaming Mouse Review @ Custom PC Review
- FUNC MS-3 Laser Gaming Mouse @ Tweaktown
- AZiO Levetron GM533U Gaming Mouse @ Benchmark Reviews
- Razer Naga Hex, Goliathus League of Legends Gaming Peripherals Review @ Custom PC Review
- AZiO Levetron GM533U Gaming Mouse Review @ OCC
- Genius Gila GX Gaming Mouse @ Benchmark Reviews
- Razer Ouroboros Wireless Gaming Mouse @ eTeknix
- ROCCAT Sense Chrome Blue Gaming Mouse Pad Review @ Neoseeker
- Cooler Master CM Storm Power-RX Mouse Pad Review @ Ninjalane
- Roccat ISKU FX Gaming Keyboard @ eTeknix
- Razer Blackwidow Ultimate gaming keyboard @ Rbmods
- Enermax Aurora Micro Wireless Keyboard @ Kitguru
Subject: General Tech, Graphics Cards | March 19, 2013 - 02:55 PM | Tim Verry
Tagged: unified virtual memory, ray tracing, nvidia, GTC 2013, grid vca, grid, graphics cards
Today, NVIDIA's CEO Jen-Hsun Huang stepped on stage to present the GTC keynote. In the presentation (which was live streamed on the GTC website and archived here), NVIDIA discussed five major points, looking back over the past year and into the future of its mobile and professional products. In addition to the product roadmap, NVIDIA discussed the state of computer graphics and GPGPU software. Remote graphics and GPU virtualization were also on tap. Finally, towards the end of the keynote, the company revealed its first appliance, the NVIDIA GRID VCA. The culmination of NVIDIA's GRID and GPU virtualization technology, the VCA is a device that hosts up to 16 virtual machines, each of which can tap into one of 16 Kepler-based graphics processors (eight cards with two GPUs each) to fully hardware-accelerate software running on the VCA. Three new mobile Tegra parts and two new desktop graphics processors were also hinted at, with improvements to power efficiency and performance.
On the desktop side of things, NVIDIA's roadmap included two new GPUs. Following Kepler, NVIDIA will introduce Maxwell and Volta. Maxwell will feature a new virtualized memory technology called Unified Virtual Memory. This tech will allow both the CPU and GPU to read from a single (virtual) memory store. Much as with the promise of AMD's Kaveri APU, Unified Virtual Memory should result in speed improvements for heterogeneous applications because data will not have to be copied to and from the GPU and CPU in order to be processed. Server applications will especially benefit from the shared memory tech. NVIDIA did not provide details, but from the sound of it, the CPU and GPU will each continue to write to their own physical memory, with a layer of virtualized memory on top that allows the two (or more) different processors to read from each other's memory store.
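To be clear, NVIDIA gave no implementation details; the following Python analogy (not CUDA code, and purely illustrative) only shows the win being claimed. With discrete memories, data crosses the bus twice per round trip, while a unified address space removes the staging copies entirely:

```python
# Discrete memories today: explicit copies in both directions.
host = list(range(100_000))            # CPU-side allocation
device = host.copy()                   # "host -> device" transfer
device = [x * 2 for x in device]       # the "GPU kernel" runs
host = device.copy()                   # "device -> host" transfer

# Unified virtual memory: one logical allocation that both
# processors dereference, so no staging copies are needed.
shared = list(range(100_000))
shared[:] = [x * 2 for x in shared]    # kernel writes in place

assert host == shared                  # same result, two fewer copies
```

For large working sets, those two eliminated transfers are the entire benefit: the computation is unchanged, but the PCIe round trip disappears from the critical path.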
Following Maxwell, Volta will be a physically smaller chip with more transistors (likely on a smaller process node). In addition to power efficiency improvements over Maxwell, it steps up memory bandwidth significantly. NVIDIA will use TSV (through-silicon via) technology to physically mount the graphics DRAM chips over the GPU (attached to the same silicon substrate electrically). According to NVIDIA, this new TSV-mounted memory will achieve up to 1 terabyte per second of memory bandwidth, a notable increase over existing GPUs.
NVIDIA continues to pursue the mobile market with its line of Tegra chips that pair an ARM CPU, NVIDIA GPU, and SDR modem. Two new mobile chips called Logan and Parker will follow Tegra 4. Both new chips will support the full CUDA 5 stack and OpenGL 4.3 out of the box. Logan will feature a Kepler-based graphics processor on the chip that can do "everything a modern computer ought to do," according to NVIDIA. Parker will have a yet-to-be-revealed graphics processor (a Kepler successor). This mobile chip will utilize 3D FinFET transistors. It will pack a greater number of transistors into a smaller package than previous Tegra parts (it will be about the size of a dime), and NVIDIA also plans to ramp up the frequency to wrangle more performance out of the mobile chip. NVIDIA has stated that Logan silicon should be completed towards the end of 2013, with the mobile chips entering production in 2014.
Interestingly, Logan has a sister chip that NVIDIA is calling Kayla. This mobile chip is capable of running ray tracing applications and features OpenGL geometry shaders. It can support GPGPU code and will be compatible with Linux.
NVIDIA has been pushing CUDA for several years now, and the company has seen respectable adoption, growing from a single Tesla supercomputer in 2008 to its graphics cards being used in 50 supercomputers, with 500 million CUDA processors on the market. There are now reportedly 640 universities working with CUDA and 37,000 academic papers on it.
Finally, NVIDIA's hinted-at new product announcement was the NVIDIA GRID VCA, a GPU virtualization appliance that hooks into the network and can deliver up to 16 virtual machines running independent applications. These GPU-accelerated workspaces can be presented to thin clients over the network by installing the GRID client software on users' workstations. The specifications of the GRID VCA are rather impressive, as well.
The GRID VCA features:
- 2 x Intel Xeon processors with 16 threads each (32 total threads)
- 192GB to 384GB of system memory
- 8 Kepler-based graphics cards, with two GPUs each (16 total GPUs)
- 16 x GPU-accelerated virtual machines
The GRID VCA fits into a 4U case. It can deliver remote graphics to workstations and is allegedly fast enough to deliver GPU-accelerated software that feels equivalent to running it on the local machine (at least over LAN). The GRID Visual Computing Appliance will come in two flavors at different price points. The first will have 8 Kepler GPUs with 4GB of memory each, 16 CPU threads, and 192GB of system memory for $24,900. The other version will cost $34,900 and features 16 Kepler GPUs (4GB memory each), 32 CPU threads, and 384GB of system memory. On top of the hardware cost, NVIDIA is also charging licensing fees. While both models support an unlimited number of client devices, the licenses cost $2,400 and $4,800 per year, respectively.
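Using only the prices quoted above (the breakdown itself is my own arithmetic, not NVIDIA's), the per-GPU math is worth a quick look; note that the yearly license works out identically per GPU on both models:

```python
# List prices and GPU counts as announced at the keynote.
configs = {
    "8-GPU VCA":  {"hw": 24_900, "gpus": 8,  "license_per_year": 2_400},
    "16-GPU VCA": {"hw": 34_900, "gpus": 16, "license_per_year": 4_800},
}

for name, c in configs.items():
    hw_per_gpu = c["hw"] / c["gpus"]
    lic_per_gpu = c["license_per_year"] / c["gpus"]
    print(f"{name}: ${hw_per_gpu:,.2f}/GPU hardware, "
          f"${lic_per_gpu:,.0f}/GPU/year license")
# 8-GPU VCA: $3,112.50/GPU hardware, $300/GPU/year license
# 16-GPU VCA: $2,181.25/GPU hardware, $300/GPU/year license
```

So the larger box is meaningfully cheaper per GPU on hardware, while the licensing scales linearly with GPU count.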
Overall, it was an interesting keynote, and the proposed graphics cards look to be offering some unique and necessary features that should help hasten the day of ubiquitous general purpose GPU computing. The Unified Virtual Memory was something I was not expecting, and it will be interesting to see how AMD responds. AMD is already promising shared memory in its Kaveri APU, but I am interested to see the details of how NVIDIA and AMD will accomplish shared memory with dedicated graphics cards (and whether CrossFire/SLI setups will all have a single shared memory pool).
Stay tuned to PC Perspective for more GTC 2013 Coverage!
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 18, 2013 - 09:10 PM | Scott Michaud
Tagged: GTC 2013, nvidia
We just received word from Tim Verry, our GTC correspondent and news troll, about his first kick at the conference. This... is his story.
Graphics card manufacturer NVIDIA is hosting its annual GPU Technology Conference (GTC 2013) in San Jose, California this week. PC Perspective will be roaming the exhibit floor and covering sessions as NVIDIA and its partners discuss upcoming graphics technologies, GPGPU, programming, and a number of other low-level computing topics.
The future... is tomorrow!
A number of tech companies will be on site and delivering presentations to show off their latest Kepler-based systems. NVIDIA will deliver its keynote presentation tomorrow for the press, financial and industry analysts, and business partners to provide a glimpse at the green team's roadmap throughout 2013 - and maybe beyond.
We cannot say for certain what NVIDIA will reveal during its keynote; but, since we have not been briefed ahead of time, we are completely free to speculate! One near-certainty, I think, is the official launch of the Kepler-based K6000 workstation card. While I do not expect to see Maxwell, we could possibly see a planned refresh of the Kepler-based components with some incremental improvements: I predict power efficiency over performance. Perhaps we will receive a cheaper Titan-like consumer card towards the end of 2013? Wishful thinking on my part? A refresh of the GK104 architecture would be nice to see as well, even if actual hardware will not show up until next year. I expect NVIDIA will react to whatever plans AMD has and decide whether it is in their interest to match them or not.
I do expect to see more information on GRID and Project SHIELD, however. NVIDIA has reportedly broadened the scope of this year's conference to include mobile sessions: expect Tegra programming and mobile GPGPU goodness to be on tap.
It should be an interesting week of GPU news. Stay tuned to PC Perspective for more coverage as the conference gets underway.
What are you hoping to see from NVIDIA at GTC 2013?
Subject: General Tech | March 18, 2013 - 02:23 PM | Jeremy Hellstrom
Tagged: nvidia, hack, GTX 690, K5000, K10, quadro, tesla, linux
It will take a bit of work with a soldering iron, but Hack a Day has posted an article covering how to mod one of the GPUs on a GTX 690 into identifying itself as either a Quadro K5000 or Tesla K10. More people will need to apply this mod and test it to confirm that the GPU's performance actually matches, or at least compares to, the professional-level cards, but the ID string is definitely changed to match one of those two much more expensive GPUs. They also believe a similar mod could be applied to the new TITAN graphics card, as it is electrically similar to the GTX 690. Of course, if things go bad during the modification you could kill a $1000 card, so do be careful.
"If hardware manufacturers want to keep their firmware crippling a secret, perhaps they shouldn’t mess with Linux users? We figure if you’re using Linux you’re quite a bit more likely than the average Windows user to crack something open and see what’s hidden inside. And so we get to the story of how [Gnif] figured out that the NVIDIA GTX690 can be hacked to perform like the Quadro K5000. The thing is, the latter costs nearly $800 more than the former!"
Here is some more Tech News from around the web:
- The TR Podcast 130: A series of grunts about convertible tablets
- Microsoft updates its Kinect for Windows SDK @ The Inquirer
- Asustek to launch new Intel-based smartphone in June @ DigiTimes
- The 2013 Top 7 Best Linux Distributions for You @ Linux.com
- Watch out, office bods: A backdoor daemon lurks in HP LaserJets @ The Register
Subject: General Tech, Storage | March 18, 2013 - 12:51 PM | Jeremy Hellstrom
15.6" Dell Inspiron 15R Special Edition Core i5 Laptop w/2GB Radeon HD 7730M, Backlit Keyboard, 6GB RAM for $549.99 with free shipping (normally $799.99 - use $150 Coupon Code: 2Q?XNXR2DXQ13G).
23.6" HP Spectre ONE 23-e010se Core i5 Slim All-in-One PC w/TrackPad for $974.99 with free shipping (normally $1,299.99 - use coupon code: DT2617).
90" Sharp AQUOS LC-90LE745U 3D 1080p 120Hz LED HDTV + Free Wall Mount for $7,390 with free shipping (normally $10,000 - use coupon code: FREEMOUNT).
Two (2) Dell UltraSharp U2412M IPS LCD Monitors with Dual Monitor Stand for $594.99 with free shipping (normally $699.99 - use coupon code: 6DBNK$ZJLR$L4J).
128GB OCZ Vertex 4 SATA III Solid State Drive for $116.99 with free shipping (normally $129.99 - use coupon code: VZQG7WPT?PJ4C4).
Audio-Technica Portable Stainless Steel Headphones for $109 with free shipping (normally $249.99 - use coupon code: VMESAVESU20).
Subject: General Tech | March 16, 2013 - 11:36 PM | Scott Michaud
Tagged: nvidia, tomb raider
The last month has been good to PC gamers: from StarCraft, to SimCity, to Tomb Raider, all with the promise of BioShock Infinite just around the corner. We are being dog-piled by one bulky release after another... most of which we are theoretically able to play.
Of course, this is a call to action for GPU driver engineers. The software required to make your video card run is extremely complex, with graphics instructions being compiled and interpreted at runtime for routinely shifting architectures. Performance increases are often measured in double-digit percentages, albeit for some set "X" of components in some set "Y" of games.
GeForce 314.14 beta drivers launched early in the month with decent performance increases, particularly for setups with SLI-paired GTX 680s. Tomb Raider fans on NVIDIA and Intel hardware found themselves quite left out, with the reboot of the franchise doing everything but rebooting their PCs.
Now, two weeks later, NVIDIA has released yet another beta driver, dubbed 314.21, aimed squarely at Tomb Raider. Performance is claimed to be an average of 45% higher than with previous versions, with some configurations seeing increases upwards of 60%. The delay was allegedly caused by the hardware developer not receiving the game code with enough time before launch to create the updates.
If you are a Tomb Raider player, check out the drivers at NVIDIA's website.
Subject: General Tech | March 16, 2013 - 03:08 PM | Tim Verry
Tagged: piixl, PC, Media Center, htpc, edgecenter
London-based startup PiixL recently launched a new media center PC called the EdgeCenter that attaches to the back of your television via VESA mount to turn any TV into a so-called smart TV. The PC comes in three configurations (Media, Gamer, and Max), each running Windows 8 with increasing levels of hardware performance. The aluminum EdgeCenter chassis will attach to most TVs larger than 32 inches and can extend to bring the optical drive and other front IO ports to the edge of your TV for easy access. The EdgeCenter reportedly offers a quiet cooling system capable of dissipating 500W in a chassis that is (at most) 54mm thick. Users can control it with a traditional mouse, keyboard, or remote, or with gesture-based controls from up to 5 meters away.
The Media Edition offers up an AMD A10 5700 APU with HD 7660D graphics, 1TB of mechanical storage, and 4GB of RAM. The Gamer Edition steps things up a notch with an Intel Core i5 3550 processor, an AMD 7870 2GB graphics card, 2TB of mechanical storage, and 8GB of RAM. Finally, the Max Edition features an Intel Core i7 3770 CPU, an NVIDIA GTX 680 4GB graphics card, a 2TB HDD, a 20GB SLC SSD (Intel SRT), and 16GB of RAM. Not bad at all for a PC that sits behind the TV. Having a PC mounted via VESA mount is not a new concept, but the EdgeCenter looks to pack the most horsepower an OEM has yet managed to cram into such a PC.
All three models support Gigabit Ethernet, USB 3.0, Blu-ray playback, optical and analog audio output, and an SD card slot for getting your media onto the device. The Media Edition EdgeCenter has VGA, HDMI, and DVI video outputs, while the Gamer Edition has DVI, HDMI, and two mini-DisplayPort outputs. Finally, the Max Edition EdgeCenter PC has one DisplayPort, one DVI, and one HDMI port. It is definitely an interesting design with plenty of computing horsepower for gaming and media center needs. PiixL has fitted each model with an 80 Plus Gold power supply and has stated that the PCs are designed with 24/7 operation in mind.
The PiixL EdgeCenter is available for purchase now, but the performance will cost you a lot more than your typical media center PC. The Media Edition, Gamer Edition, and Max Edition PCs start at £720.28, £1,116.76, and £1,513.25 respectively. For US customers that works out to about $1,085.97, $1,683.74, and $2,281.45. And that is the bad news: it offers some impressive hardware, but it is fairly expensive. Hopefully, if the EdgeCenter does well, we will see cheaper versions stateside at some point.
Subject: General Tech | March 15, 2013 - 04:00 PM | PCPer Staff
PNY Prevail Elite 120GB SATA 6Gb/s SSD for $93.09 with Free Shipping (normally $159 - use coupon code VZQG7WPT?PJ4C4).
Samsung Galaxy Tab 2 8GB 7" Tablet (Refurbished) for $149.99 (normally $200).
Apple iPad 3 16GB WiFi Tablet (refurbished) for $399.99.
Subject: General Tech | March 15, 2013 - 01:12 PM | Jeremy Hellstrom
Tagged: Samsung, galaxy s4, exynos 5, bad acting, Android 4.2.1
It is a close race between BlackBerry and Samsung as far as which company provided the most stilted and uncomfortable launch of a new smartphone, but those who survived it managed to pass on details about the brand new phone. We have not seen it dissected yet, nor blended, but we know that inside the phone you will find a Samsung Exynos 5 Octa 5410 8-core processor clocked at 1.8GHz, a PowerVR SGX 544 graphics chip, 2GB of RAM, and 16GB of flash storage, and that it runs Android 4.2.1, similar to the S3. On the outside is a 5" Gorilla Glass 3 Super AMOLED screen at 1920 x 1080 resolution, or 441ppi, which is certainly higher than others but close enough to the limits of a perfect human eye as to make very little real difference.
Connectivity can come through WiFi, Bluetooth, HSPA+ 42Mbps, 4G LTE, and even an infrared transmitter for remote control functions. User interaction sees some new tricks, however: eye-tracking software will scroll webpages and documents as you read through them, and those who despair over smudges on their screens will like the ability to control the phone with a finger hovering over the screen, not quite touching it. It bears two cameras, a 13MP shooter on the back capable of recording at quite respectable resolutions as well as a 2MP front-facing camera for video calls. On this translated page, the only wired connectivity seems to be a microUSB port, but there is mention of MHL, which can provide HDMI out, or you might be able to use the infrared transmitter to send your pictures and movies to another device. Charging can be done wirelessly via Qi in theory, though that did not work so well during the demonstration. You can follow the various links for a bit more detail, but until a reviewer can get a Galaxy S4 in hand to benchmark it and perhaps tear it apart, we don't know exactly how this phone will fare against the competition.
"Samsung Galaxy S4 will be available from the second quarter globally including the US market, partnering with telecom carriers such as AT&T, Sprint, T-Mobile, Verizon Wireless, as well as US Cellular and Cricket, Samsung said. In Europe, Samsung Galaxy S4 is partnering with global mobile operators such as Deutsche Telecom, EE, H3G, Orange, Telenor, Telia Sonera, Telefonica, and Vodafone."
Here is some more Tech News from around the web:
- Stealing cars and ringing doorbells with radio @ Hack a Day
- Enabling an unused touchscreen overlay on a consumer LCD @ Hack a Day
- HOT SWEATY RACKS blamed for Outlook.com, Hotmail MELTDOWN @ The Register
- Roxio Game Capture HD Pro @ LanOC Reviews
- Google Reader Alternatives for Android & iOS @ Techgage
- Jabra And NikKTech Joint Giveaway