Subject: Systems | June 21, 2011 - 03:52 AM | Tim Verry
Tagged: supercomputing, mic, larrabee, knights corner, Intel
Silicon Graphics International and Intel recently announced plans to reach exascale levels of computational power within ten years. Exascale computing means machines capable of delivering 1,000 or more petaflops (one exaflop equals 1,000 petaflops), enough horsepower to process quintillions of calculations per second. To put that in perspective, today's supercomputers are only now breaking into single-digit petaflop territory, with the fastest delivering 8.16 petaflops. It manages this thanks to many thousands of eight-core CPUs, while other Top 500 supercomputers are starting to pair CPUs with GPUs in order to achieve petaflop performance.
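The scale gap can be made concrete with a quick back-of-the-envelope calculation (the 8.16 petaflop figure is the Top 500 result cited above; the rest is plain unit conversion):

```python
# Unit conversion: 1 exaflop = 1,000 petaflops = 10**18 floating-point ops/sec
PETA = 10**15
EXA = 10**18

fastest_2011_pflops = 8.16  # fastest supercomputer as of June 2011

# How many copies of 2011's fastest machine would one exaflop equal?
machines_per_exaflop = (EXA / PETA) / fastest_2011_pflops
print(f"~{machines_per_exaflop:.0f} of today's fastest machines per exaflop")
# roughly 123 of them
```

In other words, an exascale system would need the combined throughput of well over a hundred of 2011's top machines.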
The Aubrey Isle Silicon Inside Knights Corner
This partnering of a Central Processing Unit (CPU) with a GPU (or other accelerator) allows high performance supercomputers to achieve much higher performance than CPUs alone could deliver. Intel CPUs power close to 80% of the Top 500 supercomputers; however, the company has come to realize that specialized accelerators can speed up highly parallel computing tasks. Specifically, Intel plans to combine Xeon processors with successors to its Knights Corner Many Integrated Core (MIC) accelerator, alongside other advancements in data transfer and inter-core communication, to reach exascale performance levels. Knights Corner is an upcoming successor to the Knights Ferry and Larrabee processors.
Computer World quotes Eng Lim Goh, the CTO of SGI, in stating that “Accelerators such as graphics processors (GPUs) are currently being used with CPUs to execute more calculations per second. While some accelerators achieve desired results, many are not satisfied with the performance related to the time and cost spent porting applications to work with accelerators.”
Knights Corner will be able to run x86-based software and features 50 cores built on a 22nm manufacturing process. Each core will run four threads at 1.2 GHz, share 8 MB of cache, and be backed by 512-bit vector processing units. Its predecessor, Knights Ferry, is based on 32 cores at 45nm; eight Knights Ferry cards housed in a single Xeon server are capable of 7.4 teraflops. Intel's MIC chips are aimed squarely at NVIDIA's CUDA-programmable and AMD's OpenCL-programmable graphics processors, and are claimed to offer ease of use on top of performance, since they are capable of running traditional x86-based software.
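The programming-model pitch can be illustrated in miniature (Python is used here purely as an analogy, not as MIC code): the same ordinary scalar function runs serially or fanned out across many cores without being rewritten in a separate accelerator language, which is the claimed advantage over porting to CUDA or OpenCL.

```python
from concurrent.futures import ThreadPoolExecutor

# An ordinary scalar function, written once in the "normal" language.
def heavy_kernel(x):
    return x * x + 1

data = list(range(8))

# Serial execution...
serial = [heavy_kernel(x) for x in data]

# ...and the same unmodified function spread across a pool of workers,
# analogous to running existing x86 code on a many-core MIC chip.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(heavy_kernel, data))

assert serial == parallel  # identical results, no porting required
```

The contrast with GPU accelerators is that the parallel version there would typically be a separate kernel in a different language, which is exactly the porting cost SGI's CTO complains about below.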
It looks like CPU-only supercomputers will be seeing more competition from GPU- and MIC-accelerated machines, and will eventually be replaced at the exascale level. AMD and NVIDIA are betting heavily on their OpenCL- and CUDA-programmable graphics cards, while Intel is going with chips that run less specialized but widely used x86 code. It remains to be seen which platform will be victorious; however, the increased competition should hasten the advancement of high performance computing. You can read more about Intel's plans for Many Integrated Core accelerated supercomputing here.
Subject: Processors | June 21, 2011 - 12:51 AM | Tim Verry
Tagged: ulv, sandy bridge, Intel, cpu, celeron
According to Maximum PC, Intel recently revamped its official price list by adding four new ULV processors (ultra-low-voltage chips generally found in ultraportable notebooks). The new additions comprise three Sandy Bridge based chips and one Intel Celeron processor. The three new Sandy Bridge ULV CPUs are the dual core, hyperthreaded Core i5 2557M with 3 MB cache running at 1.7 GHz, the Core i7 2637M with 4 MB cache running at 1.7 GHz, and the Core i7 2677M with 4 MB cache running at 1.8 GHz. Utilizing Turbo Boost, the chips are able to reach 2.7 GHz, 2.8 GHz, and 2.9 GHz respectively. Further, the new Celeron ULV is the dual core Celeron 847 with 2 MB cache running at 1.1 GHz.
The Core i5 2557M carries a price tag of $250, the Core i7 2637M goes for $289, and the Core i7 2677M has an MSRP of $317. You can see the entire price list here. The new Sandy Bridge based ULV processors are able to Turbo Boost by between 1.0 and 1.1 GHz above their base clocks depending on model, which should provide plenty of power for mobile devices while sipping battery power with a TDP (thermal design power) of only 17 watts.
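The Turbo headroom claim is easy to verify from the listed clocks (a quick sketch; base and boost figures are taken from the list above):

```python
# Base and maximum Turbo Boost clocks (GHz) for the new ULV chips
chips = {
    "Core i5-2557M": (1.7, 2.7),
    "Core i7-2637M": (1.7, 2.8),
    "Core i7-2677M": (1.8, 2.9),
}

for name, (base, turbo) in chips.items():
    headroom = round(turbo - base, 1)
    print(f"{name}: +{headroom} GHz of Turbo headroom over base")
# each chip gains between 1.0 and 1.1 GHz over its base clock
```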
Subject: Storage | June 20, 2011 - 12:25 PM | Jeremy Hellstrom
Tagged: ssd, srt, Intel, kingston, cache
It is a common question since the release of the Z68 series of boards: do you really need to shell out the money for an Intel SSD in order to take advantage of Intel Smart Response Technology, which lets you use an SSD of 60GB or less as a cache drive? Techgage took it upon themselves to investigate, comparing the performance improvements to an HDD when using an Intel 20GB 311 SATA II SSD versus a Kingston 64GB SSDNow V+100 SATA II SSD. As happens all too often lately, the answer is not clear cut; the best cache drive depends heavily on the file sizes you commonly deal with.
"When we tested out Intel's 'Smart Response Technology' last month, we liked what we saw. But at $110 for a 20GB SLC SSD, we wondered if a larger, more cost-effective option could still make the best use of the technology. With that, we're pitting Kingston's SSDNow V+100 64GB drive, at $150, against Intel's, to see if we retain SRT's effectiveness."
Here are some more Storage reviews from around the web:
- OCZ Vertex 3 240GB Max IOPS Edition SSD Review @ Legit Reviews
- Intel SSD 320 Series Solid State Drive @ Benchmark Reviews
- OCZ RevoDrive 3 x2 240GB PCIe SSD Quick Look: This Is Going To Be Fast! @ SSD Review
- OCZ Agility 3 240 GB SSD Review @ Legit Reviews
- Patriot Supersonic 64GB USB 3.0 Flash Drive Review @ Hi Tech Legion
- RAIDON GT5630-SB3 USB 3.0 4 Bay Desktop Data Backup Storage Solution @ Real World Labs
- RaidSonic Icy Box IB-NAS6220 HDD Network Mediaserver Review @ Real World Labs
- ASUS BC-12B1ST Internal 12X BD-Combo Drive Review @ Hi Tech Legion
Subject: General Tech | June 20, 2011 - 12:11 PM | Jeremy Hellstrom
Tagged: Intel, mic, larrabee, knights corner, 50 GPGPU
Knights Corner is not exactly Larrabee, but the idea behind both is very similar: a large number of GPGPU cores integrated alongside a CPU. Intel is now using a Xeon core as opposed to a Pentium, with the GPGPU cores hooked up in a method similar to Larrabee's ring of Pentium cores. The design is proven, as Intel has sold units of the previous generation Knights Ferry, and it offers a feature a lot of programmers are going to appreciate: instead of needing to learn a new language like CUDA or OpenCL, standard x86 scalar code is used to program these chips. The architecture is also expected to scale very well, though as ARM recently pointed out, only certain multithreaded applications continue to scale as more cores are added. Drop by The Inquirer for more information.
They will likely be sold as PCIe cards like the Knights Ferry card pictured above.
"CHIPMAKER Intel has announced its second generation hybrid core technology codenamed 'Knights Corner'.
Knights Corner is Intel's second chip in its Many Integrated Core (MIC) chip line and will feature Xeon X86 cores and more than 50 GPGPU cores loosely based on what was previously known as Larrabee. Knights Corner will be fabricated using Intel's 22nm tri-gate process node beginning in 2012, though the firm would not be drawn on the exact core count at this time."
Here is some more Tech News from around the web:
- Japan's 8-petaflop K Computer Is Fastest On Earth @ Slashdot
- Intel admits that Moore's Law is not enough @ The Inquirer
- When WiFi doesn't work: a guide to home networking alternatives @ Ars Technica
- Western Digital Livewire PowerLine AV Kit @ TechwareLabs
- US reveals Stuxnet-style vuln in Chinese SCADA 'ware @ The Register
- Adobe offloads unwanted Linux AIR onto OEMs @ The Register
- Designcord 5 Metre Autorewind Cable Reel Extension Lead Review @ eTeknix
- x264 HD Benchmark 4.0 @ TechARP
- ArtRage: quality digital painting on the cheap @ Ars Technica
- Ultra Simple 360-degree Photo Hack @ Make
- AMD Developer Summit lacks Bulldozer details @ The Inquirer
- Nokia Connections 2011 - Our Expectations @ t-break
- DreamHack Summer Festival kicks off! Day One! @ eTeknix
- Interview with Ziad Matar of Qualcomm @ t-break
- Patriot Xporter XT Rage 32GB Flash Drive Giveaway! @ ThinkComputers
- Weekly Giveaway #2: Foxconn Flaming Blade GTI, Innergie mCube Lite, SteelSeries Spectrum AudioMixer, StarTech ExpressCard eSATA Controller Adapter Card @ eTeknix
Subject: General Tech, Storage | June 16, 2011 - 03:02 PM | Scott Michaud
Tagged: ssd, Intel, enterprise
Intel is currently in the process of releasing its 2011 lineup of solid state drives. A lot of news and product information came out regarding its consumer 300-series and enthusiast 500-series lines; however, things have been pretty quiet regarding its enterprise 700-series products. That changed recently with the release of specifications, thanks to AnandTech's coverage of the German hardware website ComputerBase.de.
And how does it compare to OCZ?
Intel will be releasing two enterprise SSDs: the SATA 3 Gbps based 710 SSD, codenamed Lyndonville, and the PCI Express 2.0 based 720 SSD, codenamed Ramsdale. The SATA-based 710 will feature 25nm MLC-HET flash at capacities of 100, 200, and 300 GB, with read and write speeds of 270/210 MB/s, 35,000/3,300 read and write IOPS at 4KB, and a 64MB cache. The PCIe-based 720 will feature 34nm SLC flash at capacities of 200 and 400 GB, and will be substantially faster than the 710 with read and write speeds of 2200/1800 MB/s, 180,000/56,000 read and write IOPS at 4KB, and a 512MB cache. On the security front, the 710 will offer 128-bit AES encryption, while the 720 will offer 256-bit AES.
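The quoted 4KB IOPS figures can be cross-checked against the sequential numbers with a rough conversion (a sketch only; real-world random workloads will differ):

```python
def iops_to_mbps(iops, block_kb=4):
    """Convert an IOPS rating at a given block size to MB/s."""
    return iops * block_kb / 1024

# Intel 710 (SATA): 35,000 read / 3,300 write IOPS at 4KB
print(f"710 random read:  {iops_to_mbps(35_000):.0f} MB/s")   # ~137 MB/s
print(f"710 random write: {iops_to_mbps(3_300):.1f} MB/s")    # ~12.9 MB/s

# Intel 720 (PCIe): 180,000 read / 56,000 write IOPS at 4KB
print(f"720 random read:  {iops_to_mbps(180_000):.0f} MB/s")  # ~703 MB/s
print(f"720 random write: {iops_to_mbps(56_000):.0f} MB/s")   # ~219 MB/s
```

As expected, random 4KB throughput sits well below the sequential ratings on both drives, and the PCIe-based 720's random figures dwarf anything SATA can carry.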
While there has been no hint at pricing for these drives, Intel is still expected to hit a second quarter release date for its SATA-based 710 SSD. If you are looking for a PCI Express SSD, you will need to be a bit more patient, as the 720 is not expected until the fourth quarter. It will be interesting to see how the Intel vs. OCZ fight for dominance in the PCIe-based SSD space plays out in 2012.
Subject: General Tech | June 16, 2011 - 12:57 PM | Jeremy Hellstrom
Tagged: amd, Intel, nvidia
In some sort of bizarre voyeuristic hardware love/hate triangle, AMD, Intel, and NVIDIA are all semi-intertwined and being observed by Microsoft. Speaking with The Inquirer, AMD's VP of product and platform marketing, Leslie Sobon, stated that there was no chance Intel would attempt to purchase NVIDIA the way AMD did ATI. AMD's purchase was less about the rights to the Radeon series than about taking possession of the intellectual property ATI had built over a decade of creating GPUs; that IP led directly to the APUs AMD has recently released, which will likely become its main products. Intel already has a working architecture that combines GPU and CPU and doesn't need to purchase another company's IP in order to develop that type of product.
There is another reason to purchase NVIDIA, though, one which has very little to do with its discrete graphics card IP and everything to do with Tegra and Fermi, two specialized products that Intel so far has no answer for. A vastly improved and shrunken Atom might be able to push Tegra off of mobile platforms, and perhaps specialized Sandy Bridge CPUs could accelerate computation the way the Fermi products do, but so far there are no solid leads, only speculation.
If you learn more from your failures than your successes then Intel knows a lot about graphics.
"CHIP DESIGNER AMD believes that it is on a divergent path from Intel thanks to its accelerated processor unit (APU) and that Intel buying Nvidia "would never happen"."
Here is some more Tech News from around the web:
- Find Out if Your Passwords Were Leaked by LulzSec Right Here @ Gizmodo
- Adobe patches critical bugs in Flash and Reader @ The Register
- Umi, we hardly knew ye: contemplating the fate of the videophone in 2011 @ Ars Technica
- 'A SHARK attacked my ROBOT', gasps ex-Sun exec @ The Register
- We’ve got a real bone to pick with this mouse @ Hack a Day
- Fun Quotes from the AFDS Media Roundtable @ SemiAccurate
Subject: Motherboards | June 14, 2011 - 06:35 PM | Tim Verry
Tagged: x79, rumor, lga 2011, lga 1366, Intel, cpu
Xbit Labs recently detailed a new rumor concerning Intel's upcoming X79 chipset. According to a leaked document viewed by the site, X79 will support both of Intel's current and upcoming high end processor sockets, LGA 1366 and LGA 2011. What this means for the end user is that they will be able to purchase an X79-based motherboard supporting either Nehalem or Sandy Bridge-E processors, unless motherboard manufacturers decide to splurge and include both sockets on one board, as Asus' concept board shown at Computex 2011 did. While DIY enthusiasts and gamers are unlikely to use these motherboards as an upgrade path to Sandy Bridge-E (a CPU upgrade would likely still necessitate a motherboard upgrade, since both sockets will not be physically present on most boards), IT departments will likely appreciate the continued support of the older LGA 1366 processors on new motherboards, as it will make replacement parts easy to find for high end LGA 1366 based workstations.
On the other hand, manufacturers will benefit the most from the X79 chipset supporting multiple sockets, which reduces their costs. That cost reduction may then trickle down to cheaper prices for end users.
Intel itself is planning to manufacture two X79 motherboards, named the DX79SI and DX79TO, which will support LGA 2011 and LGA 1366 respectively. Xbit Labs reports that the DX79SI board is planned to be a feature-packed, no-compromise LGA 2011 affair, with support for up to 64GB of RAM (eight DIMM slots), three PCI-E 3.0 slots for multi-GPU configurations, 12 SATA ports (six SATA 3 at 6 Gbps, six SATA 2 at 3 Gbps), four USB 3.0 and 14 USB 2.0 ports, 8-channel audio, Wi-Fi and Bluetooth, and two Gigabit Ethernet connections.
In contrast, the DX79TO will feature an LGA 1366 socket and bring two PCI-E 2.0 x16 slots, eight SATA connectors (likely four SATA 3 and four SATA 2), two USB 3.0 ports, 6-channel audio, a single Gigabit Ethernet connection, and DDR3 memory support (there are no details on the exact DIMM configuration yet).
By lowering the cost of supporting two high-end CPU lines and platforms, Intel, motherboard manufacturers, and consumers could all come out ahead in a win-win-win situation, provided the rumor comes to fruition.
Subject: Editorial, Processors, Shows and Expos | June 14, 2011 - 05:09 PM | Ryan Shrout
Tagged: nvidia, Intel, heterogeneous, fusion, arm, AFDS
Before the AMD Fusion Developer Summit started this week in Bellevue, WA, the most controversial speaker on the agenda was Jem Davies, the VP of Technology at ARM. Why would AMD and ARM get together on a stage with dozens of media and hundreds of developers in attendance? There is no partnership between them in terms of hardware or software, so would some kind of major announcement be made about the two companies' future together?
In that regard, the keynote was a bit of a letdown; if you thought there was going to be a merger between them, or a new AMD APU announced with an ARM processor in it, you left disappointed. Instead we got some background on ARM and how the race of processing architectures has slowly dwindled to just x86 and ARM, as well as a few jibes at the competition NOT named AMD.
As is usually the case, Davies described the state of processor technology with an emphasis on power efficiency and the importance of designing with that future in mind. One of the interesting points was shown in regard to the "bitter reality" of core-type performance and the projected DECREASE we will see from 2012 onward due to leakage concerns as we progress to 10nm and even 7nm technologies.
The idea of dark silicon "refers to the huge swaths of silicon transistors on future chips that will be underused because there is not enough power to utilize all the transistors at the same time," according to this article over at physorg.com. As process technology gets smaller, the areas of dark silicon increase, until the portion of the die that can be utilized at any one time might hit as low as 10% in 2020. Because of this, the need to design chips with many task-specific heterogeneous portions is crucial, and both AMD and ARM are on that track.
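A deliberately simplified toy model shows why the usable fraction collapses. The per-node scaling factors below are illustrative assumptions (not Davies' figures): transistor count doubles each shrink while per-transistor power falls by less than half, so under a fixed power budget the lit fraction of the die shrinks every generation.

```python
# Toy model of dark silicon under a fixed chip power budget.
# ASSUMED factors for illustration only: 2.0x transistor growth per node,
# 0.7x power per transistor (0.5x would be needed to keep everything lit).
TRANSISTOR_GROWTH = 2.0
POWER_PER_TRANSISTOR_SCALING = 0.7

usable_fraction = 1.0
for node in ["22nm", "14nm", "10nm", "7nm"]:
    # Full-utilization power grows 2.0 * 0.7 = 1.4x per node, so the
    # fraction that fits in the fixed budget shrinks by 1/1.4 each time.
    usable_fraction /= TRANSISTOR_GROWTH * POWER_PER_TRANSISTOR_SCALING
    print(f"{node}: ~{usable_fraction:.0%} of the die usable at once")
```

Even with these mild assumptions the lit fraction falls to roughly a quarter after four shrinks; with more pessimistic scaling it heads toward the 10% figure cited above, which is the argument for spending the dark area on many task-specific heterogeneous blocks.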
Those companies not on that path today, NVIDIA specifically and Intel as well, were addressed on the below slide when discussing GPU computing. Davies pointed out that if a company has a financial interest in the immediate success of only CPU or GPU then benchmarks will be built and shown in a way to make it appear that THAT portion is the most important. We have seen this from both NVIDIA and Intel in the past couple of years while AMD has consistently stated they are going to be using the best processor for the job.
Amdahl's Law is used in parallel computing to predict the theoretical maximum speedup from multiple processors. Davies reiterated what we have been told for some time: if only 50% of your application can actually BE parallelized, then no matter how many processing cores you throw at it, it can never run more than twice as fast, because the serial half still takes just as long. The heterogeneous computing products of today and the future can address both the parallel and serial computing tasks with improvements in performance and efficiency, and should result in better computing in the long run.
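Amdahl's Law is simple enough to sketch directly (a minimal illustration of the 50%-parallel case described above):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical maximum speedup per Amdahl's Law.

    The serial portion never shrinks, no matter how many cores you add.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With only 50% of the work parallelizable, even a million cores
# cannot push the overall speedup past 2x:
for cores in (2, 8, 64, 1_000_000):
    print(f"{cores:>9} cores -> {amdahl_speedup(0.5, cores):.3f}x")
# the speedup approaches, but never exceeds, 2.0x
```

This is exactly why a heterogeneous chip pairs fast serial cores with wide parallel ones: the serial 50% is attacked with per-core performance, not core count.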
So while we didn't get the major announcement from ARM and AMD that we might have been expecting, the fact that ARM would come up and share a stage with AMD reiterates the message of the Fusion Developer Summit quite clearly: a combined and balanced approach to processing might not be the sexiest but it is very much the correct one for consumers.
Subject: General Tech, Processors | June 14, 2011 - 02:47 AM | Scott Michaud
Tagged: Intel, haswell
Intel’s new processor lines come in two flavors: process shrinks and new architectures. Each revision comes out approximately a year after the prior one, alternating between new architectures (tock) and process shrinks (tick). Sandy Bridge was the most recent new architecture; it will be followed by Ivy Bridge, a process shrink of Sandy Bridge, which in turn will be succeeded by Intel’s newest architecture: Haswell.
I can Haswell?
The instructions added by Intel for the upcoming Haswell architecture are useful for a whole range of applications: image and video processing, face detection, database manipulation, hash generation, and arithmetic in general. As you can see, this revision's additions are quite wide in scope. Keep in mind that the introduction of a new instruction set does not mean programs will be optimized to take advantage of it for some time. When programs do start optimizing for the newer architecture, however, it looks as though Haswell's new offerings will collapse otherwise complicated tasks into a single instruction.
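One example of the kind of operation expected to collapse into a single instruction is a vector "gather", which loads elements from scattered memory locations at once. The sketch below shows the semantics in scalar Python purely as an illustration; the exact instruction names and Haswell's final feature set were not confirmed at the time of writing.

```python
# A "gather" pulls elements from non-contiguous positions in one step.
# Today a CPU issues one load per element; a hardware gather would do
# all of them as a single vector instruction.
def gather(table, indices):
    return [table[i] for i in indices]

pixels = [10, 20, 30, 40, 50, 60, 70, 80]
lookup = [7, 0, 3, 3]               # e.g. a palette or hash-table probe
print(gather(pixels, lookup))       # [80, 10, 40, 40]
```

Indexed lookups like this are the inner loop of image processing, hash tables, and database probes, which is why those workloads appear on the list of beneficiaries above.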
What task would you like to see a speedup on? Comment below.
Subject: Processors, Systems | June 12, 2011 - 08:57 PM | Tim Verry
Tagged: SFF, Intel, htpc, hd, DIY, atom
Habley has recently shown off a new small embedded computer dubbed the SOM-6670E6XX. The new computer is the size of a Post-it note; to be specific, the motherboard measures just 70mm x 70mm. It sports an Atom E600 processor running at 1.0 GHz as well as an integrated GMA 600 graphics core.
The CPU and GPU blend is able to support two displays and pipe an HD video stream to each. Using Media Player Classic Home Cinema 1.5, the computer was able to play a 1080p MPEG4 trailer of X-Men: First Class and an HD FLV version of Spiderwick simultaneously. While playing both films, the CPU sat at around 93% usage with 210 MB of RAM in use under the Windows Embedded 2009 operating system. Playing an HD FLV film trailer alongside an HD YouTube clip again pegged the processor at 93% usage; in this test, however, RAM usage was much higher, at 422 MB. In addition to the SOM-6670 itself, the test system consisted of a SOMB-073 carrier board (which provides the various I/O, including video and audio output, mouse and keyboard input, and SATA ports), 1GB of on-board RAM, and a 5400RPM laptop form factor (2.5") 120GB hard drive.
Including the two monitors, at 1280x768 (over HDMI) and 1920x1080 (SDVO) respectively, the system drew 18 watts during usage. You can see the test system of the small HD-capable computer in action in the video below. What uses do you have in mind for a micro-sized computer such as this?