Subject: Editorial, Processors | March 12, 2015 - 08:29 PM | Tim Verry
Tagged: Xeon D, xeon, servers, opinion, microserver, Intel
Intel dealt a blow to AMD and ARM this week with the introduction of the Xeon Processor D Product Family of low power server SoCs. The new Xeon D chips use Intel’s latest 14nm process and top out at 45W. The chips are aimed at low power high density servers for general web hosting, storage clusters, web caches, and networking hardware.
Currently, Intel has announced two Xeon D chips, the Xeon D-1540 and Xeon D-1520. Both chips consist of two dies inside a single package. The main die uses a 14nm process and holds the CPU cores, L3 cache, DDR3 and DDR4 memory controllers, networking controller, PCI-E 3.0, and USB 3.0, while a secondary die on a larger (but easier to implement) manufacturing process hosts the higher-latency I/O that would traditionally sit on the southbridge, including SATA, PCI-E 2.0, and USB 2.0.
In all, a fairly typical SoC setup from Intel. The specifics are where things get interesting, however. At the top end, Xeon D offers eight Broadwell-based CPU cores (with Hyper-Threading for 16 total threads) clocked at 2.0 GHz base and 2.5 GHz max all-core Turbo (2.6 GHz on a single core). The cores are slightly more efficient than Haswell, especially in this low power setup. The eight cores can tap into 12MB of L3 cache as well as up to 128GB of registered ECC memory (or 64GB unbuffered and/or SODIMMs) in DDR3 1600 MHz or DDR4 2133 MHz flavors. Xeon D also features 24 PCI-E 3.0 lanes (which can be split into as many as six x4 links, an x16+x8 configuration, or other combinations), eight PCI-E 2.0 lanes, two 10GbE connections, six SATA III 6.0 Gbps channels, four USB 3.0 ports, and four USB 2.0 ports.
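As a back-of-the-envelope illustration (not Intel's validated bifurcation list), here is a small Python sketch that enumerates the arithmetic ways 24 lanes could be carved into x16/x8/x4 links; both the x16+x8 and six-x4 configurations mentioned above appear among them:

```python
from itertools import combinations_with_replacement

# Toy enumeration: ways to partition 24 PCI-E 3.0 lanes into x16/x8/x4 links.
# Illustrative only -- real bifurcation options depend on the SoC's root-port
# layout, not on pure arithmetic.
LINK_WIDTHS = (16, 8, 4)
TOTAL_LANES = 24

def bifurcations(total=TOTAL_LANES, widths=LINK_WIDTHS):
    configs = set()
    for n in range(1, total // min(widths) + 1):
        for combo in combinations_with_replacement(widths, n):
            if sum(combo) == total:
                configs.add(tuple(sorted(combo, reverse=True)))
    return sorted(configs)

for cfg in bifurcations():
    print(" + ".join(f"x{w}" for w in cfg))
```

Six arithmetic splits come out, from x16+x8 down to six x4 links; which of those the silicon actually exposes is up to Intel.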
All of this hardware is rolled into a part with a 45W TDP. Needless to say, this is a new level of efficiency for Xeons! Intel chose to compare the new chips to its Atom C2000 “Avoton” (Silvermont-based) SoCs, which were also aimed at low power servers and related devices. According to the company, Xeon D offers up to 3.4 times the performance and 1.7 times the performance-per-watt of the top-end Atom C2750 processor. Keeping in mind that Xeon D uses approximately twice the power of Atom C2000, things still look good for Intel, since you are getting more than twice the performance from a more power efficient part. Further, while the TDPs are much higher,
Intel has packed Xeon D with a slew of power management technology including Integrated Voltage Regulation (IVR), an energy efficient turbo mode that will analyze whether increased frequencies actually help get work done faster (and if not will reduce turbo to allow extra power to be used elsewhere on the chip or to simply reduce wasted energy), and optional “hardware power management” that allows the processor itself to determine the appropriate power and sleep states independently from the OS.
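Intel's two comparison figures against the Atom C2750 are internally consistent: performance-per-watt is performance divided by power, so the implied power ratio falls straight out of them. A quick check:

```python
# Sanity-check Intel's Xeon D vs. Atom C2750 comparison figures.
# Since perf_per_watt = perf / power, power_ratio = perf_ratio / perf_per_watt_ratio.
perf_ratio = 3.4           # up to 3.4x the performance (Intel's figure)
perf_per_watt_ratio = 1.7  # up to 1.7x the performance-per-watt (Intel's figure)

power_ratio = perf_ratio / perf_per_watt_ratio
print(f"Implied power ratio: {power_ratio:.1f}x")  # 2.0x, matching the ~2x power gap
```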
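Intel has not published how the energy efficient turbo heuristic makes its call, but the idea can be sketched as a toy model (hypothetical function and threshold, not Intel's algorithm): only grant a higher frequency bin when throughput actually scaled with the last bump.

```python
# Toy model of an "energy efficient turbo" decision -- purely illustrative,
# not Intel's actual algorithm. The idea: only grant a higher frequency bin
# if throughput actually scaled with the last frequency increase. Memory-bound
# work often doesn't scale, so the power is better spent (or saved) elsewhere.
def should_raise_turbo(prev_freq_ghz, prev_throughput,
                       cur_freq_ghz, cur_throughput,
                       scaling_threshold=0.5):
    """Return True if the throughput gain kept pace with the frequency gain."""
    freq_gain = cur_freq_ghz / prev_freq_ghz - 1.0
    if freq_gain <= 0:
        return False
    perf_gain = cur_throughput / prev_throughput - 1.0
    # Grant more turbo only if at least `scaling_threshold` of the frequency
    # increase showed up as real work. The threshold is a made-up number.
    return perf_gain / freq_gain >= scaling_threshold

# Compute-bound work scales ~1:1 with frequency -> keep boosting.
print(should_raise_turbo(2.0, 100.0, 2.2, 110.0))  # True
# Memory-bound work barely moves -> back off and save the power budget.
print(should_raise_turbo(2.0, 100.0, 2.2, 101.0))  # False
```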
Being server parts, Xeon D supports ECC, PCI-E Non-Transparent Bridging, memory and PCI-E Checksums, and corrected (errata-free) TSX instructions.
Ars Technica notes that Xeon D is strictly single socket; Intel has reserved multi-socket servers for its higher-end and more expensive Xeons (Haswell-EP). Where does the “high density” I mentioned come from, then? By cramming as many Xeon D SoCs (each with its own RAM and I/O) onto small motherboards in rack-mounted cases as possible, of course! It is hard to say just how many Xeon Ds will fit in a 1U, 2U, or even 4U rack-mounted system without seeing the associated motherboards and networking hardware, but Xeon D should fare better than Avoton here thanks to its higher-bandwidth networking links and additional PCI-E lanes. Still, AMD, with SeaMicro's Freedom Fabric and a head start on low power x86 and ARM-based Opteron research, along with other ARM-based companies like AppliedMicro (X-Gene), will keep a slight density advantage (though the Intel chips will be faster per chip).
Which brings me to my final point. Xeon D truly looks like a shot across both ARM's and AMD's bows. Intel is apparently not content with its dominant position in the overall server market and is putting its weight into taking over the low power server market as well, a niche that ARM and AMD in particular have been actively pursuing. Intel is not quite down to the power levels that AMD and the ARM-based companies have reached, but with Xeon coming down to 45W (and Atom-based solutions moving up in performance), the Intel juggernaut is closing in, and I'm interested to see how it all plays out.
Right now, ARM still has the TDP and customization advantage (customers can create custom chips and cores to suit their exact needs), and AMD will be able to leverage its GPU expertise by including processor graphics for a leg up on highly parallel GPGPU workloads. On the other hand, Intel has the better manufacturing process and engineering budget. Xeon D seems to be Intel's first step toward a market it has not really focused on in the past.
With Intel pushing its weight around, where will that leave the little guys that I have been rooting for in this low power high density server space?
Subject: General Tech | May 7, 2014 - 02:33 PM | Jeremy Hellstrom
Tagged: arm, servers, CoreLink, CCN-508, CN-504
ARM has a new chip on the block, the CCN-508. It is capable of combining up to eight 64-bit ARMv8 CPU clusters of four cores apiece, either all ARM Cortex-A53s or all Cortex-A57s, using ARM's AMBA 5 CHI interconnect technology. Those processors can then be attached to a wide variety of what ARM refers to as partners, including up to 24 other AMBA interconnects for other CPUs, DDR3 or DDR4 memory controllers, PCIe, SATA, and 10-40 gigabit Ethernet. So much for ARM just being a mobile processor; check out more at The Register.
"ARM has released more details about the innards of its cache-coherent on-chip networking scheme for use cases ranging from storage to servers to networking – specifically, its CCN-5xx microarchitecture family and its newest member, the muscular CoreLink CCN-508."
Here is some more Tech News from around the web:
- Danger, Will Robinson! Beware the hidden perils of BYOD @ The Register
- Amped Wireless REC15A 802.11ac Wi-Fi Range Extender Review @ Legit Reviews
- Seagate outs 2TB wireless hard drive with support for Android, iOS and Windows 8 @ The Inquirer
- 3D Printing's Success Points to a Rosy Future for Open Hardware @ Linux.com
Subject: General Tech, Systems | January 25, 2014 - 07:42 PM | Scott Michaud
Tagged: Lenovo, IBM, x86, servers
Lenovo will take (or purchase) the x86 torch from IBM in the high-end server and mainframe market, too. The deal is worth $2.3 billion, of which $2 billion will be cash and the remainder will be paid to IBM in stock. IBM walked away from talks with Lenovo last year in a deal that was believed to be similar to this one.
Lenovo, famously, took over IBM's PC business in 2005.
... which is increasingly not IBM.
x86-based servers have been profitable, even for IBM. This is yet another example of a large company with a desire to increase their margins at the expense of overall profits. This is similar to the situation with HP when they considered getting out of consumer devices. Laptops and desktops were still profitable but not as much as, say, an ink cartridge. Sometimes leaving money on the table tells a better story and that is okay. Someone will take it.
Lenovo will also become an authorized reseller of IBM cloud computing and storage solutions (plus some of their software). IBM will continue to operate their server and mainframe businesses based on their own architectures (such as Power and Z/Architecture).
Approximately 7,500 of IBM's current employees will be hired by Lenovo as a part of this agreement. Unfortunately, I do not know how large the affected workforce is in total; 7,500 could be the vast majority of it or only a small fraction. Hopefully this deal will not mean too many layoffs, if any at all.
Subject: General Tech | September 18, 2013 - 12:49 PM | Jeremy Hellstrom
Tagged: amd, arm, Cortex-A57, servers, seattle
DigiTimes spoke with AMD's current server guru about the company's move from providing only x86/64-based processors in its server chips to including ARM cores in the Seattle chip family. These will be the first processors from AMD using 64-bit Cortex-A57 cores, and AMD hopes to sell them to companies that depend on Hadoop or run web hosting services that will benefit from the scalability. As these will be true APUs as well, any application that can be accelerated by a GPU will also greatly benefit from the new design. It is AMD's hope to offer server customers a real choice of architecture in their server rooms, rather than just a choice between competing x86/64 chips.
"Commenting on AMD's decision to make ARM-based processors for servers, corporate vice president and general manager of AMD's server business, Suresh Gopalakrishnan, said that as more server applications will show up in the future, different architectures will provide different advantages to clients. Providing solutions based on market demand will be the major business strategy for AMD's server business, Gopalakrishnan noted."
Here is some more Tech News from around the web:
- Explaining the low level stuff you don’t know about ARM programming @ Hack a Day
- Nvidia announces the Tegra Note Android tablet prototype @ The Inquirer
- Microsoft relents: 'Go ahead, install Windows 8.1 on clean PCs' @ The Register
- IBM Bets Big Again on Linux: $1B for Linux on Power Systems @ Linux.com
- Windows Phone 8 is deemed secure by the US and Canadian governments @ The Inquirer
- Blackberry Z30 Phablet Announced @ Slashdot
Subject: General Tech | September 6, 2013 - 01:26 PM | Jeremy Hellstrom
Tagged: servers, windows server 2012 R2, microsoft, nifty, RDMA
If you play with VMs in a Windows environment, you have probably gotten quite good at using FTP, as that was the easiest way to copy files or even text between two or more of your virtual machines. No more: the new version of Windows Server will have a shared clipboard, allowing you to copy and paste not just text but also files between your VMs. You will still be limited to 64 virtual CPUs, but Microsoft did add Remote Direct Memory Access, which offers a huge boost in speed to your machines and to live migrations. Check out more at The Register.
"If you want to see a TechEd audience break into spontaneous applause – and here I am one-hundred-percent serious – give them something that they really care about. Like a shared clipboard. The people running virtual servers really did interrupt Benjamin Armstrong, Microsoft Hyper-V program manager, to applaud the simple act of being able to cut and paste text or files between VMs."
Here is some more Tech News from around the web:
- Windows 8.1 to freeze out small business apps @ The Register
- Microsoft Surface Pro 2 leaks with Intel Haswell and Windows 8.1 @ The Inquirer
- IFA 2013: Highlights from the German technology show @ The Inquirer
- Canon EOS Rebel SL1 Review @ TechReviewSource
- Fire at SK Hynix China plant sends DRAM spot prices higher @ DigiTimes
- Schneier: The US Government Has Betrayed the Internet, We Need To Take It Back @ Slashdot
- Charlie Miller Releases Open Source "Car Sabotage Toolkit" @ DailyTech
Subject: General Tech | June 7, 2013 - 03:18 PM | Jeremy Hellstrom
Tagged: arm, 64bit, servers
With Calxeda and Applied Micro showing off ARM64-based servers at Computex this year, in addition to the existing products coming from Marvell and Dell, DigiTimes' prediction that 64-bit ARM processors will quickly grow in popularity seems to be well founded. It was not too long ago that many thought ARM was fooling itself if it thought it could take server space from AMD and Intel, but it looks like it was right to develop server chips. With low power usage becoming more important than processor overkill and modularity growing in importance, ARM seems poised to perform far beyond expectations. Expect to see a lot more news on ARM64 processors and products over the coming months.
"Although Intel platforms are still the mainstream in the server industry, since 64-bit products have a broader range of applications, and ARM has been aggressively promoting related products, sources from the server industry expect more 64-bit ARM-based products to appear in the market between the end of 2013 and the first quarter of 2014."
Here is some more Tech News from around the web:
- One Year After World IPv6 Launch — Are We There Yet? @ Slashdot
- The best and worst of Computex 2013 @ The Inquirer
- YES, Xbox One DOES need internet, DOES restrict game trading @ The Register
- Interview: Steve Jackson, role-playing game titan @ The Register
- Neteller vs Payoneer - Online Payment and Prepaid Cards @ FunkyKit
- How to Install Linux @ Linux.com
Subject: Systems | April 19, 2013 - 03:56 AM | Tim Verry
Tagged: X-Gene, servers, project moonshot, microserver, hp, arm, Applied Micro Circuits, 64-bit
A recent press release from AppliedMicro (Applied Micro Circuits Corporation) announced that the company’s X-Gene server on a chip technology would be used in an upcoming HP Project Moonshot server.
An HP Moonshot server (expect the X-Gene version to be at least slightly different).
The X-Gene is a 64-bit ARM SoC that combines ARM processing cores with networking and storage offload engines as well as a high-speed interconnect networking fabric. AppliedMicro designed the chip to provide ARM-powered servers that will reportedly reduce the Total Cost of Ownership of running webservers in a data center by reducing upfront hardware and ongoing electrical costs.
The X-Gene chips that will appear in HP’s Project Moonshot servers feature a SoC with eight AppliedMicro-designed 64-bit ARMv8 cores clocked at 2.4GHz, four ARM Cortex-A5 cores for running the Software Defined Network (SDN) controller, and support for storage I/O, PCI-E I/O, and integrated Ethernet (four 10Gb Ethernet links). The X-Gene chips sit on daughter cards that slot into a carrier board whose networking fabric connects all the X-Gene cards (and the SoCs on those cards). Currently, servers using X-Gene SoCs require a hardware switch to connect all of the X-Gene cards in a rack; however, the next-generation 28nm X-Gene chips will eliminate the need for a rack-level hardware switch as well as feature 100Gb networking links.
The X-Gene chips in HP Project Moonshot will use relatively little power compared to Xeon-based solutions. AppliedMicro has stated that the X-Gene chips will be at least twice as power efficient, but it has not officially released power consumption numbers for the chips under load. However, the X-Gene SoCs will use as little as 500mW at idle and 300mW in standby (sleep mode). The 64-bit, quad-issue, out-of-order chips are some of the most powerful ARM processors to date, though they will soon be joined by ARM’s own 64-bit design(s). I think the X-Gene chips are intriguing, and I am excited to see how well they fare in the data center running server applications. ARM has handily taken over the mobile space, but it is still relatively new in the server world. Even so, the 64-bit ARM chips from AppliedMicro (X-Gene) and others are the first step towards ARM being a viable option for servers.
According to AppliedMicro, HP Project Moonshot servers with X-Gene SoCs will be available later this year. You can find the press blast below.
Subject: General Tech, Graphics Cards | March 19, 2013 - 06:52 PM | Tim Verry
Tagged: GTC 2013, tyan, HPC, servers, tesla, kepler, nvidia
Server platform manufacturer TYAN is showing off several of its latest servers aimed at the high performance computing (HPC) market. The new servers range in size from 2U to 4U chassis and hold up to eight Kepler-based Tesla accelerator cards. The new product lineup consists of two motherboards and three barebones systems: the S7055 and S7056 are the motherboards, while the FT77-B7059, TA77-B7061, and FT48-B7055 are the barebones systems.
The TA77-B7061 is the smallest system, with support for two Intel Xeon E5-2600 processors and four Kepler-based Tesla accelerator cards. The FT48-B7055 has similar specifications but is housed in a 4U chassis. Finally, the FT77-B7059 is a 4U system with support for two Intel Xeon E5-2600 processors and up to eight Tesla accelerator cards. The S7055 supports a maximum of four GPUs while the S7056 can support two Tesla cards, though these are bare boards, so you will have to supply your own cards, processors, and RAM (of course).
According to TYAN, the new Kepler-based HPC systems will be available in Q2 2013, though there is no word on pricing yet.
Stay tuned to PC Perspective for further GTC 2013 Coverage!
Subject: General Tech | February 22, 2013 - 07:31 AM | Tim Verry
Tagged: servers, facebook, exabyte, data centers, cold storage, cloud computing
Facebook is planning to construct a new cold storage facility to house archived and less-frequently-accessed media files. The new data center will reside in a new 62,000 sq. ft. building on the company's existing 127-acre property in Prineville, Oregon.
As cold storage, the data center will house servers with up to 3 exabytes of total data capacity. The machines will be in a sleep state the majority of the time, but will be automatically woken to serve up media files when they are accessed on the social network. Because the servers are normally in a low-power sleep state, there will be a slight delay when users request files. According to Oregon Live, Facebook has stated that the delay will range from several milliseconds up to a couple of seconds.
The new cold storage facility will enable Facebook to save a great deal on electricity and hardware wear and tear (though the savings are primarily on the power bill). The company claims that its users upload 350 million photos each day, but that 82% of the social networking site's traffic focuses on a mere 8% of available photos.
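Those two figures make the tiering case almost by themselves: flipped around, 92% of photos serve only 18% of requests. A quick back-of-the-envelope in Python (the assumption that new uploads follow the same hot/cold split is mine, not Facebook's):

```python
# Back-of-the-envelope tiering math from Facebook's stated figures.
hot_photo_fraction = 0.08     # 8% of photos...
hot_traffic_fraction = 0.82   # ...serve 82% of traffic

cold_photo_fraction = 1 - hot_photo_fraction      # 92% of photos
cold_traffic_fraction = 1 - hot_traffic_fraction  # serve just 18% of requests

uploads_per_day = 350_000_000  # Facebook's claimed daily photo uploads
# Assumption (not Facebook's): new uploads eventually follow the same split.
cold_uploads_per_day = uploads_per_day * cold_photo_fraction

print(f"{cold_photo_fraction:.0%} of photos serve {cold_traffic_fraction:.0%} of traffic")
print(f"~{cold_uploads_per_day / 1e6:.0f}M of the daily uploads are cold-tier candidates")
```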
Err, not quite the cold storage Facebook has in mind...
Considering Facebook's existing Prineville data center used a whopping 71 million kilowatt-hours of energy in its first nine months, moving to a new cold storage system for infrequently accessed files is an excellent idea. The photos will still be available, but Facebook will save big on the power bill--a fair compromise for retaining all of those lolcat and meme photos, I think.
The new data center will be rolled out in three phases, each measuring 16,000 sq. ft., in the Prineville facility. The first phase of cold storage servers should be up and running by Q4 2013. There is no estimate on the power savings yet, but it will be interesting to see how beneficial the move will be--and whether other cloud service providers adopt similar policies.
Also read: Amazon Glacier offers cheap long-term storage.
Subject: Processors | December 5, 2012 - 02:58 PM | Tim Verry
Tagged: servers, opteron 4300, opteron 3300, opteron, amd
AMD has officially released a number of new server processors based on its latest Piledriver cores. The new Opteron 4300 and Opteron 3300 series processors replace the 4200 and 3200 series and are aimed at the server market. The 4300 series uses Socket C32 while the Opteron 3300 processors use Socket AM3+. Both are significantly cheaper Piledriver-based parts than the higher-end Opteron 6300 series processors. AMD is aiming these lower cost Opterons at servers hosting websites and internal applications for small to medium businesses.
There are a total of nine new Opteron processors: three in the 3300 series and six in the 4300 series. Both the 3300 and 4300 series Opterons are socket compatible with the previous generation 3200 and 4200 series respectively, allowing for an upgrade path in existing servers. According to AMD, the new Piledriver-based processors have 24% higher performance per watt and use 15% less power than the previous generation parts, based on the SPECpower and SPECint benchmarks. AMD is also touting support for low power 1.25V memory with the new chips.
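It is worth noting that AMD's two claims combine multiplicatively: performance equals performance-per-watt times power, so taken together they imply only a modest gain in absolute performance over the previous generation. A quick check:

```python
# Combining AMD's two claims: perf = (perf / watt) * watts.
perf_per_watt_gain = 1.24  # 24% higher performance per watt (AMD's figure)
power_change = 0.85        # 15% less power (AMD's figure)

implied_perf_change = perf_per_watt_gain * power_change
print(f"Implied absolute performance change: {implied_perf_change:.3f}x")  # ~1.054x
```

In other words, the headline numbers point mostly at efficiency, with roughly a 5% absolute performance uplift.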
The chart below details the specifications and pricing of all the new Opteron parts.
The new AMD Opteron 3300 series includes two quad core and one eight core processor. The parts range from 1.9GHz to 2.6GHz base and have TDPs from 25W to 65W for the lowest and top end parts respectively. AMD-P, AMD-V, and AMD Turbo Core technologies are also supported. As far as memory goes, the 3300 series supports up to four DIMMs and 32GB per CPU. Further, a single x16 HyperTransport 3.0 link rated at 5.2GT/s is included.
Moving up to the 4300 series comes with an increase in price but you also get more cores, more memory, and faster clockspeeds. The Opteron 4300 series has one quad core 4310 EE, three six core CPUs, and two eight core parts. Base clocks range from 2.2GHz to 3.1GHz while boost clocks start at 3.0GHz and go to 3.8GHz. On the low end, the Opteron 4310 EE has a 35W TDP and the top-end 4386 has a 95W TDP. The 4300 series supports dual channel DDR3 1866 memory with up to six DIMMs and 192GB per CPU. Moving up from the 3300 series also gets you two x16 HyperTransport 3.0 links at 6.4 GT/s.
The new server processors are available now, with prices ranging from $174 to $501. In addition, pre-built server options from Supermicro and SeaMicro (SM15000) are currently available, with options from Dell and a number of other companies on the way. The prices seem decent, and these chips could form the basis of a nice 2P server that brings you Piledriver improvements for much less than the relatively expensive 6300 series processors we covered previously.