Intel today made a number of product and strategy announcements that are all coordinated to continue the company’s ongoing “data-centric transformation.” Building off of recent events such as last August’s Data-Centric Innovation Summit but with roots spanning back years, today’s announcements further solidify Intel’s new strategy: a shift from the “PC-centric” model that for decades drove hundreds of billions of dollars in revenue but is now on the decline, to the rapidly growing and ever changing “data-centric” world of cloud computing, machine learning, artificial intelligence, automated vehicles, Internet-connected devices, and the seemingly unending growth of data that all of these areas generate.
Rather than abandon its PC roots in this transition, Intel’s plan is to leverage its existing technologies and market share advantages in order to attack the data-centric needs of its customers from all angles. Intel sees a huge market opportunity when considering the range of requirements “from edge to cloud and back:” that is, addressing the needs of everything from IoT devices, to wireless and cellular networking, to networked storage, to powerful data center and cloud servers, and all of the processing, analysis, and security that goes with it.
Intel’s goal, at least as I interpret it, is to be a ‘one stop shop’ for businesses and organizations of all sizes that are transitioning alongside Intel to data-centric business models and workloads. Sure, Intel will be happy to continue selling you Xeon-based servers and workstations, but it can also address your networking needs with new 100Gbps Ethernet solutions, speed up your storage-bound workloads with Optane SSDs, increase performance and reduce costs for memory-dependent workloads by supplementing DRAM with Optane, and tackle specialized workloads with highly optimized Xeon SKUs and FPGAs. In short, Intel isn’t just the company that makes your processor or server; it’s now (or rather wants to be) the platform that can handle your needs end-to-end. Or, as the company’s recent slogan puts it: “move faster, store more, process everything.”
Subject: Graphics Cards | November 13, 2017 - 10:35 PM | Scott Michaud
Tagged: nvidia, data center, Volta, tesla v100
There have been a few NVIDIA datacenter stories popping up over the last couple of months. A month or so after Google started integrating Pascal-based Tesla P100s into its cloud, Amazon announced Tesla V100s for its rent-a-server service. NVIDIA has also announced Volta-based solutions available or coming from Dell EMC, Hewlett Packard Enterprise, Huawei, IBM, Lenovo, Alibaba Cloud, Baidu Cloud, Microsoft Azure, Oracle Cloud, and Tencent Cloud.
This apparently translates to boatloads of money. Eyeball-estimating from their graph, it looks as though NVIDIA has already made about 50% more from datacenter sales in the first three quarters of fiscal year 2018 than it did in all of last year.
They are seeing supercomputer design wins, too. Earlier this year, Japan announced that it would get back into supercomputing, having lost ground to other nations in recent years, with a giant, AI-focused offering. It turns out that this design will use 4,352 Tesla V100 GPUs to crank out 0.55 ExaFLOPs of (mixed-precision tensor) performance.
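That headline number is easy to sanity check, assuming each Tesla V100 delivers roughly 125 TFLOPs of peak mixed-precision tensor throughput (NVIDIA's published Tensor Core figure; the per-GPU number is my assumption, not stated in the announcement):

```python
# Back-of-the-envelope check on the 0.55 ExaFLOPs claim, assuming
# ~125 TFLOPs of peak tensor throughput per Tesla V100.
gpus = 4352
tflops_per_gpu = 125e12           # peak tensor FLOPs per second, per GPU
total_flops = gpus * tflops_per_gpu
exaflops = total_flops / 1e18     # convert FLOPs to ExaFLOPs
print(f"{exaflops:.2f} ExaFLOPs")  # → 0.54 ExaFLOPs
```

Which lands right at the quoted figure once rounded up.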
As for product announcements, this one isn’t too exciting for our readers, but it should be very important for enterprise software developers. NVIDIA is creating optimized containers for various programming environments, such as TensorFlow and GAMESS, with their recommended blend of driver version, runtime libraries, and so forth, for various generations of GPUs (Pascal and higher). Moreover, NVIDIA claims that they will support them “for as long as they live”. Getting the right container for your hardware is just a matter of filling out a simple form and downloading the blob.
NVIDIA’s keynote is available on UStream, but they claim it will also be uploaded to their YouTube soon.
EPYC makes its move into the data center
Because we traditionally focus and feed on the excitement and build up surrounding consumer products, the AMD Ryzen 7 and Ryzen 5 launches were huge for us and our community. Finally seeing competition to Intel’s hold on the consumer market was welcome and necessary to move the industry forward, and we are already seeing the results of some of that with this week’s Core i9 release and pricing. AMD is, and deserves to be, proud of these accomplishments. But from a business standpoint, the impact of Ryzen on the bottom line will likely pale in comparison to how EPYC could fundamentally change the financial stability of AMD.
AMD EPYC is the server processor that takes aim at the Intel Xeon and its dominant status in the data center market. The enterprise field is a high-margin, high-profit area, and while AMD once had significant share in this space with Opteron, that has essentially dropped to zero over the last 6+ years. AMD hopes to use the same tactic in the data center as it did on the consumer side to shock and awe the industry into taking notice: providing impressive new performance levels while undercutting the competition on pricing.
Introducing the AMD EPYC 7000 Series
Targeting the single- and 2-socket systems that make up ~95% of the data center and enterprise market, AMD EPYC is smartly not trying to punch above its weight class. This offers an enormous opportunity for AMD to take market share from Intel with minimal risk.
Many of the specifications here have been slowly shared by AMD over time, including at the recent financial analyst day, but seeing it placed on a single slide like this puts everything in perspective. In a single socket design, servers will be able to integrate 32 cores with 64 threads, 8x DDR4 memory channels with up to 2TB of memory capacity per CPU, 128 PCI Express 3.0 lanes for connectivity, and more.
Worth noting on this slide, and originally announced at the financial analyst day as well, is AMD’s intent to maintain socket compatibility for the next two generations. Both Rome and Milan, based on 7nm technology, will be drop-in upgrades for customers buying into EPYC platforms today. That kind of commitment from AMD is crucial to regaining the trust of a market that needs those reassurances.
Here is the lineup as AMD is providing it to us today. The model numbers in the 7000 series use the second and third characters as a performance indicator (a 755x will be faster than a 750x, for example) and the fourth character to indicate the generation of EPYC (here, the 1 indicates first gen). AMD has created four different core-count divisions along with a few TDP options to provide choices for all types of potential customers. Though this table might seem a bit intimidating, it is drastically simpler than the Intel Xeon product line that exists today, or that will exist in the future. AMD is offering immediate availability of the top five CPUs in this stack, with the bottom four due before the end of July.
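The naming scheme described above is simple enough to decode mechanically. As a minimal sketch, here is a hypothetical helper that splits a 7000-series model number into the fields the article describes (the function and field names are mine, not any official AMD convention):

```python
# Hypothetical decoder for the EPYC 7000-series naming scheme as the
# article describes it; not based on any official AMD specification.
def decode_epyc_model(model: str) -> dict:
    """Split e.g. '7551' into series, performance, and generation fields."""
    assert len(model) == 4 and model[0] == "7", "expected a 7000-series model"
    return {
        "series": model[0],             # '7' -> EPYC 7000 series
        "performance": int(model[1:3]), # higher means faster (755x > 750x)
        "generation": int(model[3]),    # 1 -> first-generation EPYC
    }

print(decode_epyc_model("7551"))
# {'series': '7', 'performance': 55, 'generation': 1}
```

So under this scheme a 7601 outranks a 7551 on performance, and both are first-generation parts.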
Subject: Editorial | May 10, 2017 - 09:45 PM | Josh Walrath
Tagged: nvidia, earnings, revenues, Q1 2018, Q1, v100, data center, automotive, gpu, gtx 1080 ti
NVIDIA had a monster Q1. The quarter before, the company posted the highest revenue numbers in its history. Q1 can be a slightly more difficult time and is typically the second weakest quarter of the year: the holiday rush is over and the market slows down. For NVIDIA, this was not exactly the case. While NVIDIA made $2.173 billion in Q4 2017, they came remarkably close to that with revenues of $1.937 billion. While that roughly $236 million decline is significant, it is not an unexpected one. In fact, it shows NVIDIA being slightly stronger than expectations.
The past year has shown tremendous growth for NVIDIA. Their GPUs remain strong and they have the highest performing parts at the upper midrange and high end markets. AMD simply has not been able to compete with NVIDIA, much less overcome the company with higher performing parts at the top end. GPUs still make up the largest portion of income that NVIDIA receives. NVIDIA continues to invest in new areas and those investments are starting to pay off.
Automotive is still in the growth stages for the company, but they have successfully pivoted the Tegra division away from the cellphone and tablet markets. NVIDIA continues to support their Shield products, but the main focus looks to be the automotive industry with these high-performance, low-power parts that sport advanced graphics capabilities. Professional graphics continues to be a stronghold for NVIDIA. While it did drop quite a bit from the previous quarter, it is a high-margin area that helps bolster revenues.
The biggest mover over this past year has been the data center. Last year NVIDIA focused on delivering entire solutions to the market as well as their individual GPUs. The past two years have seen them go from essentially no income in this area to a $400 million quarter. This is simply tremendous growth in an area that is still relatively untapped when it comes to GPU compute.
NVIDIA continues to be very aggressive in their product design and introductions. They have simply owned the $300+ range of graphics cards with the GTX 1070, GTX 1080, and the recently introduced GTX 1080 Ti. This is somewhat ignoring the even higher-end Titan Xp that is priced well above most enthusiasts’ budgets. Today they announced the V100 chip, the first glimpse we have of a high-end part built on TSMC’s new 12nm FinFET process. It also features 16 GB of HBM2 memory and a whopping 21 billion transistors in total.
Next quarter looks to be even better than this one, which is a shock because Q2 has traditionally been the slowest quarter of the year. NVIDIA expects around $1.95 billion in revenue (actually increasing from Q1). NVIDIA is also rewarding shareholders with not only a quarterly dividend but also an active share buyback program (which tends to keep share prices healthy). Early last year NVIDIA had a share price of around $30, while today it is trending well above $100.
If NVIDIA keeps this up while continuing to expand in automotive and the data center, it is a fairly safe bet that they will easily top $8 billion in revenue for the year. Q3 and Q4 will be stronger if they continue to advance in those areas while retaining market share in the GPU market. With rumors hinting that AMD will not have a product that tops the GTX 1080 Ti, it is a safe bet that NVIDIA can easily adjust their prices across the board to stay competitive with whatever AMD throws at them.
It is interesting to look back to when AMD was shopping around for a graphics firm and wonder what could have happened. Hector Ruiz was in charge of AMD and tried to put together a deal with NVIDIA. Rumor has it that Huang would not agree to it unless he was made CEO. Hector laughed and talked to ATI, which was more than happy to sell (and cover up some real weaknesses in the company). We all know what happened to Hector and how his policies and actions started the spiral that AMD is only now recovering from. What would it have been like if Jensen had actually become CEO of that merged company?
Subject: General Tech | December 4, 2011 - 10:28 PM | Tim Verry
Tagged: server farm, Internet, data center, cloud, apple
CNet is reporting that Apple is currently considering constructing a new data center outside of Prineville, Oregon. The 31 Megawatt facility would be built on 160 acres outside of the small Oregon town and would join other prominent tech companies’ data centers including those of Facebook, Amazon, and Google.
According to Oregon Live, it is the area’s mild climate (meaning lower cooling costs compared to naturally warmer climates in addition to all the heat from servers), low electricity costs, and certain “rural enterprise zones” that exempt computers and equipment from normal business property taxes. They state that such exemptions could save Apple several million dollars.
Although Apple has so far declined to comment, city officials have said that the company looking to purchase the land for the data center, codenamed “Maverick,” appears to be serious about going through with the purchase. Two major issues stand in the way of Apple building a large data center in the area, however. The first is taxes on its intangible assets: because Apple puts a great deal of stock (er, the other kind :P) in its brand name, trademarks, and patents, it could face additional taxes under the way Oregon’s Department of Revenue taxes data centers. The larger issue, however, lies with power. In order to supply enough electricity to the various data centers in the area (including Apple’s, should they indeed be building one), the Bonneville Power Administration would need to upgrade the Ponderosa Substation, construct an additional substation, and add further transmission lines, because the utility’s transmission capacity to the area is currently nearly maxed out. A 31 Megawatt data center would consume enough electricity to power approximately 22,000 homes, and that kind of capacity is not available in an area where towns are a fifth of that size.
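The homes comparison implies an average household draw of roughly 1.4 kW, a plausible figure for continuous average consumption (that per-home number is my assumption, not something from the article or the utility):

```python
# Back-of-the-envelope check on the "22,000 homes" comparison,
# assuming an average continuous household draw of ~1.4 kW.
facility_mw = 31
avg_home_kw = 1.4
homes = facility_mw * 1000 / avg_home_kw  # MW -> kW, then divide per home
print(f"{homes:,.0f} homes")  # → 22,143 homes
```

Which matches the approximately 22,000 homes quoted, and makes clear why a town a fifth that size strains the local grid.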
The upgrade to the area’s electrical infrastructure would cost nearly $26.5 million and would take almost three years. Jeff Beaman, Member Services Director for the Central Electric Cooperative, believes that after the appropriate upgrades a new data center “seems doable.”
Whether this elusive “Maverick” is indeed Apple, and whether the company decides to build a data center, remains to be seen; however, it is certainly plausible. With Apple moving more services to the Internet, and with iOS adoption increasing thanks to the iPhone being available on all the major US carriers, the company would definitely benefit from having another facility on the other side of the country from its current North Carolina data center, for performance as well as redundancy and stability reasons. What are your thoughts on the reports? Is Apple looking to put more cloud (server horsepower) in your iCloud?