Subject: General Tech | November 28, 2018 - 01:46 PM | Jeremy Hellstrom
Tagged: arm, amd, AWS, Annapurna Labs, graviton
You may have never heard of Graviton before, but chances are you've interacted with one on Amazon. The current chip, which powers many AWS instances, is based on a Cortex-A72 design, runs at 2.3 GHz, and was almost designed by AMD. Instead, AMD focused on its Zen design and was not able to commit enough resources to the development of the ARM chip, which is why Amazon chose to buy Annapurna Labs outright and have the chip designed in-house. We did see that AMD ARM chip, the A1100, which did not see much market success.
There is quite a story behind this, catch up on it over at The Register.
"Up until early 2015, Amazon and AMD were working together on a 64-bit Arm server-grade processor to deploy in the internet titan's data centers. However, the project fell apart when, according to one well-placed source today, "AMD failed at meeting all the performance milestones Amazon set out.""
Here is some more Tech News from around the web:
- Sci-Hub: Breaking Down The Paywalls @ Hackaday
- Windows To Go: How to Install and Run Windows 10 from a USB Drive @ Techspot
- Microsoft Warns Of Two Apps That Installed Root Certificates Then Leaked the Private Keys @ Slashdot
- Sennheiser's HeadSetup software is vulnerable to MITM attacks @ The Inquirer
- AKRacing Summit Desk @ Kitguru
Subject: General Tech | October 5, 2017 - 10:39 AM | Alex Lustenberg
Tagged: Zotac Zbox, Z370 Godlike, VROC, video, usb 3.2, Samsung Odyssey, ryzen, PS2000e, podcast, Pixel 2 XL, Pixel 2, Pinnacle, msi, lumberyard, Intel, Grado, google, Glaive, cryorig h5 ultimate, corsair, Cooler Master Cosmos C700P, AWS, apple, amd, a11
PC Perspective Podcast #470 - 10/05/17
Join us for discussion on Intel VROC, AMD TR RAID, Google Pixel 2, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano
Peanut Gallery: Alex Lustenberg
Program length: 1:41:19
Week in Review:
News items of interest:
1:28:10 Zotac steps up their Zbox game
Hardware/Software Picks of the Week
Subject: General Tech | February 9, 2016 - 05:11 PM | Scott Michaud
Tagged: amazon, AWS, game engine
Another video game engine has entered the world, this time from Amazon. It is basically a fork of CryEngine that they purchased the rights to sub-license. Amazon states that their engine will diverge over time, as they modify it in-house for licensees and their internal game studio, Amazon Game Studios. It is licensed for free, with full source access, but it has a few restrictions.
The market is currently dominated by a variety of offerings with different business models. Unreal Engine 4 is free to use, but takes a portion of revenue beyond a grace amount. CryEngine is available on a relatively cheap subscription, but has no royalty requirements. Unigine offers a few lump-sum options, starting at almost a grand and a half. Unity has a few options, from a cut-down free version, to a relatively expensive subscription, to lump-sum payments. Finally, at least for this list, Source 2 is completely free, with the only requirement that published games must be available on Steam at launch.
That last one, Source 2, is basically the business model that Amazon chose with their new Lumberyard engine. The difference is that, instead of requiring games to be published at a certain retailer, they require that games use Amazon Web Services for online interactions, like multiplayer and cloud, unless the developer maintains their own servers. I'm not exactly sure what that distinction ("If you own and operate your own private servers") allows, but I'd assume that Microsoft Azure and Google Cloud are big no-nos. On the other hand, single-player experiences and games with local multiplayer, assuming neither has “cloud” features, are completely free to make.
While it would be nice to have a purely open source offering that can compete with these proprietary engines, developers should be able to find a suitable option. Each seems to ask for something slightly different, and they are very permissive otherwise.
Subject: General Tech | September 24, 2015 - 12:57 PM | Jeremy Hellstrom
Tagged: amazon, AWS, dynamoDB
It has not been a good week for internet users, with Skype suffering major outages and AWS-based services such as Tinder and Netflix going down Sunday and experiencing issues again today. The Register takes you through what caused the outage in this quick article about Amazon Web Services and DynamoDB.
As with other cloud providers, the database is spread out over the globe, with DynamoDB tables split into partitions which are not necessarily close geographically. The membership of each table's partitions is stored on metadata servers, which stitch the scattered partitions into a seamless interface for the end user ... when all is well. In this case the metadata servers were responding too slowly for the tables to function, which resulted in the storage servers re-querying their partition memberships, and that extra traffic generated enough load to bring down AWS.
"Picture a steakhouse in which the cooks are taking so long to prepare the food, the side dishes have gone cold by the time the waiters and waitresses take the plates from the chef to the hungry diners. The orders have to be started again from scratch, the whole operation is overwhelmed, the chef walks out, and ultimately customers aren't getting fed."
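The feedback loop described above can be sketched as a toy model: every membership lookup against the slow metadata service times out, every timeout becomes a retry on top of the steady-state traffic, and the load only grows. All names and numbers here are illustrative, not Amazon's actual parameters.

```python
def simulate(rounds=10, base_queries=100, timeout=1.0, slow_response=1.5):
    """Toy model of the retry storm: storage servers whose membership
    lookups time out retry next round, on top of the normal load."""
    pending = base_queries
    load_history = []
    for _ in range(rounds):
        # Every query against the slow metadata service times out...
        timed_out = pending if slow_response > timeout else 0
        # ...and each timeout triggers a retry next round, stacked on
        # top of the steady-state traffic, so load keeps climbing.
        pending = base_queries + timed_out
        load_history.append(pending)
    return load_history

history = simulate()
print(history)  # load grows every round instead of settling
```

In this simplified model the load grows without bound as long as responses stay slower than the timeout, which is the essence of how a latency blip snowballed into an outage.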
Here is some more Tech News from around the web:
- GreenDispenser malware threatens to take all your dosh from Linux ATMs @ The Inquirer
- iOS 9 security blooper lets you BYPASS PINs, eye up photos, contacts @ The Register
- Nintendo joins Khronos vid API standards body @ The Register
Subject: General Tech, Graphics Cards, Systems | November 5, 2013 - 09:33 PM | Scott Michaud
Tagged: nvidia, grid, AWS, amazon
Amazon Web Services allows customers (individuals, organizations, or companies) to rent servers of various specifications to match their needs. Many websites are hosted in their data centers, largely because you can purchase different (or multiple) servers when your traffic varies widely.
I, personally, sometimes use it as a game server for scheduled multiplayer events. The traditional method is spending $50-80 USD per month on a... decent... server running all day, every day, and using it a couple of hours per week. With Amazon EC2, we hosted a 200-player event (100 vs 100) by purchasing a dual-Xeon server (ironically the fastest single-threaded instance) connected to Amazon's internet backbone by 10 Gigabit Ethernet. All expenses considered, this server cost just under $5 per hour. It was not much of a discount, but it ran like butter.
This leads me to today's story: NVIDIA GRID GPUs are now available at Amazon Web Services. Both companies hope their customers will use (or create services based on) these instances. Applications they expect to see are streamed games, CAD and media creation, and other server-side graphics processing. These Kepler-based instances, named "g2.2xlarge", will be available alongside the older Fermi-based Cluster Compute Instances ("cg1.4xlarge").
It is also noteworthy that the older Fermi-based Tesla servers are about 4x as expensive. GRID GPUs are based on GK104 (or GK107, but those are not available on Amazon EC2) and not the more compute-intensive GK110. It would probably be a step backwards for customers intending to perform GPGPU workloads for computational science or "big data" analysis. The newer GRID systems do not have 10 Gigabit Ethernet, either.
So what does it have? Well, I created an AWS instance to find out.
Its CPU is advertised as an Intel E5-2670 with 8 threads and 26 Compute Units (CUs). This is particularly odd as that particular CPU is eight-core with 16 threads; it is also usually rated by Amazon at 22 CUs per 8 threads. This made me wonder whether the CPU is split between two clients or if Amazon disabled Hyper-Threading to push the clock rates higher (and ultimately led me to just log in to an instance and see). As it turns out, HT is still enabled and the processor registers as having 4 physical cores.
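On a Linux instance, the same logical-versus-physical-core check can be done by parsing /proc/cpuinfo: count "processor" entries for logical threads and distinct (physical id, core id) pairs for physical cores. The excerpt below is a trimmed, made-up 2-core/4-thread sample just to keep the sketch self-contained; it is not output from the actual g2.2xlarge instance.

```python
# Hypothetical, trimmed /proc/cpuinfo excerpt for a 2-core/4-thread slice.
SAMPLE_CPUINFO = """\
processor : 0
physical id : 0
core id : 0

processor : 1
physical id : 0
core id : 1

processor : 2
physical id : 0
core id : 0

processor : 3
physical id : 0
core id : 1
"""

def count_cores(cpuinfo_text):
    """Return (logical threads, physical cores) from cpuinfo-style text."""
    logical = 0
    physical = set()
    phys_id = None
    for line in cpuinfo_text.splitlines():
        if line.startswith("processor"):
            logical += 1
        elif line.startswith("physical id"):
            phys_id = line.split(":")[1].strip()
        elif line.startswith("core id"):
            core_id = line.split(":")[1].strip()
            # Two threads sharing a (socket, core) pair means Hyper-Threading.
            physical.add((phys_id, core_id))
    return logical, len(physical)

print(count_cores(SAMPLE_CPUINFO))  # (4, 2): 4 threads on 2 physical cores
```

When the thread count is exactly double the distinct core count, as in this sample, HT is enabled, which matches what the instance reported.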
The GPU was slightly more... complicated.
The NVIDIA control panel apparently does not work over Remote Desktop, and the GPU registers as a "Standard VGA Graphics Adapter". Actually, two are available in Device Manager, although one has the yellow exclamation mark of driver woe (random integrated graphics that wasn't disabled in BIOS?). GPU-Z was not able to pick much up from it, but it was of some help.
Keep in mind: I did this without contacting either Amazon or NVIDIA. It is entirely possible that the OS I used (Windows Server 2008 R2) was a poor choice. OTOY, as a part of this announcement, offers Amazon Machine Images (AMIs) for Linux and Windows installations integrated with their ORBX middleware.
I spot three key pieces of information: The base clock is 797 MHz, the memory size is 2990 MB, and the default drivers are Forceware 276.52 (??). The core and default clock rate, GK104 and 797 MHz respectively, are characteristic of the GRID K520 GPU with its 2 GK104 GPUs clocked at 800 MHz. However, since the K520 gives each GPU 4GB and this instance only has 3GB of vRAM, I can tell that the product is slightly different.
I was unable to query the device's shader count. The K520 (similar to a GeForce 680) has 1536 per GPU which sounds about right (but, again, pure speculation).
I also tested the server with TCPing to measure its networking performance versus the cluster compute instances. I did not do anything like Speedtest or Netalyzr. With a normal cluster instance I achieve about 20-25ms pings; with this instance I was more in the 45-50ms range. Of course, your mileage may vary, and this should not be used as any official benchmark. If you are considering using the instance for your product, launch an instance and run your own tests. It is not expensive. Still, it seems to be less responsive than Cluster Compute instances, which is odd considering its intended gaming usage.
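The TCP connect timing that TCPing performs can be approximated in a few lines of Python: time how long a TCP handshake takes to complete. The sketch below points at a local listener so it runs anywhere; swap in your instance's hostname and an open port to measure a real server.

```python
import socket
import time

def tcp_ping(host, port, timeout=2.0):
    """Time one TCP handshake in milliseconds, the same idea TCPing uses."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection succeeded; we only care about the elapsed time
    return (time.perf_counter() - start) * 1000.0

# Self-contained example: spin up a local listener to ping.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
rtt_ms = tcp_ping("127.0.0.1", port)
server.close()
print(f"TCP connect time: {rtt_ms:.2f} ms")
```

Averaging several handshakes (and discarding the first, which may include DNS resolution when you pass a hostname) gives a steadier number than a single sample.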
Regardless, now that Amazon has picked up GRID, we might see more services (be they consumer or enterprise) that utilize this technology. The new GPU instances start at $0.65/hr for Linux and $0.767/hr for Windows (excluding extra charges like network bandwidth) on demand. As always with EC2, if you will use these instances a lot, you can get reduced rates if you pay a fee upfront.