Subject: Storage | November 15, 2017 - 09:59 PM | Allyn Malventano
Tagged: NVDIMM, XPoint, 3D XPoint, 32GB, NVDIMM-N, NVDIMM-F, NVDIMM-P, DIMM
We're finally starting to see NVDIMMs materialize beyond unobtainium status. Micron recently announced a 32GB NVDIMM-N:
These come with 32GB of DRAM plus 64GB of SLC NAND flash.
These are in the NVDIMM-N form factor and can offer some very impressive latency improvements over other non-volatile storage methods.
Next up is Intel, who recently presented at the UBS Global Technology Conference:
We've seen Intel's Optane in many different forms, and now it looks like we finally have a date for 3D XPoint DIMMs - 2nd half of 2018! There are lots of hurdles to overcome as the JEDEC spec is not yet finalized (and might not be by the time this launches). Motherboard and BIOS support also needs to be more widely adopted for this to take off as well.
Don't expect this to be in your desktop machine anytime soon, but one can hope!
Press blast for the Micron 32GB NVDIMM-N appears after the break.
Subject: Cases and Cooling | November 15, 2017 - 03:26 PM | Jeremy Hellstrom
Tagged: RGB, fsp, CMT510
FSP's new CMT510 is not just a pretty case; it also sports some attractive features. The front and both side panels are constructed from 4mm-thick tempered glass with a translucent Galaxy Dark colouring. This ensures that your RGBs will show through, not just your own but also those of the four RGB 120mm fans included with the case.
The case can handle from mini-ITX to ATX motherboards, with CPU coolers of up to 165mm in height as well as GPUs of up to 400mm in length. In the front you can swap out the fans with a radiator of up to 360mm, or replace them with 140mm fans if you prefer air cooling.
As you can see in the picture above, the design offers a lot of space to work in. Your 3.5" drives attach behind the motherboard tray while the 2.5" are installed lying flat on top of the PSU shroud. Overall you get a fair amount of features for your $100.
Subject: Graphics Cards | November 15, 2017 - 02:40 PM | Jeremy Hellstrom
Tagged: nvidia, Star Wars Battlefront 2, destiny 2, 388.31, game ready
NVIDIA's newest Game Ready WHQL driver arrived today; version 388.31 offers optimized support for EA's Star Wars Battlefront 2 and Injustice 2, as well as improvements to the performance of a variety of games including Destiny 2.
SLI will work as intended in the new Battlefront with this driver and a variety of other games received new SLI profiles as well. This is also the first driver to officially support the new GTX 1070 Ti, so make sure to grab it if you have been shopping recently.
Subject: General Tech | November 15, 2017 - 02:12 PM | Jeremy Hellstrom
Tagged: gaming, Wolfenstein 2, vulkan, amd, nvidia
[H]ard|OCP took a close look at the new Wolfenstein game, covering the new graphics options which appear in the menus as well as the bugs they can cause, not to mention the benchmarking. For this Vulkan game they chose three AMD cards and four NVIDIA cards to test with a variety of those options enabled, as well as looking at the effect resolution has on your performance. As we have seen in other recent games, AMD's Vega 64 is a strong contender at 4K resolutions, surpassing the GTX 1080 but not quite matching its 1080 Ti sibling. It is also worth noting this game loves VRAM; in fact, 8GB is not enough for Uber settings. Read through the full review for performance numbers as well as insight into the best graphics settings to choose.
"Wolfenstein II: The New Colossus is out; this new game uses the id Tech 6 game engine and Vulkan API to give you a great gaming experience on the PC with today’s latest GPUs. We will compare performance features, see what settings work best, find what is playable in the game and compare performance among several video cards."
Here is some more Tech News from around the web:
- Battletech’s campaign mode is a robot Dark Ages @ Rock, Paper, SHOTGUN
- Doom definitely works on the Switch, but it looks noticeably worse @ Ars Technica
- Need For Speed Payback is really very terrible indeed @ Rock, Paper, SHOTGUN
- Star Wars Battlefront II PC graphics performance analysis @ Guru of 3D
- Wolfenstein 2 story DLC dated, detailed, silly-named @ Rock, Paper, SHOTGUN
- Assassin's Creed Origins: How Heavy Is It on Your CPU? @ Techspot
- Fresh cyber-hell awaits in new System Shock remake vid @ Rock, Paper, SHOTGUN
Subject: General Tech | November 15, 2017 - 11:59 AM | Jeremy Hellstrom
Once again we have bad news about RAM prices for consumers and great news for manufacturers. Prices rose an average of 5% this past quarter, continuing the upward trend we have been seeing for quite some time now. The supply shortage is due to several factors, but the dominant one is the smartphone industry, which has vastly increased overall demand for DRAM. Currently demand far outstrips supply, though as new fabs come online and current ones complete their upgrades to new process technologies we should hopefully see prices level off. As The Inquirer points out, this is not bad news for Samsung, SK Hynix or Micron, who are all seeing very nice profits.
Next time you are thinking about purchasing that shiny new phone, think about your computer for a moment before pressing add to cart.
"Prices, according to DRAMeXchange, increased by around five per cent in the third quarter, and buyers can expect to pay even more in the foreseeable future."
Here is some more Tech News from around the web:
- Windows on ARM: It's nearly here (again) @ The Register
- Boeing 757 Testing Shows Airplanes Vulnerable To Hacking, DHS Says @ Slashdot
- It took 19 years, but Linux finally dominates an entire market @ The Inquirer
- HPE's Apollo 'Skylaked', will get ARM-wrestling little brother next year @ The Register
- Open Source Underwater Glider Wins 2017 Hackaday Prize @ Hack a Day
- MariaDB coming to Azure, as Microsoft joins the MariaDB Foundation @ Ars Technica
- Shockingly, DARPA’s Brain Stimulator Might Not be Complete Nonsense @ Hack a Day
- Precursors to Today's Technology: These Products Had the Right Vision @ Techspot
- Intel and Micron Increase 3D XPoint Manufacturing Capacity with IM Flash Fab Expansion @ CTimes
Subject: Displays | November 14, 2017 - 05:24 PM | Jeremy Hellstrom
Tagged: vesa, displayid 2.0
This year has seen a lot of change in the technology used in monitors, with 4K, adaptive refresh rates above 120Hz, and HDR becoming common features. These features did not exist when DisplayID first replaced the veteran Extended Display Identification Data (EDID), and so there were no overarching standards governing their implementation. We have also seen the advent of consumer VR and AR, which likewise lacked a standard for companies to follow.
The new DisplayID 2.0 standard is designed specifically for these new devices, with the previous standards remaining in place to govern the compatibility of legacy products. The new standard describes how manufacturers can use the modular data block design to send clear information about their devices' capabilities to the hardware driving the display. If followed, this will greatly enhance the compatibility of variable refresh rate technology, screens with 4K or higher resolutions, and wearable displays.
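For the curious, the modular data block layout can be sketched in a few lines. This is a hedged illustration of the general tag/length structure VESA publishes (each data block leads with a tag byte, a revision/flags byte, and a payload-length byte), not code taken from the standard itself:

```python
# Minimal sketch of walking DisplayID-style data blocks. Each block
# starts with a 3-byte header: tag, revision/flags, payload length.

def parse_blocks(payload: bytes):
    """Yield (tag, revision, block_payload) tuples from a DisplayID
    section body (the bytes between the section header and checksum)."""
    blocks = []
    offset = 0
    while offset + 3 <= len(payload):
        tag, revision, length = payload[offset], payload[offset + 1], payload[offset + 2]
        if tag == 0x00 and length == 0:  # zero padding: no more blocks
            break
        blocks.append((tag, revision, payload[offset + 3 : offset + 3 + length]))
        offset += 3 + length
    return blocks

# Hypothetical example: one block with tag 0x26, revision 0, 2 payload bytes.
demo = bytes([0x26, 0x00, 0x02, 0xAA, 0xBB])
print(parse_blocks(demo))  # [(38, 0, b'\xaa\xbb')]
```

The self-describing header is what lets a GPU skip blocks it doesn't understand, which is how the standard stays extensible for VR headsets and other new device classes.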
This should help you avoid the frustrations early adopters have experienced and will hopefully restore displays to a state where they simply work when plugged into a compatible GPU. We won't see huge jumps in performance, but this will certainly help in the development of 4K displays with high refresh rates, once the power of our GPUs catches up.
Subject: General Tech | November 14, 2017 - 05:09 PM | Jeremy Hellstrom
Tagged: gaming, ea, Star Wars Battlefront 2
Loot boxes may look good on paper as a way to generate extra revenue from a game, but in reality they are incredibly unpopular with those who buy games. Originally EA had set the price of unlocking your first playable hero at 60,000 in-game credits. According to the math done in the article Slashdot linked to, that would entail around 40 hours of gameplay, assuming you never spent any credits on the various other unlocks EA charges for. As EA limits the amount of credits you can earn at one time in arcade mode, most of those hours would need to be spent in multiplayer games as opposed to enjoying the game in peace and quiet. Of course, you could always pay money for them; $450 or so would unlock a hero.
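The grind-time math works out as follows; note the ~1,500-credits-per-hour earn rate is inferred from the article's 40-hour figure, not an official EA number:

```python
# Back-of-the-envelope check of the article's loot-box math.
ORIGINAL_COST = 60_000          # credits for a top hero at launch
REDUCED_COST = 15_000           # after the 75% reduction
EARN_RATE = ORIGINAL_COST / 40  # credits/hour, inferred from the 40-hour figure

def hours_to_unlock(cost, rate=EARN_RATE):
    """Hours of multiplayer grinding needed to afford one hero."""
    return cost / rate

print(hours_to_unlock(ORIGINAL_COST))  # 40.0
print(hours_to_unlock(REDUCED_COST))   # 10.0
```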
In this case EA actually listened to their prospective customers, dropping the credit requirements for heroes by 75%; the loot boxes remain of course.
"Most importantly, Electronic Arts today announced that they are reducing the number of credits needed to unlock top characters in the game by 75 percent. Luke Skywalker and Darth Vader will now cost 15,000 credits. Emperor Palatine, Chewbacca and Leia Organa will now cost 10,000 and Iden will cost 5,000."
Here is some more Tech News from around the web:
- OnePlus has left a huge backdoor exploit app in Oxygen OS @ The Inquirer
- ARM emulator in a VM? Yup, done. Ready to roll, no config required @ The Register
- Thousand-dollar iPhone X's Face ID wrecked by '$150 3D-printed mask' @ The Register
- Firefox Quantum finally launches to the world with double the speed @ The Inquirer
Subject: Graphics Cards | November 13, 2017 - 10:35 PM | Scott Michaud
Tagged: nvidia, data center, Volta, tesla v100
There have been a few NVIDIA datacenter stories popping up over the last couple of months. A month or so after Google started integrating Pascal-based Tesla P100s into their cloud, Amazon announced Tesla V100s for their rent-a-server service. NVIDIA has also announced Volta-based solutions available or coming from Dell EMC, Hewlett Packard Enterprise, Huawei, IBM, Lenovo, Alibaba Cloud, Baidu Cloud, Microsoft Azure, Oracle Cloud, and Tencent Cloud.
This apparently translates to boatloads of money. Eyeball-estimating from their graph, it looks as though NVIDIA has already made about 50% more from datacenter sales in the first three quarters of fiscal year 2018 than in all of last year.
They are seeing supercomputer design wins, too. Earlier this year, Japan announced that it would get back into supercomputing, having lost ground to other nations in recent years, with a giant, AI-focused offering. It turns out that this design will use 4,352 Tesla V100 GPUs to crank out 0.55 ExaFLOPS of (tensor mixed-precision) performance.
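As a quick sanity check on those figures, NVIDIA rates each V100's Tensor Cores at roughly 125 TFLOPS of mixed-precision throughput, which lines up with the quoted total:

```python
# Checking the quoted 0.55 ExaFLOPS against NVIDIA's per-GPU rating.
GPUS = 4352
TENSOR_TFLOPS_PER_GPU = 125  # NVIDIA's spec for V100 Tensor Cores

total_exaflops = GPUS * TENSOR_TFLOPS_PER_GPU / 1e6  # TFLOPS -> ExaFLOPS
print(round(total_exaflops, 2))  # 0.54, in line with the quoted 0.55
```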
As for product announcements, this one isn’t too exciting for our readers, but should be very important for enterprise software developers. NVIDIA is creating optimized containers for various programming environments, such as TensorFlow and GAMESS, with their recommended blend of driver version, runtime libraries, and so forth, for various generations of GPUs (Pascal and higher). Moreover, NVIDIA claims that they will support it “for as long as they live”. Getting the right container for your hardware is just a matter of filling out a simple form and downloading the blob.
NVIDIA’s keynote is available on UStream, but they claim it will also be uploaded to their YouTube soon.
Subject: Displays | November 13, 2017 - 03:48 PM | Jeremy Hellstrom
Tagged: AOC, AGON, AG322QCX, 144hz, freesync
The AGON sacrifices 4K resolution to provide refresh rates of up to 144Hz; instead, the 31.5" curved display offers a 1440p resolution, demonstrating its focus on gaming. The monitor also includes QuickSwitch control, a physical keypad with which you can control your monitor's settings, an extremely effective alternative to navigating an OSD with the buttons built into most monitors. KitGuru tested the monitor and found it to be great for large-screen gaming, but perhaps not for movie viewing, as all the presets are gaming-focused. The inputs were another point of contention: while comprehensive, with two HDMI 2.0, two DisplayPort 1.2, VGA, headphone and mic jacks as well as two USB 3.0 ports, their placement is not the most convenient for some. Drop by for a look.
"Curved screens are really starting to come of age for gaming. We are seeing more and more of these, in many different sizes, and the latest to grace the KitGuru testing table is the AOC AGON AG322QCX. It’s pretty sizeable at 31.5in, but unlike many larger screens it’s still packed with features to please the serious gamer."
Here are some more Display articles from around the web:
- Eizo Foris FS2735 144 Hz @ TechPowerUp
- Acer Predator XB321HK 4K 60Hz G-Sync @ Kitguru
- LG 24MP48HQ-P 24 Inch IPS LED Monitor Review @ NikKTech
- Philips Moda Slim 245C7QJSB Designer Monitor @ Kitguru
- Datacolor Spyder5 Elite+ Easy Monitor Calibration Tool Review @ NikKTech
Subject: General Tech | November 13, 2017 - 03:33 PM | Jeremy Hellstrom
Tagged: 3d printing, metal, Desktop Metal
Desktop Metal's new printer follows the same design process as current 3D metal printing: layers of metal powder, wax and a plastic binding agent are sprayed out by an inkjet-like device. Upon completion of the print, the item is submerged in a debinding fluid which dissolves the wax, and then spends some time in a furnace to burn off the binding agent and set the powder, leaving the final product between 96 and 99.8% metal. This process is currently handled much more quickly via traditional tool and die; however, Desktop Metal told The Register their new printer operates at 100 times the speed of the competition and at a very competitive price compared to either tool and die or existing 3D printing. It will be interesting to see if this applies to a wide enough variety of prints, and provides high enough quality, to unseat the incumbent processes.
"Desktop Metal, based in Boston, USA, has opened up pre-orders for its Studio System which uses inkjet-like technology, rather than laser-based techniques, to produce precision metal parts."
Here is some more Tech News from around the web:
- Amazon Developing a Free, Ad-Supported Version of Prime Video: Report @ Slashdot
- Equifax Q3 results: Not as bad as you might have hoped – hack only cost biz about $87m @ The Register
- ESCAM QF220 WiFi Doorbell IP Camera @ Benchmark Reviews
Subject: Editorial | November 10, 2017 - 08:00 AM | Jim Tanous
Tagged: video, pcper mailbag, pcper, Allyn Malventano
It's Friday, which means it's time for PC Perspective's weekly mailbag, our video show where Ryan and team answer your questions about the tech industry, the latest and greatest hardware, the process of running a tech review website, and more!
Today, Storage Editor Allyn Malventano takes on your storage questions:
00:47 - M.2 vs SATA SSDs?
03:13 - SSD over-provisioning necessary?
07:03 - What happens when the SLC cache loses power?
10:53 - Optane for everyone?
15:41 - V-NAND layers vs. process shrink to reduce SSD prices?
19:49 - NVMe SSD on PCH lanes vs. CPU lanes?
22:58 - AHCI vs. NVMe?
28:29 - NVMe RAID WTF?
33:17 - Exciting tech on the horizon?
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
Subject: Cases and Cooling | November 9, 2017 - 04:28 PM | Jeremy Hellstrom
Tagged: amd, Threadripper, watercooler, phanteks, Glacier C399A, X399
[H]ard|OCP have been working their way through every Threadripper-compatible waterblock; the latest model to be tested is Phanteks' Glacier C399A. The top of the waterblock is clear acrylic, perfect if you plan on adding a little colour to your coolant, especially if you make use of the Frag-Harder Disco Lights. Mounting is reasonably easy, with no dedicated in or out connector to confuse you, and tightening can be accomplished with a small pair of pliers, which you may find necessary. The cooling performance was in line with the other coolers they've tested, though the C399A does lose some marks because of the need to re-tighten the mounting mechanism on occasion. Check out the full review for details.
"The Phanteks Glacier C399A is a custom-designed water cooling block built specifically for AMD's new Threadripper processors. It has great looks, Frag-Harder Disco Lights, is built like a tank, and seems to be just what the doctor ordered when it comes to cooling overclocked Threadripper CPUs."
Here are some more Cases & Cooling reviews from around the web:
- EK Supremacy EVO Threadripper TR4 Waterblock @ [H]ard|OCP
- XSPC RayStorm Neo CPU Water Block @ TechPowerUp
- Cooler Master MA610P and MA410P RGB @ Kitguru
- Fractal Design Celsius S24 @ TechPowerUp
- Raijintek Asterion Plus @ Benchmark Reviews
- Cooler Master MasterAir MA610P @ Modders-Inc
- Corsair Carbide SPEC-04 TG @ Benchmark Reviews
- BitFenix Aurora @ TechPowerUp
- NZXT H700i @ Guru3D
- Thermaltake View 71 Tempered Glass Edition @ [H]ard|OCP
Subject: General Tech | November 9, 2017 - 02:38 PM | Alex Lustenberg
Tagged: video, titan xp, teleport, starcraft 2, raja koduri, radeon, qualcomm, podcast, nvidia, Intel, centriq, amplifi, amd
PC Perspective Podcast #475 - 11/09/17
Join us for discussion on Intel with AMD graphics, Raja's move to Intel, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Josh Walrath, Jeremy Hellstrom, Allyn Malventano, Ken Addison
Peanut Gallery: Alex Lustenberg, Jim Tanous
Program length: 1:29:42
Week in Review:
News items of interest:
Hardware/Software Picks of the Week
1:13:40 Allyn: Relatively cheap Samsung 82” (!!!) 4K TV
1:17:45 Jeremy: What exactly is a "technology certificate license" Logitech?
1:23:45 Josh: 1800X for $399!!!!!
1:24:50 Ken: The Void Wallet
Subject: General Tech, Processors | November 9, 2017 - 02:30 PM | Ken Addison
Tagged: Skull Canyon, nuc, kaby lake-g, Intel, Hades Canyon VR, Hades Canyon, EMIB, amd
Hot on the heels of Intel's announcement of new mobile-focused CPUs integrating AMD Radeon graphics, we have our first glimpse at a real-world design using this new chip.
Posted earlier today on the infamous Chinese tech forum Chiphell, this photo appears to show a small form factor PC design integrating the new Kaby Lake-G CPU and GPU solution.
Looking at the standard size components on the board like the Samsung M.2 SSD and the DDR4 SODIMM memory modules, we can start to get a better idea of the actual size of the Kaby Lake-G module.
Additionally, we get our first look at the type of power delivery infrastructure that devices with Kaby Lake-G are going to require. It's impressive how small the motherboard is, taking into account all of the power phases needed to feed the CPU, GPU, and HBM2 memory.
Looking back at the leaked NUC roadmap from September, the picture starts to become more clear. While the "Hades Canyon" NUCs on this roadmap threw us for a loop when we first saw them months ago, it's now clear that they are referencing the new Kaby Lake-G line of products. The plethora of IO options from the roadmap, including dual Gigabit Ethernet and two Thunderbolt 3 ports, also seems to match closely with the leaked NUC photo above.
Using this information, we also now have a better idea of the thermal and power requirements for Kaby Lake-G. The base "Hades Canyon" NUC is listed with a 65W processor, while the "Hades Canyon VR" is listed as a 100W part. This suggests that devices retain the same level of CPU performance as the existing Kaby Lake-H quad-core mobile CPUs, which clock in at 35W, plus roughly 30W or 65W of graphics power budget.
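That power breakdown is simple enough to restate as arithmetic; the 35W CPU share carried over here is the Kaby Lake-H figure from the paragraph above, not a confirmed Kaby Lake-G spec:

```python
# Package TDP minus the ~35W Kaby Lake-H CPU portion leaves the
# budget for the Radeon GPU (and its HBM2 memory).
CPU_TDP = 35  # existing Kaby Lake-H quad-core mobile parts

def graphics_budget(package_tdp, cpu_tdp=CPU_TDP):
    """What's left of the package TDP for graphics."""
    return package_tdp - cpu_tdp

for name, tdp in [("Hades Canyon", 65), ("Hades Canyon VR", 100)]:
    print(name, graphics_budget(tdp), "W for graphics")
```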
These leaked 3DMark scores might give us an idea of the performance of the Hades Canyon VR NUC.
One thing is clear: Hades Canyon will be the highest-power NUC Intel has ever produced, surpassing the 45W Skull Canyon. Considering Skull Canyon's already unusually large footprint for a NUC, I'm interested to see the final form of Hades Canyon as well as the performance it brings!
With what looks to be a first half 2018 release date on the roadmap, it seems likely that we could see this NUC or other similar devices being shown off at CES in January. Stay tuned for more continuing coverage of Intel's Kaby Lake-G and upcoming devices featuring it!
Subject: General Tech | November 9, 2017 - 02:02 PM | Jeremy Hellstrom
Tagged: hyneman, roller skates, VR, Vortrex Shoes
Jamie Hyneman is pitching a project to build prototype VR roller skates; not as a game, but as a way to save your shins while using a VR headset. The design places motorized wheels under your heel and a track under the ball of your foot which moves your foot back to its starting position as you walk forward. If all goes as planned, this should allow you to walk around in virtual worlds without running into walls, chairs or spectators, and perhaps allow games to abandon the point-and-teleport locomotion currently in vogue. There are a lot of challenges, as previous projects have discovered, but perhaps a Mythbuster can help out. You can watch his pitch video over at The Register.
"Hyneman's pitch video points out that when one straps on goggles and gloves to enter virtual reality, your eyes are occupied and you therefore run the risk of bumping into stuff if you try to walk in meatspace while simulating walking in a virtual world. And bumping into stuff is dangerous."
Here is some more Tech News from around the web:
- Windows 10 Insiders using AMD processors won't receive latest preview build @ The Inquirer
- Microsoft To Integrate 3rd-party Security Info Into Its Windows Defender Advanced Threat Protection Service @ Slashdot
- 'Take the tinfoil hat off and use it for better WiFi speeds' says shocking new report @ The Inquirer
Subject: Graphics Cards | November 8, 2017 - 09:29 PM | Scott Michaud
Tagged: Intel, graphics drivers
When we report on graphics drivers, it’s almost always for AMD or NVIDIA. It’s Intel’s turn this time, however, with their latest 15.60 release. This version supports HDR playback on Netflix and YouTube, and it adds Windows Mixed Reality support for Intel HD 620 and higher.
I should note that this driver only supports Skylake-, Kaby Lake-, and Coffee Lake-based parts. I’m not sure whether this means that Haswell and earlier have been deprecated, but it looks like the latest drivers that support those chips are from May.
In terms of game-specific optimizations, Intel has some to speak of. This driver focuses on The LEGO Ninjago Movie Video Game, Middle-earth: Shadow of War, Pro Evolution Soccer 2018, Call of Duty: WWII, Destiny 2, and Divinity: Original Sin 2. All of these name-drops are alongside Iris Pro, so I'm not sure how low you can go for any given title. Thankfully, many game distribution sites allow refunds for this very reason, although you still want to do a little research ahead of time.
That's all beside the point, though: Intel is advertising game-specific optimizations.
If you have a new Intel GPU, pick up the new drivers from Intel's website.
Subject: General Tech | November 8, 2017 - 03:26 PM | Jeremy Hellstrom
Tagged: gaming, Wolfenstein 2, the new colossus, nvidia, amd, vulkan
Wolfenstein II: The New Colossus uses the Vulkan API, which could favour AMD's offerings; however, NVIDIA have vastly improved their support, so a win is not guaranteed. Guru3D tested the three resolutions most people are interested in, 1080p, 1440p and 4K, on 20 different GPUs in total. They also looked at the impact of 4-core versus 8-core CPUs, testing the i7-4790K and i7-5960X as well as the Ryzen 7 1800X, and even explored the amount of VRAM the game uses. Drop by to see all their results as well as hints on dealing with the current bugs.
"We'll have a peek at the PC release of Wolfenstein II The New Colossus for Windows relative towards graphics card performance. The game is 100% driven by the Vulkan API. in this test twenty graphics cards are being tested and benchmarked."
Here is some more Tech News from around the web:
- Stellaris FTL changes are in the warp pipes @ Rock, Paper, SHOTGUN
- Humble Strategy Simulator Bundle
- Nearly a year later, video game voice actors end their strike @ Ars Technica
- Call of Duty WWII: Benchmark Performance Analysis @ TechPowerUp
- Take Two: All future games will feature microtransactions @ HEXUS
- Wot I Think: Call of Duty: WW2 Multiplayer @ Rock, Paper, SHOTGUN
- Call of Duty: WW2: PC graphics analysis benchmark @ Guru of 3D
- Total War: Rome II expanding again with Empire Divided @ Rock, Paper, SHOTGUN
- F1 2017 On Linux With 23 Graphics Cards Using Vulkan @ Phoronix
- Radeon vs. NVIDIA Vulkan Performance For F1 2017 On Linux @ Phoronix
Subject: Processors | November 8, 2017 - 02:03 PM | Ryan Shrout
Tagged: qualcomm, centriq 2400, centriq, arm
At an event in San Jose on Wednesday, Qualcomm and its partners officially announced that the Centriq 2400 server processor, based on the Arm architecture, is shipping to commercial clients. This launch is notable, as the Centriq becomes the highest-profile and most partner-lauded Arm-based server CPU and platform to be released after years of buildup and excitement around several similar products. The Centriq is built specifically for enterprise cloud workloads, with an emphasis on high core counts and high throughput, and will compete against Intel’s Xeon Scalable and AMD’s new EPYC platforms.
Paul Jacobs shows Qualcomm Centriq to press and analysts
Built on the same 10nm Samsung process technology that gave rise to the Snapdragon 835, the Centriq 2400 becomes the first server processor in that particular node. While Qualcomm and Samsung tout that as a significant selling point, on its own it doesn’t hold much value. Where it does come into play is the resulting power efficiency it brings to the table. Qualcomm claims that the Centriq 2400 will “offer exceptional performance-per-watt and performance-per-dollar” compared to competing server options.
The raw specifications and capabilities of the Centriq 2400 are impressive.
| | Centriq 2460 | Centriq 2452 | Centriq 2434 |
|---|---|---|---|
| Process Tech | 10nm (Samsung) | 10nm (Samsung) | 10nm (Samsung) |
| Base Clock | 2.2 GHz | 2.2 GHz | 2.3 GHz |
| Max Clock | 2.6 GHz | 2.6 GHz | 2.5 GHz |
| Memory Speeds | 2667 MHz | 2667 MHz | 2667 MHz |
| Cache | 24MB L2, split | 23MB L2, split | 20MB L2, split |
| PCIe | 32 lanes PCIe 3.0 | 32 lanes PCIe 3.0 | 32 lanes PCIe 3.0 |
Built from 18 billion transistors on a die area of just 398mm2, the SoC holds 48 high-performance 64-bit cores running at frequencies as high as 2.6 GHz. (Interestingly, this is about the same peak clock rate as all the Snapdragon processor cores we have seen on consumer products.) The cores are interconnected by a bi-directional ring bus reminiscent of the design Intel used on its Core processor family up until Skylake-SP was brought to market. The bus supports 250 GB/s of aggregate bandwidth, and Qualcomm claims that this will alleviate any concern over congestion bottlenecks, even with the CPU cores under full load.
The caching system provides 512KB of L2 cache for every pair of CPU cores, essentially organizing them into dual-core blocks. 60MB of L3 cache handles core-to-core communications, and the cache is physically distributed around the die for faster access on average. A 6-channel DDR4 memory system, with unknown peak frequency, supports a total of 768GB of capacity.
Connectivity is supplied with 32 lanes of PCIe 3.0 and up to 6 PCIe devices.
As you should expect, the Centriq 2400 supports the ARM TrustZone secure operating environment and hypervisors for virtualized environments. With this many cores on a single chip, virtualization seems likely to be one of the key use cases for the server CPU.
Maybe most impressive is the power requirement of the Centriq 2400: it can offer this level of performance and connectivity with just 120 watts.
With a price of $1995 for the Centriq 2460, Qualcomm claims that it can offer “4X better performance per dollar and up to 45% better performance per watt versus Intel’s highest performance Skylake processor, the Intel Xeon Platinum 8180.” That’s no small claim. The 8180 is a 28-core/56-thread CPU with a peak frequency of 3.8 GHz, a TDP of 205 watts, and a cost of $10,000 (not a typo).
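A quick bit of arithmetic puts that claim in context, using only the two prices cited here:

```python
# Price-ratio math behind Qualcomm's perf-per-dollar claim.
CENTRIQ_2460_PRICE = 1995
XEON_8180_PRICE = 10_000  # approximate list price cited in the text

price_ratio = XEON_8180_PRICE / CENTRIQ_2460_PRICE
# At roughly 1/5 the price, the Centriq only needs about 80% of the
# 8180's absolute performance to deliver 4x better perf per dollar.
required_relative_perf = 4 * CENTRIQ_2460_PRICE / XEON_8180_PRICE

print(round(price_ratio, 2))             # 5.01
print(round(required_relative_perf, 2))  # 0.8
```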
Qualcomm had performance metrics from industry standard SPECint measurements, in both raw single thread configurations as well as performance per dollar and per watt. I will have more on the performance story of Centriq later this week.
More important than simply showing hardware, Qualcomm had several partners on hand at the press event, as well as statements from important vendors like Alibaba, HPE, Google, Microsoft, and Samsung. Present to showcase applications running on the Arm-based server platforms was an impressive list of key cloud services providers: Alibaba, LinkedIn, Cloudflare, American Megatrends Inc., Arm, Cadence Design Systems, Canonical, Chelsio Communications, Excelero, Hewlett Packard Enterprise, Illumina, MariaDB, Mellanox, Microsoft Azure, MongoDB, Netronome, Packet, Red Hat, ScyllaDB, 6WIND, Samsung, Solarflare, Smartcore, SUSE, Uber, and Xilinx.
The Centriq 2400 series of SoCs isn’t perfect for all general-purpose workloads, and that is something we have understood from the outset of this venture by Arm and its partners to bring the architecture to enterprise markets. Qualcomm states that its parts are designed for “highly threaded cloud native applications that are developed as micro-services and deployed for scale-out.” The result is a set of workloads that covers a lot of ground:
- Web front end with HipHop Virtual Machine
- NoSQL databases including MongoDB, Varnish, Scylladb
- Cloud orchestration and automation including Kubernetes, Docker, metal-as-a-service
- Data analytics including Apache Spark
- Deep learning inference
- Network function virtualization
- Video and image processing acceleration
- Multi-core electronic design automation
- High throughput compute bioinformatics
- Neural class networks
- OpenStack Platform
- Scaleout Server SAN with NVMe
- Server-based network offload
I will be diving more into the architecture, system designs, and partner announcements later this week as I think the Qualcomm Centriq 2400 family will have a significant impact on the future of the enterprise server markets.
Subject: General Tech | November 8, 2017 - 01:15 PM | Jeremy Hellstrom
Tagged: logitech, iot, harmony link
If you own a Logitech Harmony Link and registered it, then you already know; but for those who did not receive the email, you should know your device will become unusable in March. According to the information Ars Technica acquired, Logitech have decided not to renew a so-called "technology certificate license", which will mean the Link will no longer work. It is not clear what this certificate is, nor why the lack of it will brick the Link, but that is what will happen. Apparently if you have a Harmony Link which is still under warranty, you can get a free upgrade to a Harmony Hub; if your Link is out of warranty, you can get a 35% discount. Why exactly one would want to purchase another of these devices, which can be remotely destroyed, is an interesting question, especially as there was no monthly contract or service agreement suggesting this was a possibility when customers originally purchased their device.
"Customers received an e-mail explaining that Logitech will "discontinue service and support" for the Harmony Link as of March 16, 2018, adding that Harmony Link devices "will no longer function after this date.""
Here is some more Tech News from around the web:
- This could be our favorite gadget of 2017: A portable projector @ The Register
- 'How Chrome Broke the Web' @ Slashdot
- Don't worry about those 40 Linux USB security holes. That's not a typo @ The Register
- Flaw Crippling Millions of Crypto Keys Is Worse Than First Disclosed @ Slashdot
- Highly flexible organic flash memory for foldable and disposable electronics @ Phys.org
- KRACK whacked, media playback holes packed, other bugs go splat in Android patch pact @ The Register
- The Biggest Tech Fails of the Last Decade @ TechSpot
Subject: Networking | November 7, 2017 - 10:00 PM | Jim Tanous
Tagged: wi-fi, vpn, ubiquiti, networking, mesh, Amplifi HD, amplifi
Earlier this year we took a look at the AmpliFi HD Home Wi-Fi System as part of our review of mesh wireless network devices. AmpliFi is the consumer-targeted brand of enterprise-focused Ubiquiti Networks, and while we preferred the eero Mesh Wi-Fi System in our initial look, the AmpliFi HD still offered great performance and some unique features. Today, AmpliFi is introducing a new member of its networking family called AmpliFi Teleport, a "plug-and-play" device that provides a secure connection to users' home networks from anywhere.
Essentially a zero-configuration hardware-based VPN, the Teleport is linked with a user's AmpliFi account, which automatically creates a secure connection to the user's AmpliFi HD Wi-Fi System at home. Users take the small (75.85mm x 43mm x 39mm) Teleport device with them on the road, plug it in and connect it to the public Wi-Fi or Ethernet, and then connect their personal devices to the Teleport.
This provides a secure connection for private Internet traffic, but also allows access to local resources on the home network, including NAS devices, file shares, and home automation products. AmpliFi also touts that this would allow users to view their local streaming content even in locations where it would otherwise be unavailable -- e.g., watching U.S. Netflix shows while overseas, or streaming your favorite sports team while in a city where the game is blacked out.
In addition to traveling, AmpliFi notes that those with multiple homes or a vacation cottage could also benefit from Teleport, as it would allow you to share the same network resources and media streaming access regardless of location. In any case, a device like Teleport is still reliant on the speed and quality of your home and remote Internet connections, so there may be cases where network speeds are too low for the device to be useful. That, of course, is a factor that would plague any network-dependent service or device, so while it's not a mark against the Teleport, it's something to keep in mind.
Teleport's features, while incredibly useful, are of course familiar to those experienced with VPNs and other secure remote connection methods. In terms of overall functionality, the AmpliFi Teleport isn't offering anything new here. The benefit, therefore, is its simple setup and configuration. Users don't need to set up and run a VPN on their home hardware, subscribe to a third-party VPN service, or know anything about encryption protocols, firewall configuration, or network tunneling. They simply need to plug the Teleport into power, follow the connection guide, and that's it -- they're up and running with a secure connection to their home network.
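For a sense of what that simplicity saves you, here is a minimal, illustrative sketch of the do-it-yourself alternative: an OpenVPN server config of the kind you'd run on a home router or NAS. Every file name and address below is hypothetical, and this omits the certificate generation, client configs, and router port forwarding you'd also have to manage yourself:

```
# /etc/openvpn/server.conf - minimal illustrative home VPN server config
port 1194                  # default OpenVPN port; must be forwarded on the home router
proto udp
dev tun
ca ca.crt                  # CA, server cert/key, and DH params must all be
cert server.crt            # generated beforehand (e.g. with easy-rsa)
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0           # subnet handed out to VPN clients
push "route 192.168.1.0 255.255.255.0"  # expose the home LAN to connected clients
keepalive 10 120
persist-key
persist-tun
```

Each remote device then needs its own certificate and a matching client config on top of this. The Teleport collapses all of that into pairing the device with an AmpliFi account.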
You'll pay for this convenience, however, as the Teleport isn't cheap. It's launching today on Kickstarter with "early bird" pricing of $199, which will get you the Teleport device and the required AmpliFi HD router. A second round of early purchasers will see that price increase to $229, while final pricing is $269. Again, that's just for the Teleport and the router. A kit including two AmpliFi mesh access points is $399. There's no word on standalone pricing for the Teleport device only for those who already have an AmpliFi mesh network at home.
Regardless of the package, once you have the hardware there's no extra cost or subscription fee to use the Teleport, so frequent travelers might find the system worth it when compared to some other subscription-based VPN services.
The AmpliFi Teleport is expected to ship to early purchasers in December. We don't have the hardware in hand yet for performance testing, but AmpliFi has promised to loan us review samples as the product gets closer to shipping. Check out the Teleport Kickstarter page and AmpliFi's website for more information.