
A win for the rebels against EA's Empire

Subject: General Tech | November 14, 2017 - 05:09 PM |
Tagged: gaming, ea, Star Wars Battlefront 2

Loot boxes may look good on paper as a way to generate extra revenue from a game, but in reality they are incredibly unpopular with the people who actually buy games.  EA originally set the price of unlocking your first playable hero at 60,000 in-game credits.  According to the math in the article Slashdot linked to, that would entail around 40 hours of gameplay, assuming you never spent credits on any of the various other unlocks EA charges for.  As EA limits the number of credits you can earn at one time in arcade mode, most of those hours would need to be spent in multiplayer games as opposed to enjoying the game in peace and quiet.  Of course, you could always pay real money instead; $450 or so would unlock a hero.

In this case EA actually listened to their prospective customers, dropping the credit requirements for heroes by 75%; the loot boxes remain of course.
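A quick back-of-envelope check of those numbers; the credits-per-hour earn rate is our assumption, inferred from the article's ~40-hour figure, not an official EA number:

```python
# Back-of-envelope check of the hero-unlock math. The earn rate is a
# hypothetical figure inferred from the article's ~40-hour estimate.
ORIGINAL_COST = 60_000       # credits for a top hero at launch
CREDITS_PER_HOUR = 1_500     # assumed multiplayer earn rate (60,000 / 40 h)

print(f"Hours at launch pricing: {ORIGINAL_COST / CREDITS_PER_HOUR:.0f}")  # 40
reduced = ORIGINAL_COST * (1 - 0.75)
print(f"Cost after 75% cut: {reduced:.0f} credits")                        # 15000
print(f"Hours after the cut: {reduced / CREDITS_PER_HOUR:.0f}")            # 10
```

At that assumed earn rate the 75% cut takes a hero from a 40-hour grind to roughly a 10-hour one.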

xosshiwetigvxb9txevi.png

"Most importantly, Electronic Arts today announced that they are reducing the number of credits needed to unlock top characters in the game by 75 percent. Luke Skywalker and Darth Vader will now cost 15,000 credits. Emperor Palatine, Chewbacca and Leia Organa will now cost 10,000 and Iden will cost 5,000."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot
Subject: General Tech
Manufacturer: Iceberg Interactive

A quiet facade

Iceberg Interactive, whom you may know from games like Killing Floor or the Stardrive series, has released a new strategy game called Oriental Empires, and happened to send me a copy to try out.

On initial inspection it resembles recent Civilization games, but with a more focused design: you take on a tribe in ancient China and attempt to become Emperor, or at least make your neighbours sorry that they ever met you.  Until you have been through 120 turns of the Grand Campaign you cannot access many of the tribes; that is not a bad thing, as your first game serves as the tutorial.  Apart from an advisor popping up during turns or events, the game does not hold your hand and instead lets you figure things out on your own.

open.png

That minimalist ideal runs throughout the entire game, which offers one of the cleanest interfaces I've seen in a game.  All of the information you need to maintain and grow your empire is contained in a tiny percentage of the screen or in a handful of in-game menus.  This works well, as the terrain and look of the campaign map are quite striking and vary noticeably with the season.

spring.png

Spring features cherry blossom trees as well as the occasional flooding.

summer.png

Summer is a busy season for your workers and perhaps your armies.

fall.png

Fall colours indicate the coming of winter and snow.

winter.png

Winter's snow also shrouds the peaks in fog.  The atmosphere thus created is quite relaxing, somewhat at odds with many 4X games, and perhaps the most interesting thing about this game.

In these screenshots you can see the entire GUI that gives you the information you need to play.  The upper right shows your turn, income and, occasionally, a helpful advisor offering suggestions.  Below that you will find a banner that toggles between three lists.  The first shows your cities with their current build queues and population information, the second lists your armies' compositions and whether they currently have orders, while the last displays any events affecting your burgeoning empire.  The bottom shows your leader and his authority which, among other things, determines the number of cities you can support without expecting quickly increasing unrest.

The right-hand side holds the only other five menus you will use in this game.  From top to bottom they cover diplomacy, technology, the Imperial edicts you can apply (or have applied) to your Empire, player statistics to let you know how you are faring, and detailed statistics of your empire and of the competing tribes you have met.

Next, a bit about the gameplay mechanics.

NVIDIA's SC17 Keynote: Data Center Business on Cloud 9

Subject: Graphics Cards | November 13, 2017 - 10:35 PM |
Tagged: nvidia, data center, Volta, tesla v100

There have been a few NVIDIA datacenter stories popping up over the last couple of months. A month or so after Google started integrating Pascal-based Tesla P100s into their cloud, Amazon announced Tesla V100s for their rent-a-server service. NVIDIA has also announced Volta-based solutions available or coming from Dell EMC, Hewlett Packard Enterprise, Huawei, IBM, Lenovo, Alibaba Cloud, Baidu Cloud, Microsoft Azure, Oracle Cloud, and Tencent Cloud.

nvidia-2017-sc17-money.jpg

This apparently translates to boatloads of money. Eyeball-estimating from their graph, it looks as though NVIDIA has already made about 50% more from datacenter sales in the first three quarters of fiscal year 2018 than in all of last year.

nvidia-2017-sc17-japanaisuper.jpg

They are seeing supercomputer design wins, too. Earlier this year, Japan announced that it would get back into supercomputing, having lost ground to other nations in recent years, with a giant, AI-focused offering. It turns out that this design will use 4352 Tesla V100 GPUs to crank out 0.55 ExaFLOPs of (tensor mixed-precision) performance.
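That 0.55 ExaFLOPs figure roughly checks out against NVIDIA's rated tensor-core throughput for the V100 (about 125 TFLOPS mixed-precision); a back-of-envelope sketch:

```python
# Aggregate tensor throughput: GPU count x per-GPU peak (TFLOPS -> ExaFLOPS).
GPUS = 4352
TENSOR_TFLOPS_PER_GPU = 125  # NVIDIA's rated V100 tensor-core peak

total_exaflops = GPUS * TENSOR_TFLOPS_PER_GPU / 1_000_000
print(f"{total_exaflops:.2f} ExaFLOPs")  # 0.54, in line with the quoted 0.55
```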

nvidia-2017-sc17-cloudcontainer.jpg

As for product announcements, this one isn’t too exciting for our readers, but should be very important for enterprise software developers. NVIDIA is creating optimized containers for various programming environments, such as TensorFlow and GAMESS, with their recommended blend of driver version, runtime libraries, and so forth, for various generations of GPUs (Pascal and higher). Moreover, NVIDIA claims that they will support these containers “for as long as they live”. Getting the right container for your hardware is just a matter of filling out a simple form and downloading the blob.

NVIDIA’s keynote is available on UStream, but they say it will also be uploaded to their YouTube channel soon.

Source: NVIDIA
Manufacturer: Scythe

A Trio of Air Coolers

Scythe is a major player in the air cooling space, with a dizzying array of coolers from the Japanese company for virtually any application. In addition to some of the most compact coolers in the business, Scythe also offers some of the highest-performing - and quietest - tower coolers available. Two of the largest coolers in the lineup are the new Mugen 5 Rev. B and the Grand Kama Cross 3, the latter of which is one of their most outlandish designs.

DSC_0589.jpg

Rounding out this review we also have a compact tower option from Scythe in the Byakko, which is a 130 mm tall cooler that can fit in a greater variety of enclosures than the Mugen 5 or Grand Kama Cross due to its lower profile. So how did each perform on the cooler test bench? We put these Scythe coolers against the Intel Core i7-7700K to see how potent their cooling abilities are when facing a CPU that gets quite toasty under load. Read on to see how this trio responded to the challenge!

DSC_0690.jpg

Continue reading our roundup of three Scythe CPU air coolers!

AOC's AGON AG322QCX, a nice mix of features with Freesync

Subject: Displays | November 13, 2017 - 03:48 PM |
Tagged: AOC, AGON, AG322QCX, 144hz, freesync

The AGON sacrifices 4K resolution to provide refresh rates of up to 144Hz; instead, the 31.5" curved display offers a 1440p resolution, demonstrating its focus on gaming.  The monitor also includes QuickSwitch, a physical keypad with which you can control your monitor's settings, an extremely effective alternative to navigating an OSD with the buttons built into most monitors.  Kitguru tested the monitor and found it to be great for large-screen gaming, but perhaps not for movie viewing, as all the presets are gaming focused.  The inputs were another point of contention: while comprehensive, with two HDMI 2.0, two DisplayPort 1.2, VGA, headphone and mic jacks as well as two USB 3.0 ports, their placement is not the most convenient for some.  Drop by for a look.

AOC-AGON-AQ322QCX-Review-on-KitGuru-Front-Off-High.jpg

"Curved screens are really starting to come of age for gaming. We are seeing more and more of these, in many different sizes, and the latest to grace the KitGuru testing table is the AOC AGON AG322QCX. It’s pretty sizeable at 31.5in, but unlike many larger screens it’s still packed with features to please the serious gamer."

Here are some more Display articles from around the web:

Displays

 

Source: Kitguru

Speed Metal on the Desktop

Subject: General Tech | November 13, 2017 - 03:33 PM |
Tagged: 3d printing, metal, Desktop Metal

Desktop Metal's new printer follows the same design process as current 3D metal printing: layers of metal powder, wax and a plastic binding agent are sprayed out by an inkjet-like device.  Upon completion of the print, the item is submerged in a debinding fluid which dissolves the wax, then spends some time in a furnace to burn off the binding agent and set the powder, leaving the final product between 96 and 99.8% metal.  Traditional tool-and-die processes currently handle this sort of work much more quickly; however, Desktop Metal told The Register their new printer operates at 100 times the speed of competing 3D printers, at a price competitive with either approach.  It will be interesting to see if this applies to a wide enough variety of prints, and provides high enough quality, to unseat the incumbent processes.

DM_logo.jpg

"Desktop Metal, based in Boston, USA, has opened up pre-orders for its Studio System which uses inkjet-like technology, rather than laser-based techniques, to produce precision metal parts."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register
Subject: General Tech
Manufacturer: YouTube

YouTube TV for NVIDIA SHIELD

When YouTube TV first launched earlier this year, it had one huge factor in its favor compared to competing subscription streaming services: local channels. The service wasn't available everywhere, but in the markets where it was available, users were able to receive all of their major local networks. This factor, combined with its relatively low subscription price of $35 per month, immediately made YouTube TV one of the best streaming options, but it also had a downside: device support.

At launch YouTube TV was only available via the Chrome browser, iOS and Android, and newer Chromecast devices. There were no native apps for popular media devices like the Roku, Amazon Fire TV, or Apple TV. But perhaps the most surprising omission was support for Android TV via devices like the NVIDIA SHIELD. Most of the PC Perspective staff personally use the SHIELD due to its raw power and capabilities, and the lack of YouTube TV support on Google's own media platform was disappointing.

youtube-tv-shield-home.jpg

Thankfully, Google recently addressed this omission and has finally brought a native YouTube TV app to the SHIELD with the SHIELD TV 6.1 Update.

Check out our overview of the YouTube TV on SHIELD experience.

PCPer Mailbag #17 - 11/10/2017

Subject: Editorial | November 10, 2017 - 08:00 AM |
Tagged: video, pcper mailbag, pcper, Allyn Malventano

It's Friday, which means it's time for PC Perspective's weekly mailbag, our video show where Ryan and team answer your questions about the tech industry, the latest and greatest hardware, the process of running a tech review website, and more!

Today, Storage Editor Allyn Malventano takes on your storage questions:

00:47 - M.2 vs SATA SSDs?
03:13 - SSD over-provisioning necessary?
07:03 - What happens when the SLC cache loses power?
10:53 - Optane for everyone?
15:41 - V-NAND layers vs. process shrink to reduce SSD prices?
19:49 - NVMe SSD on PCH lanes vs. CPU lanes?
22:58 - AHCI vs. NVMe?
28:29 - NVMe RAID WTF?
33:17 - Exciting tech on the horizon?

Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!

Source: YouTube

Keeping your Threadripper properly watered

Subject: Cases and Cooling | November 9, 2017 - 04:28 PM |
Tagged: amd, Threadripper, watercooler, phanteks, Glacier C399A, X399

[H]ard|OCP have been working their way through every Threadripper-compatible waterblock; the latest model to be tested is Phanteks' Glacier C399A.  The top of the waterblock is clear acrylic, perfect if you plan on adding a little colour to your coolant, especially if you make use of the Frag-Harder Disco Lights.  Mounting is reasonably easy, with no dedicated in or out connector to confuse you, and tightening can be accomplished with a small pair of pliers, which you may find necessary.  The cooling performance was in line with the other coolers they've tested, though the C399A does lose some marks because of the need to re-tighten the mounting mechanism on occasion.  Check out the full review for details.

1510172608gyny6tgo4k_2_3_l.jpg

"The Phanteks Glacier C399A is a custom-designed water cooling block built specifically for AMD's new Threadripper processors. It has great looks, Frag-Harder Disco Lights, is built like a tank, and seems to be just what the doctor ordered when it comes to cooling overclocked Threadripper CPUs."

Here are some more Cases & Cooling reviews from around the web:

CASES & COOLING

Source: [H]ard|OCP

Podcast #475 - Intel with AMD graphics, Raja's move to Intel, and more!

Subject: General Tech | November 9, 2017 - 02:38 PM |
Tagged: video, titan xp, teleport, starcraft 2, raja koduri, radeon, qualcomm, podcast, nvidia, Intel, centriq, amplifi, amd

PC Perspective Podcast #475 - 11/09/17

Join us for discussion on Intel with AMD graphics, Raja's move to Intel, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Josh Walrath, Jeremy Hellstrom, Allyn Malventano, Ken Addison

Peanut Gallery: Alex Lustenberg, Jim Tanous

Program length: 1:29:42

Podcast topics of discussion:
  1. Week in Review:
  2. 0:35:30 CASPER
  3. News items of interest:
  4. Hardware/Software Picks of the Week
    1. 1:13:40 Allyn: Relatively cheap Samsung 82” (!!!) 4K TV
    2. 1:23:45 Josh: 1800X for $399!!!!!
    3. 1:24:50 Ken: The Void Wallet
  5. Closing/outro


Rumor: Hades Canyon NUC with AMD Graphics Spotted

Subject: General Tech, Processors | November 9, 2017 - 02:30 PM |
Tagged: Skull Canyon, nuc, kaby lake-g, Intel, Hades Canyon VR, Hades Canyon, EMIL, amd

Hot on the heels of Intel's announcement of new mobile-focused CPUs integrating AMD Radeon graphics, we have our first glimpse at a real-world design using this new chip.

HadesCanyon.jpg

Posted earlier today on the infamous Chinese tech forum Chiphell, this photo appears to be a small form factor PC design integrating the new Kaby Lake-G CPU and GPU solution.

Looking at the standard size components on the board like the Samsung M.2 SSD and the DDR4 SODIMM memory modules, we can start to get a better idea of the actual size of the Kaby Lake-G module.

Additionally, we get our first look at the type of power delivery infrastructure that devices with Kaby Lake-G are going to require. It's impressive how small the motherboard is, taking into account all of the power phases needed to feed the CPU, GPU, and HBM2 memory.

NUC_roadmap.png

Looking back at the leaked NUC roadmap from September, the picture starts to become clearer. While the "Hades Canyon" NUCs on this roadmap threw us for a loop when we first saw them months ago, it's now clear that they reference the new Kaby Lake-G line of products. The plethora of IO options from the roadmap, including dual Gigabit Ethernet and two Thunderbolt 3 ports, also seems to match closely with the leaked NUC photo above.

Using this information, we also now have a better idea of the thermal and power requirements for Kaby Lake-G. The base "Hades Canyon" NUC is listed with a 65W processor, while the "Hades Canyon VR" is listed as a 100W part. This suggests that these devices retain the same level of CPU performance as the existing 35W Kaby Lake-H quad-core mobile CPUs, leaving roughly 30W or 65W of budget for graphics.
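A minimal sketch of that arithmetic (the flat 35 W CPU allocation is our reading of the Kaby Lake-H figures, not an official Intel breakdown):

```python
# Rough graphics power budget for each rumored NUC, assuming the CPU
# portion stays at the Kaby Lake-H quad-core 35 W TDP (our assumption).
CPU_TDP_W = 35

for name, package_tdp_w in [("Hades Canyon", 65), ("Hades Canyon VR", 100)]:
    gpu_budget_w = package_tdp_w - CPU_TDP_W
    print(f"{name}: ~{gpu_budget_w} W left for GPU and HBM2")
```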

core-radeon-leak.png

These leaked 3DMark scores might give us an idea of the performance of the Hades Canyon VR NUC.

One thing is clear: Hades Canyon will be the highest-power NUC Intel has ever produced, surpassing the 45W Skull Canyon. Considering Skull Canyon's already unusual (for a NUC) footprint, I'm interested to see the final form of Hades Canyon as well as the performance it brings!

With what looks to be a first-half 2018 release date on the roadmap, it seems likely that we could see this NUC or other similar devices being shown off at CES in January. Stay tuned for more continuing coverage of Intel's Kaby Lake-G and the upcoming devices featuring it!

Source: Chiphell

A walrus on virtual rollerskates

Subject: General Tech | November 9, 2017 - 02:02 PM |
Tagged: hyneman, roller skates, VR, Vortrex Shoes

Jamie Hyneman is pitching a project to build prototype VR roller skates; not as a game, but as a way to save your shins while using a VR headset.  The design places motorized wheels under your heel and a track under the ball of your foot which moves your foot back to its starting position when you walk forward.  If all goes as planned, this should allow you to walk around virtual worlds without running into walls, chairs or spectators, and perhaps allow games to abandon the point-and-teleport locomotion currently in vogue.  There are a lot of challenges, as previous projects have discovered, but perhaps a Mythbuster can help out.  You can watch his pitch video over at The Register.

votrex_shoes.jpg

"Hyneman's pitch video points out that when one straps on goggles and gloves to enter virtual reality, your eyes are occupied and you therefore run the risk of bumping into stuff if you try to walk in meatspace while simulating walking in a virtual world. And bumping into stuff is dangerous."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register
Subject: Storage
Manufacturer: Intel

Introduction and Specifications

Back in April, we finally got our mitts on some actual 3D XPoint to test, but there was a catch: we had to do so remotely. The initial round of XPoint testing was done (by all review sites) on a set of machines located on the Intel campus. Intel had their reasons for this unorthodox review method, but we were satisfied that everything was done above board. Intel even went as far as walking me over to the very server that we would be remoting into for testing. Despite this, there were still a few skeptics out there, and today we can put all of that to bed.

DSC01136.jpg

This is a 750GB Intel Optane SSD DC P4800X - in the flesh and this time on *our* turf. I'll be putting it through the same initial round of tests we conducted remotely back in April. I intend to follow up at a later date with additional testing depth, as well as evaluating kernel response times across Windows and Linux (IRQ, Polling, Hybrid Polling, etc), but for now, we're here to confirm the results on our own testbed as well as evaluate if the higher capacity point takes any sort of hit to performance. We may actually see a performance increase in some areas as Intel has had several months to further tune the P4800X.

This video is for the earlier 375GB model launch, but all points apply here
(except that the 900P has now already launched)

Specifications:

specs.png

The baseline specs remain the same as they were back in April with a few significant notable exceptions:

The endurance figure for the 375GB capacity has nearly doubled to 20.5 PBW (PetaBytes Written), with the 750GB capacity logically following suit at 41 PBW. These figures are based on a 30 DWPD (Drive Writes Per Day) rating spanned across a 5-year period. The original product brief is located here, but do note that it may be out of date.
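Those endurance numbers fall straight out of the DWPD rating; a quick check:

```python
# PBW = capacity x drive-writes-per-day x days in the warranty period.
DWPD = 30    # drive writes per day
YEARS = 5

for capacity_gb in (375, 750):
    pbw = capacity_gb * DWPD * 365 * YEARS / 1_000_000  # GB written -> PB
    print(f"{capacity_gb} GB: {pbw:.1f} PBW")  # 20.5 and 41.1
```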

We now have official sequential throughput ratings: 2.0 GB/s writes and 2.4 GB/s reads.

We also have been provided detailed QoS figures and those will be noted as we cover the results throughout the review.

Read on for our review of the 750GB P4800X!

Intel Releases 15.60.0.4849 Graphics Drivers

Subject: Graphics Cards | November 8, 2017 - 09:29 PM |
Tagged: Intel, graphics drivers

When we report on graphics drivers, it’s almost always for AMD or NVIDIA. It’s Intel’s turn this time, however, with their latest 15.60 release. This version supports HDR playback on Netflix and YouTube, and it adds Windows Mixed Reality support for Intel HD 620 and higher.

inteltf2.jpg

I should note that this driver only supports Skylake-, Kaby Lake-, and Coffee Lake-based parts. I’m not sure whether this means that Haswell-and-earlier have been deprecated, but it looks like the latest ones that support those chips are from May.

In terms of game-specific optimizations? Intel has some to speak of. This driver focuses on The LEGO Ninjago Movie Video Game, Middle-earth: Shadow of War, Pro Evolution Soccer 2018, Call of Duty: WWII, Destiny 2, and Divinity: Original Sin 2. All of these name-drops are alongside Iris Pro, so I'm not sure how low you can go for any given title. Thankfully, many game distribution sites allow refunds for this very reason, although you still want to do a little research ahead-of-time.

That's all beside the point, though: Intel's advertising game-specific optimizations.

If you have a new Intel GPU, pick up the new drivers from Intel's website.

Source: Intel
Manufacturer: Intel

The Expected Unexpected

Last night we first received word that Raja had resigned from AMD (during a sabbatical) after the company had launched Vega.  The initial statement was that Raja would return to his position at AMD in a December/January timeframe.  During this time there was some doubt as to whether Raja would in fact come back to AMD, as “sabbaticals” in the tech world often lead the individual to take stock of their situation and move on to what they consider greener pastures.

raja_ryan.JPG

Raja has dropped by the PCPer offices in the past.

Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there.  This behavior is quite common in Silicon Valley, and Raja is no stranger to it.  Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI.  While there he worked on a variety of programs, including the original Radeon and the industry-changing Radeon 9700 series, finishing up with the strong HD 4000 series of parts.  During this time ATI was acquired by AMD and he became one of the top graphics gurus at that company.  In 2009 he quit AMD and moved on to Apple, where he was Director of Graphics Architecture, though little is known about what he actually did there.  During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology.  Apple could have been working on developing their own architecture at this point, which has recently shown up in the latest iPhone products.

In 2013 Raja rejoined AMD and became a corporate VP of Visual Computing, and in 2015 he was promoted to lead the Radeon Technology Group after Lisa Su became CEO of the company. While there, Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen their primary moneymakers start taking on water.  AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading.  On the discrete side AMD was able to compare favorably to NVIDIA with the HD 7000 and later R9 200 series of cards, but after NVIDIA released their Maxwell-based chips, AMD had a hard time keeping up.  The general consensus here is that the RTG group saw its headcount decreased by the company-wide cuts as well as a decrease in R&D funds.

Continue reading about Raja Koduri joining Intel...

What is the best GPU to beat Nazis with?

Subject: General Tech | November 8, 2017 - 03:26 PM |
Tagged: gaming, Wolfenstein 2, the new colossus, nvidia, amd, vulkan

Wolfenstein II: The New Colossus uses the Vulkan API, which could favour AMD's offerings; however, NVIDIA have vastly improved their support, so a win is not guaranteed.  The Guru of 3D tested the three resolutions most people are interested in, 1080p, 1440p and 4K, on 20 different GPUs in total.  They also took a look at the impact of 4-core versus 8-core CPUs, testing the i7-4790K and i7-5960X as well as the Ryzen 7 1800X, and even explored the amount of VRAM the game uses.  Drop by to see all their results as well as hints on dealing with the current bugs.

newcolossus_x64vk_2017_11_04_09_39_58_587.jpg

"We'll have a peek at the PC release of Wolfenstein II The New Colossus for Windows relative towards graphics card performance. The game is 100% driven by the Vulkan API. in this test twenty graphics cards are being tested and benchmarked."

Here is some more Tech News from around the web:

Gaming

Source: Guru of 3D

Qualcomm Centriq 2400 Arm-based Server Processor Begins Commercial Shipment

Subject: Processors | November 8, 2017 - 02:03 PM |
Tagged: qualcomm, centriq 2400, centriq, arm

At an event in San Jose on Wednesday, Qualcomm and partners officially announced that its Centriq 2400 server processor, based on the Arm architecture, is shipping to commercial clients. This launch is of note as the Centriq becomes the highest-profile and most partner-lauded Arm-based server CPU and platform to be released after years of buildup and excitement around several similar products. The Centriq is built specifically for enterprise cloud workloads, with an emphasis on high core count and high throughput, and will compete against Intel’s Xeon Scalable and AMD’s new EPYC platforms.

qc2.jpg

Paul Jacobs shows Qualcomm Centriq to press and analysts

Built on the same Samsung 10nm process technology that gave rise to the Snapdragon 835, the Centriq 2400 becomes the first server processor on that particular node. While Qualcomm and Samsung tout that as a significant selling point, on its own it doesn’t hold much value. Where it does come into play is the resulting power efficiency it brings to the table. Qualcomm claims that the Centriq 2400 will “offer exceptional performance-per-watt and performance-per-dollar” compared to competing server options.

The raw specifications and capabilities of the Centriq 2400 are impressive.

                  Centriq 2460         Centriq 2452         Centriq 2434
Architecture      ARMv8 (64-bit)       ARMv8 (64-bit)       ARMv8 (64-bit)
Core              Falkor               Falkor               Falkor
Process Tech      10nm (Samsung)       10nm (Samsung)       10nm (Samsung)
Socket            ?                    ?                    ?
Cores/Threads     48/48                46/46                40/40
Base Clock        2.2 GHz              2.2 GHz              2.3 GHz
Max Clock         2.6 GHz              2.6 GHz              2.5 GHz
Memory Tech       DDR4                 DDR4                 DDR4
Memory Speeds     2667 MHz, 128 GB/s   2667 MHz, 128 GB/s   2667 MHz, 128 GB/s
Cache             24MB L2 (split),     23MB L2 (split),     20MB L2 (split),
                  60MB L3              57.5MB L3            50MB L3
PCIe              32 lanes PCIe 3.0    32 lanes PCIe 3.0    32 lanes PCIe 3.0
Graphics          N/A                  N/A                  N/A
TDP               120W                 120W                 120W
MSRP              $1995                $1383                $888

Built from 18 billion transistors on a die area of just 398mm², the SoC holds 48 high-performance 64-bit cores running at frequencies as high as 2.6 GHz. (Interestingly, this appears to be about the same peak clock rate as all the Snapdragon processor cores we have seen on consumer products.) The cores are interconnected by a bi-directional ring bus that is reminiscent of the design Intel used on its Core processor family until Skylake-SP was brought to market. The bus supports 250 GB/s of aggregate bandwidth, and Qualcomm claims that this will alleviate any concern over congestion bottlenecks, even with the CPU cores under full load.

qc1.jpg

The caching system provides 512KB of L2 cache for every pair of CPU cores, essentially organizing them into dual-core blocks. 60MB of L3 cache handles core-to-core communications, and the cache is physically distributed around the die for faster access on average. A 6-channel DDR4 memory system, with unknown peak frequency, supports a total of 768GB of capacity.
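That six-channel interface also lines up with the 128 GB/s figure in the spec table above, assuming the table's 2667 MT/s rate and standard 64-bit DDR4 channels:

```python
# Peak DDR4 bandwidth: channels x transfer rate x bytes per transfer.
CHANNELS = 6
TRANSFERS_MT_S = 2667    # DDR4-2667, from the spec table
BYTES_PER_TRANSFER = 8   # 64-bit channel width

bandwidth_gb_s = CHANNELS * TRANSFERS_MT_S * BYTES_PER_TRANSFER / 1000
print(f"{bandwidth_gb_s:.0f} GB/s")  # 128
```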

Connectivity is supplied with 32 lanes of PCIe 3.0 and up to 6 PCIe devices.

As you should expect, the Centriq 2400 supports the ARM TrustZone secure operating environment and hypervisors for virtualized environments. With this many cores on a single chip, virtualization seems likely to be one of the key use cases for the server CPU.

Maybe most impressive are the power requirements of the Centriq 2400: it offers this level of performance and connectivity with just 120 watts of power.

With a price of $1995 for the Centriq 2460, Qualcomm claims that it can offer “4X better performance per dollar and up to 45% better performance per watt versus Intel’s highest performance Skylake processor, the Intel Xeon Platinum 8180.” That’s no small claim. The 8180 is a 28-core/56-thread CPU with a peak frequency of 3.8 GHz, a TDP of 205 watts, and a cost of $10,000 (not a typo).
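The per-dollar half of that claim is easy to sanity-check from list prices alone:

```python
# Price ratio between the two flagship parts, from the quoted list prices.
CENTRIQ_2460_USD = 1_995
XEON_8180_USD = 10_000

print(f"Xeon 8180 costs {XEON_8180_USD / CENTRIQ_2460_USD:.1f}x as much")  # 5.0x
```

At equal raw performance that price gap alone would yield about 5x the performance per dollar, so Qualcomm's 4X claim is roughly consistent with the 8180 holding a modest raw-performance edge.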

Qualcomm showed performance metrics from industry-standard SPECint measurements, both in raw single-thread configurations and in performance per dollar and per watt. I will have more on the performance story of Centriq later this week.

perf1.jpg

More important than simply showing hardware, Qualcomm had several partners on hand at the press event, as well as statements from important vendors like Alibaba, HPE, Google, Microsoft, and Samsung. Present to showcase applications running on Arm-based server platforms was an impressive list of key cloud services providers: Alibaba, LinkedIn, Cloudflare, American Megatrends Inc., Arm, Cadence Design Systems, Canonical, Chelsio Communications, Excelero, Hewlett Packard Enterprise, Illumina, MariaDB, Mellanox, Microsoft Azure, MongoDB, Netronome, Packet, Red Hat, ScyllaDB, 6WIND, Samsung, Solarflare, Smartcore, SUSE, Uber, and Xilinx.

The Centriq 2400 series of SoCs isn’t perfect for all general-purpose workloads, and that is something we have understood from the outset of this venture by Arm and its partners to bring the architecture to the enterprise markets. Qualcomm states that its parts are designed for “highly threaded cloud native applications that are developed as micro-services and deployed for scale-out.” The result is a set of workloads that covers a lot of ground:

  • Web front end with HipHop Virtual Machine
  • NoSQL databases including MongoDB, Varnish, Scylladb
  • Cloud orchestration and automation including Kubernetes, Docker, metal-as-a-service
  • Data analytics including Apache Spark
  • Deep learning inference
  • Network function virtualization
  • Video and image processing acceleration
  • Multi-core electronic design automation
  • High throughput compute bioinformatics
  • Neural class networks
  • OpenStack Platform
  • Scaleout Server SAN with NVMe
  • Server-based network offload

I will be diving more into the architecture, system designs, and partner announcements later this week as I think the Qualcomm Centriq 2400 family will have a significant impact on the future of the enterprise server markets.

Source: Qualcomm

A disHarmonious sound has arisen from Logitech's customers

Subject: General Tech | November 8, 2017 - 01:15 PM |
Tagged: logitech, iot, harmony link

If you own a Logitech Harmony Link and registered it then you already know, but for those who did not receive the email: your device will become unusable in March.  According to the information Ars Technica acquired, Logitech have decided not to renew a so-called "technology certificate license", which means the Link will no longer work.  It is not clear what this certificate is, nor why the lack of it will brick the Link, but that is what will happen.  Apparently if you have a Harmony Link which is still under warranty you can get a free upgrade to a Harmony Hub; if your Link is out of warranty you can get a 35% discount.  Why exactly one would want to purchase another one of these devices, which can be remotely destroyed, is an interesting question, especially as there was no monthly contract or service agreement suggesting this was a possibility when customers originally purchased their device.

brick.png

"Customers received an e-mail explaining that Logitech will 'discontinue service and support' for the Harmony Link as of March 16, 2018, adding that Harmony Link devices 'will no longer function after this date.'"

Here is some more Tech News from around the web:

Tech Talk


Source: Ars Technica

AmpliFi Announces Teleport, a Zero-Config VPN For Travelers

Subject: Networking | November 7, 2017 - 10:00 PM |
Tagged: wi-fi, vpn, ubiquiti, networking, mesh, Amplifi HD, amplifi

Earlier this year we took a look at the AmpliFi HD Home Wi-Fi System as part of our review of mesh wireless network devices. AmpliFi is the consumer-targeted brand of enterprise-focused Ubiquiti Networks, and while we preferred the eero Mesh Wi-Fi System in our initial look, the AmpliFi HD still offered great performance and some unique features. Today, AmpliFi is introducing a new member of its networking family called AmpliFi Teleport, a "plug-and-play" device that provides a secure connection to users' home networks from anywhere.

amplifi-teleport-front-back.jpg

Essentially a zero-configuration hardware-based VPN, the Teleport is linked with a user's AmpliFi account, which automatically creates a secure connection to the user's AmpliFi HD Wi-Fi System at home. Users take the small (75.85mm x 43mm x 39mm) Teleport device with them on the road, plug it in and connect it to the public Wi-Fi or Ethernet, and then connect their personal devices to the Teleport.

amplifi-specs.jpg

This provides a secure connection for private Internet traffic, but also allows access to local resources on the home network, including NAS devices, file shares, and home automation products. AmpliFi also touts that this would allow users to view their local streaming content even in locations where it would otherwise be unavailable -- e.g., watching U.S. Netflix shows while overseas, or streaming your favorite sports team while in a city where the game is blacked out.

In addition to traveling, AmpliFi notes that those with multiple homes or a vacation cottage could also benefit from Teleport, as it would allow you to share the same network resources and media streaming access regardless of location. In any case, a device like Teleport is still reliant on the speed and quality of your home and remote Internet connections, so there may be cases where network speeds are too low for the device to be useful. That, of course, is a factor that would plague any network-dependent service or device, so while it's not a mark against the Teleport specifically, it's something to keep in mind.

Teleport's features, while incredibly useful, are of course familiar to those experienced with VPNs and other secure remote connection methods. In terms of overall functionality, the AmpliFi Teleport isn't offering anything new here. The benefit, therefore, is its simple setup and configuration. Users don't need to set up and run a VPN on their home hardware, subscribe to a third-party VPN service, or know anything about encryption protocols, firewall configuration, or network tunneling. They simply plug the Teleport into power, follow the connection guide, and that's it -- they're up and running with a secure connection to their home network.
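For context, the do-it-yourself route the Teleport replaces usually means running a VPN server on your home router and hand-writing a tunnel configuration on each traveling device. A minimal sketch of that manual setup, using hypothetical WireGuard syntax (the keys, addresses, and hostname below are placeholders for illustration, not anything AmpliFi actually uses):

```ini
# Hypothetical WireGuard client config -- the kind of manual
# setup the Teleport's pairing flow is designed to automate.
[Interface]
# Private key generated for the traveling device (placeholder)
PrivateKey = <client-private-key>
# Tunnel address assigned to this client on the home VPN subnet
Address = 10.0.0.2/24

[Peer]
# Public key of the VPN server running on the home router (placeholder)
PublicKey = <home-server-public-key>
# Public hostname/port of the home connection (placeholder)
Endpoint = home.example.com:51820
# Route all traffic through the home network, as Teleport does
AllowedIPs = 0.0.0.0/0
# Keep the tunnel alive through NAT on hotel/public Wi-Fi
PersistentKeepalive = 25
```

Every field here is something a user would otherwise have to generate, exchange, and troubleshoot by hand; Teleport's account-based pairing handles the equivalent steps behind the scenes.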

amplifi-teleport-package.jpg

You'll pay for this convenience, however, as the Teleport isn't cheap. It's launching today on Kickstarter with "early bird" pricing of $199, which will get you the Teleport device and the required AmpliFi HD router. A second round of early purchasers will see that price increase to $229, while final pricing is $269. Again, that's just for the Teleport and the router. A kit including two AmpliFi mesh access points is $399. There's no word yet on standalone pricing for the Teleport itself for those who already have an AmpliFi mesh network at home.

Regardless of the package, once you have the hardware there's no extra cost or subscription fee to use the Teleport, so frequent travelers might find the system worth it when compared to some other subscription-based VPN services.

The AmpliFi Teleport is expected to ship to early purchasers in December. We don't have the hardware in hand yet for performance testing, but AmpliFi has promised to loan us review samples as the product gets closer to shipping. Check out the Teleport Kickstarter page and AmpliFi's website for more information.

Source: Kickstarter

More GTX 1070 Ti overclocking

Subject: Graphics Cards | November 7, 2017 - 03:21 PM |
Tagged: pascal, nvidia, gtx 1070 ti, geforce, msi

NVIDIA chose to limit the release of the GTX 1070 Ti to reference clock speeds, with every partner card sporting the same clocks regardless of model.  That does not mean the manufacturers skimped on the features which help you overclock successfully.  As a perfect example, the MSI GTX 1070 Ti GAMING TITANIUM is built with Hi-C caps, Super Ferrite Chokes, Japanese solid caps, and a 10-phase PWM design.  Once [H]ard|OCP boosted the power delivered to the card, this resulted in an impressive overclock of 2050MHz on the GPU and a 9GHz effective memory frequency.  That overclock is enough to meet or even exceed the performance of a stock GTX 1080 or Vega 64 in most of the games they tested.

1509608832rtdxf9e1ls_1_16_l.jpg

"NVIDIA is launching the GeForce GTX 1070 Ti today, and we’ve got a custom retail MSI GeForce GTX 1070 Ti GAMING TITANIUM video card to test and overclock, yes overclock, to the max. We’ll make comparisons against GTX 1080/1070, AMD Radeon RX Vega 64 and 56 for a complete review."

Here are some more Graphics Card articles from around the web:

Graphics Cards


Source: [H]ard|OCP