
Podcast #475 - Intel with AMD graphics, Raja's move to Intel, and more!

Subject: General Tech | November 9, 2017 - 02:38 PM |
Tagged: video, titan xp, teleport, starcraft 2, raja koduri, radeon, qualcomm, podcast, nvidia, Intel, centriq, amplifi, amd

PC Perspective Podcast #475 - 11/09/17

Join us for discussion on Intel with AMD graphics, Raja's move to Intel, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Josh Walrath, Jeremy Hellstrom, Allyn Malventano, Ken Addison

Peanut Gallery: Alex Lustenberg, Jim Tanous

Program length: 1:29:42

Podcast topics of discussion:
  1. Week in Review:
  2. 0:35:30 CASPER
  3. News items of interest:
  4. Hardware/Software Picks of the Week
    1. 1:13:40 Allyn: Relatively cheap Samsung 82” (!!!) 4K TV
    2. 1:23:45 Josh: 1800X for $399!!!!!
    3. 1:24:50 Ken: The Void Wallet
  5. Closing/outro


Rumor: Hades Canyon NUC with AMD Graphics Spotted

Subject: General Tech, Processors | November 9, 2017 - 02:30 PM |
Tagged: Skull Canyon, nuc, kaby lake-g, Intel, Hades Canyon VR, Hades Canyon, EMIB, amd

Hot on the heels of Intel's announcement of new mobile-focused CPUs integrating AMD Radeon graphics, we have our first glimpse at a real-world design using this new chip.

HadesCanyon.jpg

Posted earlier today on the infamous Chinese tech forum Chiphell, this photo appears to show a small form factor PC design integrating the new Kaby Lake-G CPU and GPU solution.

Looking at the standard-size components on the board, like the Samsung M.2 SSD and the DDR4 SODIMM memory modules, we can start to get a better idea of the actual size of the Kaby Lake-G module.

Additionally, we get our first look at the type of power delivery infrastructure that devices with Kaby Lake-G are going to require. It's impressive how small the motherboard is, taking into account all of the power phases needed to feed the CPU, GPU, and HBM2 memory.

NUC_roadmap.png

Looking back at the leaked NUC roadmap from September, the picture starts to become more clear. While the "Hades Canyon" NUCs on this roadmap threw us for a loop when we first saw them months ago, it's now clear that they are referencing the new Kaby Lake-G line of products. The plethora of I/O options from the roadmap, including dual Gigabit Ethernet and two Thunderbolt 3 ports, also seems to match closely with the leaked NUC photo above.

Using this information, we also now have a better idea of the thermal and power requirements for Kaby Lake-G. The base "Hades Canyon" NUC is listed with a 65W processor, while the "Hades Canyon VR" is listed as a 100W part. This suggests these devices retain the same level of CPU performance as the existing 35W Kaby Lake-H quad-core mobile CPUs, leaving roughly 30W or 65W of budget for graphics.
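As a rough sanity check, here is a back-of-the-envelope sketch of the graphics power budget those package TDPs imply, assuming the 35W Kaby Lake-H figure above holds for the CPU portion:

```python
# Back-of-the-envelope estimate of the graphics power budget implied by the
# leaked NUC TDPs, assuming a 35W Kaby Lake-H CPU portion (assumption, per above).
KABY_LAKE_H_TDP_W = 35

for name, package_tdp_w in [("Hades Canyon", 65), ("Hades Canyon VR", 100)]:
    gpu_budget_w = package_tdp_w - KABY_LAKE_H_TDP_W
    print(f"{name}: {package_tdp_w}W package -> ~{gpu_budget_w}W left for the GPU and HBM2")

# Hades Canyon: 65W package -> ~30W left for the GPU and HBM2
# Hades Canyon VR: 100W package -> ~65W left for the GPU and HBM2
```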

core-radeon-leak.png

These leaked 3DMark scores might give us an idea of the performance of the Hades Canyon VR NUC.

One thing is clear: Hades Canyon will be the highest power NUC Intel has ever produced, surpassing the 45W Skull Canyon. Considering Skull Canyon's footprint was already unusual for a NUC, I'm interested to see the final form of Hades Canyon as well as the performance it brings!

With what looks to be a first half of 2018 release date on the roadmap, it seems likely that we could see this NUC or other similar devices being shown off at CES in January. Stay tuned for continuing coverage of Intel's Kaby Lake-G and upcoming devices featuring it!

Source: Chiphell

A walrus on virtual rollerskates

Subject: General Tech | November 9, 2017 - 02:02 PM |
Tagged: hyneman, roller skates, VR, Vortrex Shoes

Jamie Hyneman is pitching a project to build prototype VR roller skates; not as a game but as a way to save your shins while using a VR headset.  The design places motorized wheels under your heel and a track under the ball of your foot which will move your foot back to its starting position as you walk forward.  If all goes as planned, this should allow you to walk around in virtual worlds without running into walls, chairs, or spectators, and perhaps allow games to abandon the point-and-teleport locomotion currently in vogue.  There are a lot of challenges, as previous projects have discovered, but perhaps a Mythbuster can help out.  You can watch his pitch video over at The Register.

votrex_shoes.jpg

"Hyneman's pitch video points out that when one straps on goggles and gloves to enter virtual reality, your eyes are occupied and you therefore run the risk of bumping into stuff if you try to walk in meatspa ce while simulating walking in a virtual world. And bumping into stuff is dangerous."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register
Subject: Storage
Manufacturer: Intel

Introduction and Specifications

Back in April, we finally got our mitts on some actual 3D XPoint to test, but there was a catch. We had to do so remotely. The initial round of XPoint testing was done (by all review sites) on a set of machines located on the Intel campus. Intel had their reasons for this unorthodox review method, but we were satisfied that everything was done above board. Intel even went as far as walking me over to the very server that we would be remoting into for testing. Despite this, there were still a few skeptics out there, and today we can put all of that to bed.

DSC01136.jpg

This is a 750GB Intel Optane SSD DC P4800X - in the flesh and this time on *our* turf. I'll be putting it through the same initial round of tests we conducted remotely back in April. I intend to follow up at a later date with additional testing depth, as well as evaluating kernel response times across Windows and Linux (IRQ, Polling, Hybrid Polling, etc), but for now, we're here to confirm the results on our own testbed as well as evaluate if the higher capacity point takes any sort of hit to performance. We may actually see a performance increase in some areas as Intel has had several months to further tune the P4800X.

This video is for the earlier 375GB model launch, but all points apply here
(except that the 900P has now already launched)

Specifications:

specs.png

The baseline specs remain the same as they were back in April, with a few notable exceptions:

The endurance figure for the 375GB capacity has nearly doubled to 20.5 PBW (PetaBytes Written), with the 750GB capacity logically following suit at 41 PBW. These figures are based on a 30 DWPD (Drive Writes Per Day) rating spanned across a 5-year period. The original product brief is located here, but do note that it may be out of date.
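Those endurance figures check out with simple arithmetic; here is a quick sketch of the DWPD-to-PBW conversion (decimal units are my assumption):

```python
# Convert a DWPD (Drive Writes Per Day) rating into PetaBytes Written over the
# warranty period. Decimal units (1 PB = 1,000,000 GB) are assumed here.
def petabytes_written(capacity_gb, dwpd=30, years=5):
    total_gb = capacity_gb * dwpd * 365 * years
    return total_gb / 1_000_000

for capacity_gb in (375, 750):
    print(f"{capacity_gb}GB drive: ~{petabytes_written(capacity_gb):.1f} PBW")

# 375GB drive: ~20.5 PBW
# 750GB drive: ~41.1 PBW
```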

We now have official sequential throughput ratings: 2.0 GB/s writes and 2.4 GB/s reads.

We also have been provided detailed QoS figures and those will be noted as we cover the results throughout the review.

Read on for our review of the 750GB P4800X!

Intel Releases 15.60.0.4849 Graphics Drivers

Subject: Graphics Cards | November 8, 2017 - 09:29 PM |
Tagged: Intel, graphics drivers

When we report on graphics drivers, it’s almost always for AMD or NVIDIA. It’s Intel’s turn this time, however, with their latest 15.60 release. This version supports HDR playback on Netflix and YouTube, and it adds Windows Mixed Reality support for Intel HD 620 and higher.

inteltf2.jpg

I should note that this driver only supports Skylake-, Kaby Lake-, and Coffee Lake-based parts. I’m not sure whether this means that Haswell and earlier have been deprecated, but it looks like the latest drivers that support those chips are from May.

In terms of game-specific optimizations? Intel has some to speak of. This driver focuses on The LEGO Ninjago Movie Video Game, Middle-earth: Shadow of War, Pro Evolution Soccer 2018, Call of Duty: WWII, Destiny 2, and Divinity: Original Sin 2. All of these name-drops are alongside Iris Pro, so I'm not sure how low you can go for any given title. Thankfully, many game distribution sites allow refunds for this very reason, although you still want to do a little research ahead-of-time.

That's all beside the point, though: Intel's advertising game-specific optimizations.

If you have a new Intel GPU, pick up the new drivers from Intel's website.

Source: Intel
Manufacturer: Intel

The Expected Unexpected

Last night we first received word that Raja had resigned from AMD during his sabbatical, which he took after the company launched Vega.  The initial statement was that Raja would come back to resume his position at AMD in a December/January timeframe.  During this time there was some doubt as to whether Raja would in fact come back to AMD, as “sabbaticals” in the tech world often lead the individual to take stock of their situation and move on to what they consider to be greener pastures.

raja_ryan.JPG

Raja has dropped by the PCPer offices in the past.

Initially it was thought that Raja would take the time off and then eventually jump to another company and tackle the issues there.  This behavior is quite common in Silicon Valley, and Raja is no stranger to it.  Raja cut his teeth on 3D graphics at S3, but in 2001 he moved to ATI.  While there he worked on a variety of programs, including the original Radeon, the industry-changing Radeon 9700 series, and finishing up with the strong HD 4000 series of parts.  During this time ATI was acquired by AMD and he became one of the top graphics gurus at that company.  In 2009 he quit AMD and moved on to Apple.  He was Director of Graphics Architecture at Apple, but little is known about what he actually did.  During that time Apple utilized AMD GPUs and licensed Imagination Technologies graphics technology.  Apple could have been working on developing their own architecture at this point, which has recently shown up in the latest iPhone products.

In 2013 Raja rejoined AMD and became a corporate VP of Visual Computing, but in 2015 he was promoted to lead the Radeon Technologies Group after Lisa Su became CEO of the company. While there Raja worked to get AMD back on an even footing under pretty strained conditions. AMD had not had the greatest of years and had seen their primary moneymakers start taking on water.  AMD had competitive graphics for the most part, and the Radeon technology integrated into AMD’s APUs truly was class leading.  On the discrete side AMD was able to compare favorably to NVIDIA with the HD 7000 and later R9 200 series of cards.  After NVIDIA released their Maxwell based chips, AMD had a hard time keeping up.  The general consensus here is that the RTG group saw its headcount reduced by the company-wide cuts as well as a decrease in R&D funds.

Continue reading about Raja Koduri joining Intel...

What is the best GPU to beat Nazis with?

Subject: General Tech | November 8, 2017 - 03:26 PM |
Tagged: gaming, Wolfenstein 2, the new colossus, nvidia, amd, vulkan

Wolfenstein II The New Colossus uses the Vulkan API, which could favour AMD's offerings; however, NVIDIA have vastly improved their support, so a win is not guaranteed.  The Guru of 3D tested the three resolutions most people are interested in, 1080p, 1440p, and 4K, on 20 different GPUs in total.  They also took a look at the impact of 4-core versus 8-core CPUs, testing the i7-4790K and i7-5960X as well as the Ryzen 7 1800X, and even explored the amount of VRAM the game uses.  Drop by to see all their results as well as hints on dealing with the current bugs.

newcolossus_x64vk_2017_11_04_09_39_58_587.jpg

"We'll have a peek at the PC release of Wolfenstein II The New Colossus for Windows relative towards graphics card performance. The game is 100% driven by the Vulkan API. in this test twenty graphics cards are being tested and benchmarked."

Here is some more Tech News from around the web:

Gaming

Source: Guru of 3D

Qualcomm Centriq 2400 Arm-based Server Processor Begins Commercial Shipment

Subject: Processors | November 8, 2017 - 02:03 PM |
Tagged: qualcomm, centriq 2400, centriq, arm

At an event in San Jose on Wednesday, Qualcomm and partners officially announced that its Centriq 2400 server processor based on the Arm-architecture was shipping to commercial clients. This launch is of note as it becomes the highest-profile and most partner-lauded Arm-based server CPU and platform to be released after years of buildup and excitement around several similar products. The Centriq is built specifically for enterprise cloud workloads with an emphasis on high core count and high throughput and will compete against Intel’s Xeon Scalable and AMD’s new EPYC platforms.

qc2.jpg

Paul Jacobs shows Qualcomm Centriq to press and analysts

Built on the same 10nm process technology from Samsung that gave rise to the Snapdragon 835, the Centriq 2400 becomes the first server processor on that particular node. While Qualcomm and Samsung tout that as a significant selling point, on its own it doesn’t hold much value. Where it does come into play is the resulting power efficiency it brings to the table, which shapes the product's positioning. Qualcomm claims that the Centriq 2400 will “offer exceptional performance-per-watt and performance-per-dollar” compared to competing server options.

The raw specifications and capabilities of the Centriq 2400 are impressive.

                 Centriq 2460                Centriq 2452                Centriq 2434
Architecture     ARMv8 (64-bit), Falkor      ARMv8 (64-bit), Falkor      ARMv8 (64-bit), Falkor
Process Tech     10nm (Samsung)              10nm (Samsung)              10nm (Samsung)
Socket           ?                           ?                           ?
Cores/Threads    48/48                       46/46                       40/40
Base Clock       2.2 GHz                     2.2 GHz                     2.3 GHz
Max Clock        2.6 GHz                     2.6 GHz                     2.5 GHz
Memory Tech      DDR4                        DDR4                        DDR4
Memory Speeds    2667 MHz, 128 GB/s          2667 MHz, 128 GB/s          2667 MHz, 128 GB/s
Cache            24MB L2 (split), 60MB L3    23MB L2 (split), 57.5MB L3  20MB L2 (split), 50MB L3
PCIe             32 lanes PCIe 3.0           32 lanes PCIe 3.0           32 lanes PCIe 3.0
Graphics         N/A                         N/A                         N/A
TDP              120W                        120W                        120W
MSRP             $1995                       $1383                       $888

Built from 18 billion transistors in a die area of just 398mm2, the SoC holds 48 high-performance 64-bit cores running at frequencies as high as 2.6 GHz. (Interestingly, this appears to be about the same peak clock rate as all the Snapdragon processor cores we have seen on consumer products.) The cores are interconnected by a bi-directional ring bus that is reminiscent of the interconnect Intel used on its Core processor family up until Skylake-SP was brought to market. The bus supports 250 GB/s of aggregate bandwidth, and Qualcomm claims that this will alleviate any concern over congestion bottlenecks, even with the CPU cores under full load.

qc1.jpg

The caching system provides 512KB of L2 cache for every pair of CPU cores, essentially organizing them into dual-core blocks. 60MB of L3 cache handles core-to-core communications, and the cache is physically distributed around the die for on-average faster access. A 6-channel DDR4 memory system, with unknown peak frequency, supports a total of 768GB of capacity.

Connectivity is supplied with 32 lanes of PCIe 3.0 and up to 6 PCIe devices.

As you should expect, the Centriq 2400 supports the ARM TrustZone secure operating environment and hypervisors for virtualized environments. With this many cores on a single chip, virtualization seems likely to be one of the key use cases for the server CPU.

Maybe most impressive are the power requirements of the Centriq 2400. It can offer this level of performance and connectivity with just 120 watts of power.

With a price of $1995 for the Centriq 2460, Qualcomm claims that it can offer “4X better performance per dollar and up to 45% better performance per watt versus Intel’s highest performance Skylake processor, the Intel Xeon Platinum 8180.” That’s no small claim. The 8180 is a 28-core/56-thread CPU with a peak frequency of 3.8 GHz, a TDP of 205 watts, and a cost of $10,000 (not a typo).
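It is worth unpacking what those ratios imply about raw throughput. A quick sketch, under my own assumption that list price and TDP are the denominators Qualcomm used:

```python
# What Qualcomm's perf-per-dollar and perf-per-watt claims imply about raw
# throughput vs. the Xeon Platinum 8180, assuming list price and TDP are the
# denominators. This is an inference from the marketing claims, not measured data.
centriq_price, xeon_price = 1995, 10000   # USD, per the article
centriq_tdp, xeon_tdp = 120, 205          # watts

perf_ratio_from_dollar = 4.0 * (centriq_price / xeon_price)   # 4x perf per dollar
perf_ratio_from_watt = 1.45 * (centriq_tdp / xeon_tdp)        # +45% perf per watt

print(f"Implied raw performance vs 8180 (from perf/$): {perf_ratio_from_dollar:.2f}x")
print(f"Implied raw performance vs 8180 (from perf/W): {perf_ratio_from_watt:.2f}x")
# Roughly 0.80x and 0.85x, i.e. somewhat below the 8180 in absolute throughput,
# but at a fraction of the price and power.
```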

Qualcomm showed performance metrics from industry-standard SPECint measurements, both in raw single-thread configurations and in performance per dollar and per watt. I will have more on the performance story of Centriq later this week.


perf1.jpg

More important than simply showing hardware, Qualcomm had several partners on hand at the press event, as well as statements from important vendors like Alibaba, HPE, Google, Microsoft, and Samsung. Present to showcase applications running on the Arm-based server platforms was an impressive list of the key cloud services providers: Alibaba, LinkedIn, Cloudflare, American Megatrends Inc., Arm, Cadence Design Systems, Canonical, Chelsio Communications, Excelero, Hewlett Packard Enterprise, Illumina, MariaDB, Mellanox, Microsoft Azure, MongoDB, Netronome, Packet, Red Hat, ScyllaDB, 6WIND, Samsung, Solarflare, Smartcore, SUSE, Uber, and Xilinx.

The Centriq 2400 series of SoCs isn’t perfect for all general-purpose workloads, and that is something we have understood from the outset of this venture by Arm and its partners to bring this architecture to the enterprise markets. Qualcomm states that its parts are designed for “highly threaded cloud native applications that are developed as micro-services and deployed for scale-out.” The result is a set of workloads that covers a lot of ground:

  • Web front end with HipHop Virtual Machine
  • NoSQL databases including MongoDB, Varnish, Scylladb
  • Cloud orchestration and automation including Kubernetes, Docker, metal-as-a-service
  • Data analytics including Apache Spark
  • Deep learning inference
  • Network function virtualization
  • Video and image processing acceleration
  • Multi-core electronic design automation
  • High throughput compute bioinformatics
  • Neural class networks
  • OpenStack Platform
  • Scaleout Server SAN with NVMe
  • Server-based network offload

I will be diving more into the architecture, system designs, and partner announcements later this week as I think the Qualcomm Centriq 2400 family will have a significant impact on the future of the enterprise server markets.

Source: Qualcomm

A disHarmonious sound has arisen from Logitech's customers

Subject: General Tech | November 8, 2017 - 01:15 PM |
Tagged: logitech, iot, harmony link

If you own a Logitech Harmony Link and registered it, then you already know; for those who did not receive the email, you should know your device will become unusable in March.  According to the information Ars Technica acquired, Logitech have decided not to renew a so-called "technology certificate license", which will mean the Link will no longer work.  It is not clear what this certificate is, nor why the lack of it will brick the Link, but that is what will happen.  Apparently if you have a Harmony Link which is still under warranty you can get a free upgrade to a Harmony Hub; if your Link is out of warranty then you can get a 35% discount.  Why exactly one would want to purchase another one of these devices which can be remotely destroyed is an interesting question, especially as there was no monthly contract or service agreement suggesting this was a possibility when customers originally purchased their device.

brick.png

"Customers received an e-mail explaining that Logitech will "discontinue service and support" for the Harmony Link as of March 16, 2018, adding that Harmony Link devices "will no longer function after this date."

Here is some more Tech News from around the web:

Tech Talk

 

Source: Ars Technica

AmpliFi Announces Teleport, a Zero-Config VPN For Travelers

Subject: Networking | November 7, 2017 - 10:00 PM |
Tagged: wi-fi, vpn, ubiquiti, networking, mesh, Amplifi HD, amplifi

Earlier this year we took a look at the AmpliFi HD Home Wi-Fi System as part of our review of mesh wireless network devices. AmpliFi is the consumer-targeted brand of enterprise-focused Ubiquiti Networks, and while we preferred the eero Mesh Wi-Fi System in our initial look, the AmpliFi HD still offered great performance and some unique features. Today, AmpliFi is introducing a new member of its networking family called AmpliFi Teleport, a "plug-and-play" device that provides a secure connection to users' home networks from anywhere.

amplifi-teleport-front-back.jpg

Essentially a zero-configuration hardware-based VPN, the Teleport is linked with a user's AmpliFi account, which automatically creates a secure connection to the user's AmpliFi HD Wi-Fi System at home. Users take the small (75.85mm x 43mm x 39mm) Teleport device with them on the road, plug it in and connect it to the public Wi-Fi or Ethernet, and then connect their personal devices to the Teleport.

amplifi-specs.jpg

This provides a secure connection for private Internet traffic, but also allows access to local resources on the home network, including NAS devices, file shares, and home automation products. AmpliFi also touts that this would allow users to view their local streaming content even in locations where it would otherwise be unavailable -- e.g., watching U.S. Netflix shows while overseas, or streaming your favorite sports team while in a city where the game is blacked out.

In addition to traveling, AmpliFi notes that those with multiple homes or a vacation cottage could also benefit from Teleport, as it would allow you to share the same network resources and media streaming access regardless of location. In any case, a device like Teleport is still reliant on the speed and quality of your home and remote Internet connections, so there may be cases where network speeds are so low that it makes the device useless. That, of course, is a factor that would plague any network-dependent service or device, so while it's not a mark against the Teleport, it's something to keep in mind.

Teleport's features, while incredibly useful, are of course familiar to those experienced with VPNs and other secure remote connection methods. In terms of overall functionality, the AmpliFi Teleport isn't offering anything new here. The benefit, therefore, is its simple setup and configuration. Users don't need to set up and run a VPN on their home hardware, subscribe to a third-party VPN service, or know anything about encryption protocols, firewall configuration, or network tunneling. They simply need to plug the Teleport into power, follow the connection guide, and that's it -- they're up and running with a secure connection to their home network.

amplifi-teleport-package.jpg

You'll pay for this convenience, however, as the Teleport isn't cheap. It's launching today on Kickstarter with "early bird" pricing of $199, which will get you the Teleport device and the required AmpliFi HD router. A second round of early purchasers will see that price increase to $229, while final pricing is $269. Again, that's just for the Teleport and the router. A kit including two AmpliFi mesh access points is $399. There's no word on standalone pricing for the Teleport device only for those who already have an AmpliFi mesh network at home.

Regardless of the package, once you have the hardware there's no extra cost or subscription fee to use the Teleport, so frequent travelers might find the system worth it when compared to some other subscription-based VPN services.

The AmpliFi Teleport is expected to ship to early purchasers in December. We don't have the hardware in hand yet for performance testing, but AmpliFi has promised to loan us review samples as the product gets closer to shipping. Check out the Teleport Kickstarter page and AmpliFi's website for more information.

Source: Kickstarter

More GTX 1070 Ti overclocking

Subject: Graphics Cards | November 7, 2017 - 03:21 PM |
Tagged: pascal, nvidia, gtx 1070 ti, geforce, msi

NVIDIA chose to limit the GTX 1070 Ti release to reference clock speeds, with every card sporting the same clocks regardless of the model.  That does not mean that the manufacturers skimped on the features which help you overclock successfully.  As a perfect example, the MSI GTX 1070 Ti GAMING TITANIUM was built with Hi-C caps, Super Ferrite Chokes, Japanese solid caps, and a 10-phase PWM.  This resulted in an impressive overclock of 2050MHz on the GPU and a memory frequency of 9GHz once [H]ard|OCP boosted the power delivered to the card.  That boost is enough to meet or even exceed the performance of a stock GTX 1080 or Vega 64 in most of the games they tested.

1509608832rtdxf9e1ls_1_16_l.jpg

"NVIDIA is launching the GeForce GTX 1070 Ti today, and we’ve got a custom retail MSI GeForce GTX 1070 Ti GAMING TITANIUM video card to test and overclock, yes overclock, to the max. We’ll make comparisons against GTX 1080/1070, AMD Radeon RX Vega 64 and 56 for a complete review."

Here are some more Graphics Card articles from around the web:

Graphics Cards

 

Source: [H]ard|OCP

Light And Dark-Side Collector’s Edition NVIDIA TITAN Xp for Pre-Order November 8th

Subject: Graphics Cards | November 7, 2017 - 02:04 PM |
Tagged: Star Wars, nvidia, titan xp, disney

Priced at $1200 each, these new cards let you power your gaming rig with either the light side of the Force or the dark side.  NVIDIA have announced two new Titan Xp GPUs: one battle-scarred and lightsaber green, representing the Rebel Alliance, and a pristine black card which glows a familiar red.  It would seem that they are a bit behind the times, as neither of those organizations exists in the current Star Wars timeline, but that doesn't make them any less attractive to fans.

unnamed.jpg

The specifications are familiar: a Pascal-based GP102 GPU with 3840 CUDA cores @ 1.6GHz and 12GB of GDDR5X memory running at 11.4Gbps.  The look is unique, however, so if you are a big fan of Star Wars then this might just be something you want to consider.  The full PR and launch video are just below.

side.PNG

Tatooine, Outer Rim Territory—NVIDIA has announced two new collector’s edition NVIDIA TITAN Xp GPUs created for the ultimate Star Wars fan. The new Jedi Order™ and Galactic Empire™ editions of the NVIDIA TITAN Xp have been crafted to reflect the look and feel of the Star Wars galaxy.

These new Star Wars collector’s edition GPUs pay homage to the light side/dark side dichotomy, and contain hints of the Star Wars galaxy, such as the hilt of Luke Skywalker's lightsaber and light panels reminiscent of the Death Star.

The Jedi Order GPU simulates the wear and tear and battle-worn finish of many items used by the Rebel Alliance, resulting from its diecast aluminum cover being subjected to an extensive, corrosive salt spray.

Conversely, the Galactic Empire GPU’s finish features simple, clean lines, emulating the high-end, orderly nature of the resource-rich Empire.

Both versions have multiple windowed areas to showcase internals and lighting, evoking each faction’s lightsabers, green and red, respectively. The finishes of both versions took over a year to perfect.

The retail box packaging also pays homage to the light and dark sides of the Force, with the Jedi Order edition bathed in white, and the Galactic Empire edition bathed in black.

Exclusive Pre-Order Access for GeForce Experience Users
GeForce Experience users get exclusive pre-order access to purchase(1) the Jedi Order and Galactic Empire TITAN Xp editions before the cards are broadly available in mid-November. Starting tomorrow, GeForce Experience users can purchase one card of each design by using their log-in credentials in the NVIDIA store.

Power! Unlimited Power!
The Jedi Order and Galactic Empire TITAN Xp GPUs use the NVIDIA Pascal-based GP102 GPU, each with 3,840 CUDA cores running at 1.6GHz and 12GB of GDDR5X memory running at 11.4Gbps.

Their staggering 12TFLOPs of processing power under the hood allows Star Wars fans to play any of today’s most cutting-edge titles at the highest resolution with the highest detail quality turned on.

Priced at $1,200, each edition also includes a collectible electroformed metal badge containing the insignia of their preferred alliance.
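For those curious where the 12 TFLOPS figure in the PR comes from, it falls directly out of the core count and clock speed listed above. A quick sketch (the 384-bit memory bus width is my assumption, carried over from the standard TITAN Xp, and not stated in the announcement):

```python
# Peak FP32 throughput and memory bandwidth for the listed GP102 configuration.
cuda_cores = 3840
boost_clock_ghz = 1.6
flops_per_core_per_clock = 2  # one fused multiply-add (2 FLOPs) per clock

tflops = cuda_cores * boost_clock_ghz * flops_per_core_per_clock / 1000
print(f"Peak FP32: ~{tflops:.1f} TFLOPS")   # ~12.3 TFLOPS

mem_speed_gbps = 11.4
bus_width_bits = 384  # assumption: same bus width as the regular TITAN Xp
bandwidth_gb_s = mem_speed_gbps * bus_width_bits / 8
print(f"Memory bandwidth: ~{bandwidth_gb_s:.0f} GB/s")  # ~547 GB/s
```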

Source: NVIDIA

Sir, place the turtle on the ground and back away slowly

Subject: General Tech | November 7, 2017 - 01:35 PM |
Tagged: machine learning, ai

Not to be outdone by the research conducted by Japan's Kyushu University, which led to the "frog is not a truck" portion of last week's podcast, MIT researchers have also been tormenting image recognition software.  Their findings were a little more worrisome, as a 3D printed turtle was identified as a rifle, which could lead to some very bad situations in airports or other secure locations.  In this case, instead of adding a few pixels to the image, they introduced different angles and lighting conditions which created enough noise to completely fool Google's image recognition AI, Inception.  The printed turtle was misidentified because of the texture which they chose, showing that this issue extends beyond photos to include physical objects.  Pop by The Register for more details as well as an ingredient you never want to see on your toast.

Capture.PNG

"Students at MIT in the US claim they have developed an algorithm for creating 3D objects and pictures that trick image-recognition systems into severely misidentifying them. Think toy turtles labeled rifles, and baseballs as cups of coffee."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

1MORE driver will cost you, the Quad drivers are twice the price of the Triple

Subject: General Tech | November 6, 2017 - 04:54 PM |
Tagged: audio, 1MORE, Quad Drivers, in-ear

These new in-ear headphones from 1MORE have a single carbon driver for mids and lows with three balanced armatures to handle the high and ultra-high frequencies.  This is not a surround sound implementation but instead an attempt to provide very high quality sound from in-ear monitors.  TechPowerUp tested these Dolby Certified headphones and found them to be an improvement on the already impressive Triple Driver model, even powered by a smartphone; they do not require a DAC or pre-amp to provide great sound.  On the other hand, $200 is steep for this style of headphone so read through the review before jumping on Amazon.

1MORE-Quad-Driver-Diagram.jpg

"On the wings of the raging success they had with their $100 Triple Driver In-Ear Headphones, currently considered one of the best IEMs in terms of price-performance, 1MORE brings us their even more refined sibling equipped with an additional balanced armature. Do the 1MORE Quad Drivers have what it takes to justify a price bump to $200?"

Here is some more Tech News from around the web:

Audio Corner

Source: TechPowerUp

Flash cache compatibility is Crucial to the Momentum of adoption

Subject: Storage | November 6, 2017 - 03:22 PM |
Tagged: crucial, Momentum Cache, NVMe, Crucial Storage Executive

The SSD Review noticed something very interesting in the latest update to Crucial's Storage Executive software: the Momentum Cache feature now works with a variety of non-Crucial NVMe SSDs.  The software allows your system to turn part of your RAM into a cache so that reads and writes can initially be served from that cache, which results in improved performance thanks to RAM's significantly quicker response time.  If you have a Crucial SSD installed as well as another NVMe SSD, and are using the default Windows NVMe driver, you can set up caching on the non-Crucial SSD if you so desire.  Stop by for a look at the performance impact as well as a list of the drives which have been successfully tested.
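To illustrate the general idea of a RAM-backed write cache sitting in front of slower storage, here is a minimal conceptual sketch; it is not how Crucial's Momentum Cache driver is actually implemented:

```python
# Minimal conceptual sketch of a RAM write-back cache in front of slower storage.
# Illustrates the general idea only; not Crucial's actual implementation.
class RamWriteBackCache:
    def __init__(self, backing_store, flush_threshold=64):
        self.backing_store = backing_store  # dict standing in for the SSD
        self.dirty = {}                     # writes buffered in RAM
        self.flush_threshold = flush_threshold

    def write(self, lba, data):
        # Writes complete at RAM speed; the backing store is updated later.
        self.dirty[lba] = data
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def read(self, lba):
        # Recently written data is served from RAM, avoiding a storage round trip.
        return self.dirty.get(lba, self.backing_store.get(lba))

    def flush(self):
        # Trade-off: buffered writes are lost if power fails before this runs.
        self.backing_store.update(self.dirty)
        self.dirty.clear()

ssd = {}
cache = RamWriteBackCache(ssd)
cache.write(0, b"hello")
print(cache.read(0))  # b'hello', served from RAM before any flush
```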

WD-Black-Intel-600P-Crucial-Storage-Executive-Momentum-Cache.png

"Crucial’s Momentum Cache feature, part of Crucial Storage Executive, is unlocked for all NVMe SSDs, or at least the ones we have tested in our Z170 test system; the key here, of course, is that a compatible Crucial SSD must initially be on the system to enable this feature at all."

Here are some more Storage reviews from around the web:

Storage

Intel Announces New CPUs Integrating AMD Radeon Graphics

Subject: Processors | November 6, 2017 - 02:00 PM |
Tagged: radeon, Polaris, mobile, kaby lake, interposer, Intel, HBM2, gaming, EMIB, apple, amd, 8th generation core

In what is probably considered one of the worst kept secrets in the industry, Intel has announced a new CPU line for the mobile market that integrates AMD’s Radeon graphics.  For the past year or so rumors of such a partnership were freely flowing, but now we finally get confirmation as to how this will be implemented and marketed.

Intel’s record on designing GPUs has been rather pedestrian.  While they have kept up with the competition, a slew of small issues and incompatibilities have plagued each generation.  Performance is also an issue when trying to compete with AMD’s APUs as well as discrete mobile graphics offerings from both AMD and NVIDIA.  Software and driver support is another area where Intel has been unable to compete, due largely to economics and the competition’s decades of experience in this area.

intel-8th-gen-cpu-discrete-graphics-2.jpg

Many of those significant issues have now been solved in one fell swoop.  Intel has partnered with AMD’s Semi-Custom Group to develop a modern and competent GPU that can be closely connected to the Intel CPU, all the while utilizing HBM2 memory to improve overall performance.  The packaging of this product utilizes Intel’s EMIB (Embedded Multi-die Interconnect Bridge) tech.

EMIB is an interposer-like technology that integrates silicon bridges into the PCB instead of relying upon a large interposer.  This allows a bit more flexibility in layout of the chips as well as lowers the Z height of the package as there is not a large interposer sitting between the chips and the PCB.  Just as interposer technology allows the use of chips from different process technologies to work seamlessly together, EMIB provides that same flexibility.

The GPU looks to be based on the Polaris architecture, which is a slight step back from AMD’s cutting-edge Vega architecture.  Polaris does not implement the Infinity Fabric component that Vega does; it is more conventional in terms of data communication.  It is still a step beyond what AMD has provided for Sony and Microsoft, who each utilize a semi-custom design for the latest console chips.  AMD is also able to integrate the HBM2 controller that is featured in Vega.  Using HBM2 provides a tremendous amount of bandwidth along with power savings as compared to traditional GDDR5 memory modules.  It also saves dramatically on PCB space, allowing for smaller form factors.

intel_tech_manu_embedded_multi_die_interconnect_bridge-100715607-orig.jpg

EMIB provides nearly all of the advantages of the interposer while keeping the optimal z-height of the standard PCB substrate.

Intel did have to do quite a bit of extra work on the power side of the equation.  AMD utilizes their latest Infinity Fabric for fine-grained power control in their upcoming Raven Ridge based Ryzen APUs.  Intel had to modify their current hardware to be able to do much the same work with 3rd party silicon.  This is no easy task, as the CPU needs to monitor and continually adjust for GPU usage in a variety of scenarios.  This type of work takes time and a lot of testing to fine tune, as well as the inevitable hardware revisions to get things to work correctly.  This then needs to be balanced by the GPU driver stack, which also tends to take control of power usage in mobile scenarios.

This combination of EMIB, Intel Kaby Lake CPU, HBM2, and a current AMD GPU make this a very interesting combination for the mobile and small form factor markets.  The EMIB form factor provides very fast interconnect speeds and a smaller footprint due to the integration of HBM2 memory.  The mature AMD Radeon software stack for both Windows and macOS environments provides Intel with another feature in which to sell their parts in areas where previously they were not considered.  The 8th Gen Kaby Lake CPU provides the very latest CPU design on the new 14nm++ process for greater performance and better power efficiency.

This is one of those rare instances where such cooperation between intense rivals actually improves the situation for both.  AMD gets a financial shot in the arm by signing a large and important customer for their Semi-Custom division.  The royalty income from this partnership should be more consistent as compared to the console manufacturers, due to the seasonality of console products.  This will have a very material effect on AMD’s bottom line for years to come.  Intel gets a solid silicon solution with higher graphics performance than they can offer on their own, as well as the aforementioned mature software stack for multiple OSes.  Finally, throw in the HBM2 memory support for better power efficiency and a smaller form factor, and it is a clear win for all parties involved.

intel-8th-gen-cpu-discrete-graphics.jpg

The PCB savings plus faster interconnects will allow these chips to power smaller form factors with better performance and battery life.

One of the unknowns here is what process node the GPU portion will be manufactured on.  We do not know which foundry Intel will use, or if they will stay in-house.  Currently TSMC manufactures the latest console SoCs while GLOBALFOUNDRIES handles the latest GPUs from AMD.  Initially one would expect Intel to build the GPU in house, but the current rumor is that AMD will work to produce the chips with one of their traditional foundry partners.  Once the chip is manufactured, it is then sent to Intel to be integrated into their product.

Apple is one of the obvious candidates for this particular form factor and combination of parts.  Apple has a long history with Intel on the CPU side and AMD on the GPU side.  This product provides all of the solutions Apple needs to manufacture high performance products in smaller form factors.  Gaming laptops also get a boost from such a combination that will offer relatively high performance with minimal power increases as well as the smaller form factor.

core-radeon-leak.png

The potential (leaked) performance of the 8th Gen Intel CPU with Radeon Graphics.

The data above could very well be wrong about the potential performance of this combination.  What we see is pretty compelling though.  The Intel/AMD product performs like a higher end CPU with discrete GPU combo.  It is faster than a NVIDIA GTX 1050 Ti and trails the GTX 1060.  It also is significantly faster than a desktop AMD RX 560 part.  We can also see that it is going to be much faster than the flagship 15 watt TDP AMD Ryzen 7 2700U.  We do not yet know how it compares to the rumored 65 watt TDP Raven Ridge based APUs from AMD that will likely be released next year.  What will be fascinating here is how much power the new Intel combination will draw as compared to the discrete solutions utilizing NVIDIA graphics.

To reiterate, this is Intel as a customer for AMD’s Semi-Custom group rather than a licensing agreement between the two companies.  They are working hand in hand in developing this solution and then both profiting from it.  AMD getting royalties from every Intel package sold that features this technology will have a very positive effect on earnings.  Intel gets a cutting edge and competent graphics solution along with the improved software and driver support such a package includes.

Update: We have been informed that AMD is producing the chips and selling them directly to Intel for integration into these new SKUs. There are no royalties or licensing, but the Semi-Custom division should still receive the revenue for these specialized products made only for Intel.

Source: Intel

Qualcomm expected to keep their extra m and say no to Broadcom

Subject: General Tech | November 6, 2017 - 01:03 PM |
Tagged: broadcom, qualcomm, billions

While the gang does some sleuthing about the current signs of the end of the world (not simply cats and dogs living together, but rumours of AMD and Intel working together), let's look at a different surprise.  It seems that Broadcom has set its sights on Qualcomm, offering $130 billion to buy out the company and its assets.  In part this might be inspired by Qualcomm's pending release of the Centriq family of processors, seeing as Broadcom cancelled their ARM-based server chip development earlier this year.  It seems as though Qualcomm is not looking too hard at this as a way to pay their ever expanding legal bills in their cases against Apple, as according to the story that Slashdot has linked to, Qualcomm considers this an offer it can refuse.

Keep an eye out for an update as Josh and Ryan check on the mixing of Intel's Embedded Multi-die Interconnect Bridges and AMD's Polaris.

index.jpg

"Chipmaker Broadcom officially unveiled a $130bn offer, including net debt, for Qualcomm on Monday, in what could be the largest tech deal in history. Under Broadcom's proposal, Qualcomm shareholders would receive $70 per share -- $60 in cash and $10 in shares of its rival. It would value Qualcomm's equity at roughly $103bn."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot

zSpace and Unity Announce XR Resources for Education

Subject: General Tech | November 5, 2017 - 08:14 PM |
Tagged: Unity, zspace, xr, AR, VR

The Unity Educator Toolkit was created by Unity3D to integrate learning game development into the K-12 public curriculum. Now zSpace, which we’ve mentioned a few times, is joining the initiative with their mixed-reality platform. The company is known for creating displays that, when viewed with their glasses, track where you are and make the object appear to be in front of you. They also have a stylus that lets you interact with the virtual object.

zspace-2017-elephant.jpg

They are focused on the educational side of VR and AR.

It’s not entirely clear what this means, because a lot of the details are behind a sign-up process. That said, if you’re an educator, then check out the package to see if it’s relevant for you. Creating games is an interesting, albeit challenging and somewhat daunting, method of expressing oneself. Giving kids the tools to make little game jam-style expressions, or even using the technology in your actual lessons, will reach a new group of students.

Source: zSpace

Rumor: Google Pixel 2 XL Slow Charging

Subject: Mobile | November 5, 2017 - 07:49 PM |
Tagged: google, Pixel 2 XL

The Pixel 2 XL launch hasn’t been going so well for Google. Early complaints were about the screen: how it had alleged burn-in problems within the first few days, and how it couldn’t support the sRGB color space. Since then, we’ve even been hearing reports that some phones shipped without the OS installed. Whoops!

google-2017-pixel2xl.jpg

Now here’s a specific complaint: people are saying that the phone is charging slowly. This is an easy one to test – run a multimeter in-line with the USB cable and see what happens. Google+ user Nathan K. apparently did, and he found that the Pixel 2 XL maxed out at 10.5W. When the screen is on, this drops to a maximum of 6W, which he claims (and I would have guessed) is likely due to the combined heat of a phone that’s both in-use and charging. Lithium batteries are very sensitive to heat.
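The arithmetic behind such a measurement is simply watts = volts × amps at the connector; a tiny sketch with illustrative voltage/current pairs (my assumptions, not Nathan K.'s actual readings):

```python
# Back-of-the-envelope check of the reported charge rates: power is volts times
# amps at the USB connector. The pairs below are illustrative assumptions only.
def charge_power_w(volts, amps):
    return volts * amps

print(charge_power_w(9.0, 1.17))  # ~10.5W, screen off (assuming a 9V USB-PD contract)
print(charge_power_w(9.0, 0.67))  # ~6.0W, screen on
```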

He also says that this issue isn’t really a problem in and of itself. He just wishes that manufacturers advertised more about how the battery should perform, and maybe even provided switches for users to override the behavior if needed. I could see that being a warranty nightmare, but I’m rarely going to fall on the side against user choice as a general rule, so I think that would be nice.

StarCraft II Going Free-to-play on November 14th

Subject: General Tech | November 5, 2017 - 07:13 PM |
Tagged: blizzard, starcraft 2, pc gaming

Over the last few years, Blizzard has been progressively opening up StarCraft II for non-paying customers. These initiatives included allowing whole parties to share the highest expansion level of any one member, unlocking Terran for unranked games, opening up mods to the Starter edition, and so forth.

Starting on November 14th, after a handful of months of the original StarCraft going free-to-play, Blizzard will allow free access to multiplayer (including the ranked ladder), a handful of co-op commanders, and the Wings of Liberty campaign. If you already own Wings of Liberty, then you will get Heart of the Swarm for free (if you claim it between November 8th and December 8th).

If you already own both, then… well, life as usual for you.

In terms of making money, Blizzard is hoping to sell the remaining two or three campaigns (Heart of the Swarm, Legacy of the Void, and Nova Covert Ops) as well as the other up-sells, like announcers, co-op commanders, and so forth. If you’re in it for the vanilla (or Arcade) multiplayer, though, then you can jump in on November 14th without paying a dime.

Source: Blizzard