
Imagination Technologies Announces PowerVR 2NX NNA

Subject: Mobile | September 25, 2017 - 10:45 PM |
Tagged: Imagination Technologies, deep neural network

Imagination Technologies is known to develop interesting, somewhat offbeat hardware, such as GPUs with built-in ray tracers. In this case, the company is jumping into the neural network market with a PowerVR-branded accelerator. The PowerVR Series2NX Neural Network Accelerator works on massively parallel but low-precision tasks. AnandTech says that the chip can even use different bit depths on different layers of a single network, from 16-bit down to 12-, 10-, 8-, 7-, 6-, 5-, and 4-bit.

imaginationtech-2017-powervr-neural-net.png

Image Credit: Imagination Technologies via Anandtech

Imagination seems to say that this is variable “to maintain accuracy”. I’m guessing it doesn’t give an actual speed-up to tweak your network in that way, but I honestly don’t know.
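To make the per-layer bit-depth idea concrete, here is a toy symmetric uniform quantizer in Python/NumPy. This is a generic sketch of reduced-precision weights, not Imagination's actual (undisclosed) scheme; the function name and layer size are invented for illustration:

```python
import numpy as np

def quantize_layer(weights, bits):
    # Symmetric uniform quantization to a signed `bits`-bit grid,
    # e.g. 8-bit maps weights onto integer levels in [-127, 127] * scale.
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / levels
    return np.round(weights / scale) * scale  # dequantized, for error checking

# Fewer bits means coarser weights and more quantization error, which is
# why a network might keep 16 bits on sensitive layers and drop to 4 elsewhere:
w = np.random.randn(64, 64).astype(np.float32)
for bits in (16, 8, 4):
    err = np.mean(np.abs(w - quantize_layer(w, bits)))
    print(f"{bits}-bit mean abs error: {err:.5f}")
```

The speed question the author raises is exactly the trade-off here: lower bit depth shrinks data and multiplier width, while the error term above is what "maintain accuracy" is guarding against.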

As for Imagination Technologies, they intend to see this in mobile devices for, as they suggest, photography and predictive text. They also list the usual suspects: VR/AR, automotive, surveillance, and so forth. They suggest that the technology will target TensorFlow Lite.

The PowerVR 2NX Neural Network Accelerator is available for licensing.

Intel Core i9-7980XE Pushed to 6.1 GHz On All Cores Using Liquid Nitrogen

Subject: Processors | September 25, 2017 - 09:36 PM |
Tagged: skylake-x, overclocking, Intel Skylake-X, Intel, Cinebench, 7980xe, 3dmark, 14nm

Renowned overclocker der8auer got his hands on the new 18-core Intel Core i9-7980XE and managed to break a few records with more than a bit of LN2 and thermal paste. Following a delid, der8auer slathered the bare die and surrounding PCB with a polymer-based (Kryonaut) TIM and reattached the IHS to prepare for the extreme overclock. He also experimented with milling out the middle of the IHS to strike a balance between direct-die cooling and using the IHS to keep the PCB from bending and to spread out the pressure from the LN2 cooler block, but he ran into inconsistent results between runs and opted not to proceed with that method.

Core i9-7980xe LN2 overclock.png

Using an Asus Rampage VI Apex X299 motherboard at an Asus ROG event in Taiwan, der8auer used liquid nitrogen to push all eighteen cores (plus Hyper-Threading) of the Core i9-7980XE to 6.1 GHz for a CPU-Z validation. Getting to those clockspeeds required cranking the voltage up to 1.55V (1.8V VCCIN), which is a lot for the 14nm Skylake-X processor. Der8auer noted that overclocking was temperature limited beyond this point: at 6.1 GHz he was seeing above-zero core temperatures despite the surface of the LN2 block sitting as low as -100 °C! Perhaps even more incredible is the power draw at these clockspeeds, with the system pulling as much as 1,000 watts (~83 amps) on the +12V rail and the CPU responsible for almost all of that number. That is a lot of power running through the motherboard VRMs and the on-processor FIVR!

For comparison, at 5.5 GHz he measured 70 amps on the +12V rail (840W) with the chip using 1.45V vcore under load.
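The quoted rail currents are just Ohm's-law arithmetic, I = P / V; a quick sketch using the article's numbers:

```python
def rail_current(power_w, volts=12.0):
    """Current drawn on a supply rail from its power draw: I = P / V."""
    return power_w / volts

# der8auer's reported figures on the +12V rail:
print(round(rail_current(1000), 1))  # ~83.3 A at 6.1 GHz
print(round(rail_current(840), 1))   # 70.0 A at 5.5 GHz
```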

7980xe CPU-Z overclock 6GHz.png

For Cinebench R15, the extreme overclocker opted for a tamer 5.7 GHz, where the i9-7980XE achieved a multithreaded score of 5,635 points. He compared that to his AMD Threadripper overclock of 5.4 GHz, which achieved a Cinebench score of 4,514 (granted, the Intel part was using four more threads and clocked higher).

To push things (especially his power supply, heh) further, the overclocker added an LN2-cooled NVIDIA Titan Xp to the mix and managed to overclock the graphics card to 2455 MHz at 1.4V. With the 3840 Pascal cores at 2.455 GHz, he broke three single-card world records, scoring 45,705 in 3DMark 11, 35,782 in 3DMark Fire Strike, and 120,425 in 3DMark Vantage!

Der8auer also made a couple of interesting observations about overclocking at these levels. Cold bugs can prevent the CPU and/or GPU from booting if the cooler plate is too cold; on the other hand, once the chip is running, power consumption can jump so drastically with more voltage and higher clocks that even LN2 can't maintain sub-zero core temperatures. The massive temperature delta can also create condensation issues that need to be dealt with.

He mentions that while liquid metal TIMs are popular choices for 24/7 overclocking, the alloy actually works against extreme overclockers: sub-zero temperatures reduce its effectiveness and thermal conductivity, which is why polymer-based TIMs are used when cooling with liquid nitrogen, liquid helium, or TECs. Also, while most people apply a thin layer of thermal paste to the bare die or IHS, when extreme overclocking he "drowns" the processor die and PCB in TIM to get as much contact as possible with the cooler, since every bit of heat transfer helps, even the small amount he can move through the PCB.

Further, the FIVR has advantages such as per-core voltage fine tuning, but it can also hold back further overclocking: cold bugs will see the processor shut down past -100 to -110 °C, limiting overclocks, whereas an external VRM setup could possibly push the processor further.

For the full scoop, check out his overclocking video. Interesting stuff!


Source: der8auer

ASUS' Tinker can be tailored for Soldiers and Spies

Subject: Systems | September 25, 2017 - 04:15 PM |
Tagged: asus, tinker, SoC, Rockchip, rk3288, Mali-T760, Cortex-A17

ASUS' take on single board computers is the new Tinker Board, powered by a 1.8 GHz Cortex-A17 based Rockchip RK3288 and a 600MHz Mali-T760 GPU which share 2 GB of LPDDR3.  Storage is handled by a microSD slot, or the four USB 2.0 ports and the Tinker offers Gigabit wired connectivity as well as optional WiFi.  You have a choice of operating systems, either Marshmallow flavoured Android or the Debian based Tinker OS, depending on which you prefer. 

The Tech Report tested the Tinker Board and found the hardware to outpace competitors such as the Raspberry Pi; however, the lack of software and documentation hamstrung the board badly enough that they do not recommend it. This may change in time, but currently ASUS needs to do some work before the Tinker Board becomes an actual competitor in this crowded market.

100_0229.jpg

"Asus' Tinker Board single-board computer wants to challenge the Raspberry Pi 3's popularity with a more powerful SoC and better networking, among other improvements. We put it to the test to see whether it's a worthy alternative to the status quo."


Double the price; not so much performance though ... Skylake-X versus ThreadRipper

Subject: Processors | September 25, 2017 - 03:19 PM |
Tagged: skylake-x, Skylake, Intel, Core i9, 7980xe, 7960x

You cannot really talk about the new Skylake-X parts from Intel without bringing up AMD's Threadripper, as that is the i9-7980XE's and i9-7960X's direct competition. From a financial standpoint, AMD is the winner, with a price tag either $700 or $1000 less than Intel's new flagship processors. As Ryan pointed out in his review, for those for whom expense is not a consideration it makes sense to choose Intel's new parts, as they are slightly faster and the Extreme Edition does offer two more cores. For those who look at performance per dollar, the obvious choice is Threadripper; as Ars sums up in their review, AMD offers more PCIe lanes, better heat management, and performance that is extremely close to Intel's best.

DSC02984.jpg

"Ultimately, the i9-7960X raises the same question as the i9-7900X: Are you willing to pay for the best performing silicon on the market? Or is Threadripper, which offers most of the performance at a fraction of the price, good enough?"


Source: Ars Technica

Skimmer Scanner, a start to protecting yourself at the pump

Subject: General Tech | September 25, 2017 - 01:12 PM |
Tagged: skimmer scanner, security, bluetooth

If you haven't seen the lengths to which scammers will go when modifying ATMs to steal your bank info, you should really take a look at these pictures and get in the habit of tugging on an ATM's fascia and keypad before using it. Unfortunately, as Hack a Day posted last week, the bank is not the only place you have to be cautious; paying at the pump can also expose your details. In this case it is not a fake front you need to worry about: instead, a small PIC microcontroller is attached to the serial connection between the card reader and the pump computer, where it can read the unencrypted PIN and card data and store the results in an EEPROM for later collection. The device often has Bluetooth connectivity so that the scammers don't need to drive up to the pump frequently.

There is an app that might help stop this: an app on Google Play will detect Bluetooth devices broadcasting the standard identifiers the skimmers use and alert you. You can then tweet out the location of the compromised pump to alert others, and hopefully let the station owner and authorities know as well. The app could be improved with automatic reporting and other tools, so check it out and see if you can help improve it while keeping your PIN and account safe when fuelling up.
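The core of the detection idea is simple to illustrate: off-the-shelf serial Bluetooth modules of the kind found in these skimmers advertise recognizable default device names. A minimal name-matching sketch (the name list and scan results here are hypothetical examples; the real app does more than name matching, including probing suspect devices):

```python
# Default names advertised by common serial Bluetooth modules; this set
# is illustrative, not the app's actual list.
SUSPECT_NAMES = {"HC-05", "HC-06", "HC-08"}

def flag_suspects(discovered):
    """Return the (name, address) pairs whose advertised name matches a
    known skimmer-module default."""
    return [(name, addr) for name, addr in discovered if name in SUSPECT_NAMES]

# Hypothetical scan results:
scan = [("JBL Flip 4", "00:11:22:33:44:55"),
        ("HC-05", "98:D3:31:FB:12:34")]
print(flag_suspects(scan))  # [('HC-05', '98:D3:31:FB:12:34')]
```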

Skimmers-Main.jpg

"It would be nice to think that this work might draw attention to the shocking lack of security in gas pumps that facilitates the skimmers, disrupt the finances of a few villains, and even result in some of them getting a free ride in a police car. We can hope, anyway."


Source: Hack a Day

Noctua Focused on Ryzen with NH-L9a-AM4 and NH-L12S Low-Profile Coolers

Subject: Cases and Cooling | September 25, 2017 - 10:43 AM |
Tagged: ryzen, noctua, low-profile, htpc, cooler, APU, amd, AM4, air cooling

AMD's popularity with Ryzen CPUs (and upcoming APUs) has made waves across the industry, and Noctua have jumped in with a pair of low-profile offerings that update previous designs for cramped case interiors.

First up is the new version of the NH-L9a:

noctua_nh_l9a_am4_1.jpg

"The new NH-L9a-AM4 is an AM4-specific revision of Noctua’s award-winning NH-L9a low-profile CPU cooler. At a height of only 37mm, the NH-L9a is ideal for extremely slim cases and, due to its small footprint, it provides 100% RAM and PCIe compatibility as well as easy access to near-socket connectors, even on tightly packed mini-ITX motherboards."

Next is the new NH-L12S:

noctua_nh_l12s_1.jpg

"The new S-version of the renowned NH-L12 not only adds AM4 support but also gives more flexibility and improved performance in low-profile mode. Thanks to the new NF-A12x15 PWM slim 120mm fan, the NH-L12S provides even better cooling than the previous model with its 92mm fan. At the same time, the NH-L12S is highly versatile: with the fan installed on top of the fins, the cooler is compatible with RAM modules of up to 45mm in height. With the fan installed underneath the fins, the total height of the cooler is only 70mm, making it suitable for use in many compact cases."

Noctua says that these new coolers are now shipping "and will be available shortly", with an MSRP of $39.90 for the NH-L9a-AM4 and $49 for the NH-L12S.

Source: Noctua
Subject: Processors
Manufacturer: Intel

Specifications and Architecture

It has been an interesting 2017 for Intel. Though still the dominant market share leader in consumer processors of all shapes and sizes, from DIY PCs to notebooks to servers, it has come under pressure from AMD unlike any it has felt in nearly a decade. It started with the release of AMD Ryzen 7 and a family of processors aimed at the mainstream and enthusiast markets. That was followed by the EPYC processor release, moving in on Intel's enterprise turf. And most recently, Ryzen Threadripper took a swing (and hit) at the HEDT (high-end desktop) market that Intel had created and held as its own since the days of the Nehalem-based Core i7-920 CPU.

pic1.jpg

Between the time Threadripper was announced and when it shipped, Intel made an interesting move: it announced its updated family of HEDT processors, dubbed Skylake-X. Only available in a 10-core model at first, the Core i9-7900X was the fastest processor we had tested in our labs at the time, but it was rather quickly overtaken by the Threadripper 1950X with its 16 cores and 32 threads. Intel had already revealed that its HEDT lineup would go up to 18-core options, though availability and exact clock speeds remained hidden until recently.

| | i9-7980XE | i9-7960X | i9-7940X | i9-7920X | i9-7900X | i7-7820X | i7-7800X | TR 1950X | TR 1920X | TR 1900X |
|---|---|---|---|---|---|---|---|---|---|---|
| Architecture | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Zen | Zen | Zen |
| Process Tech | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm | 14nm | 14nm |
| Cores/Threads | 18/36 | 16/32 | 14/28 | 12/24 | 10/20 | 8/16 | 6/12 | 16/32 | 12/24 | 8/16 |
| Base Clock | 2.6 GHz | 2.8 GHz | 3.1 GHz | 2.9 GHz | 3.3 GHz | 3.6 GHz | 3.5 GHz | 3.4 GHz | 3.5 GHz | 3.8 GHz |
| Turbo Boost 2.0 | 4.2 GHz | 4.2 GHz | 4.3 GHz | 4.3 GHz | 4.3 GHz | 4.3 GHz | 4.0 GHz | 4.0 GHz | 4.0 GHz | 4.0 GHz |
| Turbo Boost Max 3.0 | 4.4 GHz | 4.4 GHz | 4.4 GHz | 4.4 GHz | 4.5 GHz | 4.5 GHz | N/A | N/A | N/A | N/A |
| Cache | 24.75MB | 22MB | 19.25MB | 16.5MB | 13.75MB | 11MB | 8.25MB | 40MB | 38MB | ? |
| Memory Support | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel |
| PCIe Lanes | 44 | 44 | 44 | 44 | 44 | 28 | 28 | 64 | 64 | 64 |
| TDP | 165 watts | 165 watts | 165 watts | 140 watts | 140 watts | 140 watts | 140 watts | 180 watts | 180 watts | 180 watts? |
| Socket | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | TR4 | TR4 | TR4 |
| Price | $1999 | $1699 | $1399 | $1199 | $999 | $599 | $389 | $999 | $799 | $549 |

Today we are looking at both the Intel Core i9-7980XE and the Core i9-7960X, 18-core and 16-core processors, respectively. Intel's goal with this release is clear: retake the crown for the highest performing consumer processor on the market. It will do that, but at $700-1000 over the price of the Threadripper 1950X.

Continue reading our review of the Intel Core i9-7980XE and Core i9-7960X!

Intel Announces 8th Gen Core Architecture, Coffee Lake

Subject: Processors, Chipsets | September 24, 2017 - 11:03 PM |
Tagged: Z370, Intel, coffee lake

The official press deck for Coffee Lake-S was leaked to the public, so Intel gave us the go-ahead to discuss the product line-up in detail (minus benchmarks). While the chips are still manufactured on the same 14nm process as Kaby Lake, Skylake, and Broadwell, there is more silicon on them this time around. The line-up is as follows: Core i3 gets four cores without Hyper-Threading and without Turbo Boost, Core i5 gets six cores without Hyper-Threading but with Turbo Boost, and Core i7 gets six cores with both Hyper-Threading and Turbo Boost.

intel-2017-coffeelake-news-04.jpg

While the slide deck claims that the CPU still has 16 PCIe 3.0 lanes, the whole platform supports up to 40. They specifically state "up to" over and over again, so I'm not sure whether that means "for Z370 boards" or if there will be some variation between individual boards. Keep in mind that only 16 of those lanes come from the processor itself; the rest are part of the chipset. This is unchanged from Z270.

intel-2017-coffeelake-news-01.png

Moving on, Intel has been branding this as “Intel’s Best Gaming Desktop Processor” all throughout their presentation. The reasoning is probably two-fold. First, this is the category of processors that high-end, mainstream, but still enthusiast PC gamers target. Second, gaming, especially at super-high frame rates, is an area that AMD has been struggling with on their Ryzen platform.

intel-2017-coffeelake-news-02-dieshot.png

Speaking of performance, the clock rate choices are quite interesting compared to Kaby Lake. In all cases, the base clock dips a little from the previous generation, while the Turbo clock, if one exists, gets a little bump. For instance, going from the Core i7-7700K to the Core i7-8700K, the base clock drops from 4.2 GHz to just 3.7 GHz, but the turbo jumps from 4.5 GHz to 4.7 GHz. You also have a little more TDP to work with (95W vs 91W) on the 8700K. I'm not sure what this increased variance between low and high clock rates will mean, but it's interesting to see Intel making some sort of trade-off on the back end.

(Editor's note: the base clock is only going to be a concern when running all cores for a long period of time. I fully expect performance to be higher for CFL-S parts than KBL-S parts in all workloads.)
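The generational trade-off can be tallied with the numbers quoted above (a quick sketch; figures as reported in the deck, not independently verified):

```python
# Clocks (GHz) and TDP (W) for the two flagships, per the article:
chips = {
    "i7-7700K": {"base": 4.2, "turbo": 4.5, "tdp_w": 91},
    "i7-8700K": {"base": 3.7, "turbo": 4.7, "tdp_w": 95},
}
old, new = chips["i7-7700K"], chips["i7-8700K"]
print(f"base:  {new['base'] - old['base']:+.1f} GHz")    # -0.5 GHz
print(f"turbo: {new['turbo'] - old['turbo']:+.1f} GHz")  # +0.2 GHz
print(f"TDP:   {new['tdp_w'] - old['tdp_w']:+d} W")      # +4 W
```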

intel-2017-coffeelake-news-03.png

The last thing that I’ll mention is that, of the two i3s, the two i5s, and the two i7s, one is locked (and lower TDP) and one is unlocked. In other words, Intel has an unlocked solution in all three classifications, even the i3. Even though it doesn’t have a turbo clock setting, you can still overclock it by hand if you desire.

Prices range from $117 to $359 USD, as seen in the slide, above. They launch on October 5th.

EFF Leaves the W3C over DRM Decision

Subject: General Tech | September 24, 2017 - 02:30 PM |
Tagged: w3c, eff, DRM

On September 18th, the Electronic Frontier Foundation, EFF, announced that they were leaving the World Wide Web Consortium, W3C, due to its stance on DRM, effective immediately. This was published in the form of an open letter from Cory Doctorow, which is available on the EFF’s website.

eff-logo_full.png

There are several facets to the whole DRM issue. In this case, Cory Doctorow seems focused mostly on the security side of things. Creating an architecture that attaches code to manipulate untrusted data is sketchy at a time when browser vendors are limiting that attack surface by killing as many plug-ins as possible, and, in this case, a legal minefield is layered atop it due to copyright concerns. Publishers are worried about end-users moving data in ways they don't intend... even though every single time that content is pirated before its release date is a testament that the problem lies elsewhere.

We can also get into the issue of “more control isn’t the same as more revenue” again, some other time.

As for the consequences of this action? I’m not too sure. I don’t really know how much sway the EFF had internally at the W3C. While they will still do what they do best, fight the legal side of digital freedom, it sounds like they won’t be in a position to officially guide standards anymore. This is a concern, but I’m not in a position to quantify how big.

Source: EFF

NVIDIA Releases GeForce 385.69 Drivers

Subject: Graphics Cards | September 24, 2017 - 12:33 PM |
Tagged: pc gaming, nvidia, graphics drivers

New graphics drivers for GeForce cards were published a few days ago. Unfortunately, I became a bit reliant upon GeForce Experience to notify me, and it didn’t this time, so I am a bit late on the draw. The 385.69 update adds “Game Ready” optimizations for a bunch of new games: Project Cars 2, Call of Duty: WWII open beta, Total War: WARHAMMER II, Forza Motorsport 7, EVE: Valkyrie - Warzone, FIFA 18, Raiders of the Broken Planet, and Star Wars Battlefront 2 open beta.

We’re starting the holiday games rush, folks!

nvidia-geforce.png

There aren't really any major new features in this driver per se. It's a lot of game-specific optimizations and a whole page of bug fixes, ranging from flickering in DOOM to preventing NVENC from freaking out at frame rates greater than 240 FPS.

One open issue is that GeForce TITAN (which I’m assuming refers to the original, Kepler-based one) cannot be installed on a Threadripper-based motherboard in Windows 10. The OS refuses to boot after the initial install. I’m guessing this has been around for a while, but in case you’re planning on upgrading to Threadripper (or buying a second-hand TITAN) it might be good to know.

If you haven’t received notification to update your drivers yet, poke GeForce Experience to make sure that it’s running and checking. Or, of course, you can download them from NVIDIA’s website.

Source: NVIDIA

New NVIDIA SHIELD TV SKU: 16GB with Remote for $179

Subject: General Tech | September 23, 2017 - 10:29 PM |
Tagged: SHIELD TV, pc gaming, nvidia

NVIDIA is adding a third SKU to their SHIELD TV line-up, shaving $20 off the price tag by including just a media remote, rather than the current low-end SKU’s media remote and a gamepad. This makes the line-up: SHIELD (16GB, Remote Only) for $179.00, SHIELD (16GB, Remote + Gamepad) for $199.99, and SHIELD PRO (500GB, Remote + Gamepad) for $299.99.

All SKUs come with MSI levels of uppercase brand names.

nvidia-2017-optional.png

This version is for those who are intending to use the device as a 4K media player. If you are not interested in gaming, then that’s $20 in your pocket instead of a controller that you will never use on your shelf. If, however, you want to game in the future, then the first-party SHIELD CONTROLLER is $59.99 USD, so buying the bundle with the gamepad now will save you about $30 (Update, Sept 24th @ 5:45pm: $40... I mathed wrong.) That leaves a little bit to think about, but the choice can now be made.

The new bundle is now available for pre-order, and it ships on October 18th.

Source: NVIDIA

Imagination Technologies Agrees to Canyon Bridge Offer

Subject: Graphics Cards, Mobile | September 23, 2017 - 09:59 PM |
Tagged: Imagination Technologies

Canyon Bridge, a private investment LLC and a believable codename for an Intel processor architecture, has just reached an agreement with Imagination Technologies to acquire most of the company. The deal is valued at £550 million and does not include MIPS Technologies, Inc., which Imagination Technologies purchased on February 8th, 2013.

According to Anandtech, however, MIPS Technologies, Inc. will be purchased by Tallwood Venture Capital for $65 million USD.

imaginationtech-logo.png

Imagination Technologies is expected to be split in two like this because purchasing CPU companies triggers a national security review in the United States, and Canyon Bridge is backed by the Chinese government. As such, they can grab everything but the CPU division, which lets another party swoop in for a good price on the leftovers.

That said, it is currently unclear what either company, Canyon Bridge Capital Partners or Tallwood Venture Capital, wants to do with Imagination Technologies or MIPS Technologies, Inc., respectively. When Canyon Bridge attempted to purchase Lattice Semiconductor last year, they mentioned that they were interested in their FPGAs, their “video connectivity” products (HDMI, MHL, etc.), and their wireless products (60 GHz, etc.). I would assume that they’re just picking up good technology deals, but it’s also possible that they’re looking into accelerated compute companies in particular.

There are still a few barriers before the sale closes, but it looks like we're not going to end up with Imagination simply merging into an existing player.

Source: Reuters

Apparently Kaby Lake Is Incompatible with Z370 Chipsets

Subject: Processors, Chipsets | September 23, 2017 - 06:52 PM |
Tagged: Z370, z270, kaby lake, Intel, coffee lake

According to the Netherlands arm of Hardware.info, while Kaby Lake-based processors will physically fit into the LGA-1151 socket of Z370 motherboards, they will fail to boot. Since their post, Guru3D asked around to various motherboard manufacturers, and they claim that Intel is only going to support 8th Generation processors with that chipset via, again, allegedly, a firmware lock-out.

intel-2017-chocolatelake-theredlist.jpg

Thankfully, it's not Chocolate Lake.
Image credit: The Red List

If this is true, then it might be possible for Intel to allow board vendors to release a new BIOS that supports these older processors. Guru3D even goes one step further and suggests that, just maybe, motherboard vendors might have been able to support Coffee Lake on Z270 as well, if Intel would let them. I’m... skeptical about that last part in particular, but, regardless, it looks like you won’t have an upgrade path, even though the socket is identical.

It’s also interesting to think about the issue that Hardware.info experienced: the boot failed on the GPU step. The prevailing interpretation is that everything up to that point is close enough that the BIOS didn’t even think to fail.

Looking at the step where booting failed, however, I wonder whether there's something odd about the new graphics setup that made Intel pull support for Z270. Also, Intel usually supports two CPU generations with each chipset, so we had no real reason to believe that Skylake and Kaby Lake would carry over, except for the stalling of process tech that has kept us on 14nm for so long.

Still, if older CPUs are incompatible with Z370 for purely artificial reasons, then that's kind of pathetic. Maybe I'm odd in that I tend to buy a new motherboard with a new CPU anyway, but I can't imagine the number of people who flash BIOSes with their old CPU before upgrading to a new one is all that high, so it seems petty to nickel-and-dime the few who do, especially at a time when AMD can legitimately call Intel out for it.

There has to be a reason, right?

Source: Guru3D

Unreal Engine 4.18 Preview Published to Epic Launcher

Subject: General Tech | September 23, 2017 - 01:39 PM |
Tagged: ue4, epic games, pc gaming

Epic Games has released a preview build of Unreal Engine 4.18. This basically sets a bar for shipped features, giving them a bit of time to crush bugs before they recommend developers use it for active projects. This version has quite a few big changes, especially in terms of audio and video media.

epic-2017-zengardenwebassembly.jpg

WebAssembly is now enabled by default for HTML5.

First, we'll discuss platform support. As you would expect, iOS 11 and Xcode 9 are now supported, and A10 processors can use the same forward renderer that was added to UE4 for desktop VR, as seen in Robo Recall. That's cool and all, but only for Apple. For the rest of us, WebAssembly (WASM) is now enabled by default for HTML5 projects. WASM is a compact bytecode, typically produced through LLVM, that web browsers can execute directly. In other words, you can program in C++ and have web browsers run it without transpiling to some form of JavaScript. (Speaking of which, asm.js support is now removed from UE4.) The current implementation is still single-threaded, but browser vendors are working on adding multi-threading to WASM.

As for the cool features: Epic is putting a lot of effort in their media framework. This allows for a wider variety of audio and video types (sample rates, sample depths, and so forth) as well as, apparently, more control over timing and playback, including through Blueprints visual scripting (although you could have always made your own Blueprint node anyway). If you’re testing out Unreal Engine 4.18, Epic Games asks that you pay extra attention to this category, reporting any bugs that you find.

Epic has also improved their lighting engine, particularly when using the Skylight lighting object. They also say that Volumetric Lightmaps are now enabled by default. This feature allows dynamic objects to move through a voxel-style grid of lighting values that are baked in the engine, which adds indirect lighting to them without a full run-time GI solution.
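The underlying idea, sampling a dynamic object's indirect lighting from a baked 3D grid, amounts to trilinear interpolation over voxel samples. A toy scalar sketch (Epic's actual implementation stores richer per-voxel data and is considerably more involved):

```python
import numpy as np

def sample_lightmap(grid, pos):
    # Trilinearly interpolate the baked lighting value at continuous
    # voxel coordinates `pos` from a 3D grid of samples.
    p0 = np.floor(pos).astype(int)
    f = pos - p0  # fractional position inside the cell
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                val += w * grid[p0[0] + dx, p0[1] + dy, p0[2] + dz]
    return val

# A single bright voxel; halfway toward a dark neighbour gives half the value:
grid = np.zeros((4, 4, 4))
grid[1, 1, 1] = 1.0
print(sample_lightmap(grid, np.array([1.5, 1.0, 1.0])))  # 0.5
```

As the object moves, it re-samples the grid each frame, which is what lets baked lighting follow dynamic objects without run-time GI.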

The last thing I’ll mention (although there’s a bunch of cool things, including updates to their audio engine and the ability to reference Actors in different levels) is their physics improvements. Their Physics Asset Editor has been reskinned, and the physics engine has been modified. For instance, APEX Destruction has been pulled out of the core engine into a plug-in, and the cloth simulation tools, in the skeletal mesh editor, are no longer experimental.

Unreal Engine 4.18 Preview can be downloaded from the Epic Launcher, but existing projects should be actively developed in 4.17 for a little while longer.

Source: Epic Games

Crytek Releases CRYENGINE 5.4

Subject: General Tech | September 23, 2017 - 01:10 PM |
Tagged: pc gaming, crytek

The latest version of CRYENGINE, 5.4, makes several notable improvements. Starting with the most interesting one for our readers: Vulkan has been added at the beta support level. It’s always good to have yet another engine jump in with this graphics API so developers can target it without doing the heavy lifting on their own, and without otherwise limiting their choices.

crytek-2016-logo.png

More interesting, at least from a developer standpoint, is that CRYENGINE is evolving into an Entity Component framework. Amazon is doing the same with their Lumberyard fork, but Crytek has now announced that they are doing something similar on their side, too. The idea is that you place relatively blank objects in your level and build them up by adding components, which attaches the data and logic that this object needs. This system proved to be popular with the success of Unity, and it can also be quite fast, too, depending on how the back-end handles it.
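The entity-component pattern Crytek is moving toward can be sketched in a few lines; the class names below are hypothetical Python stand-ins for illustration, not CRYENGINE's actual C++ API:

```python
class Entity:
    # A "relatively blank" object: just a name plus a bag of components.
    def __init__(self, name):
        self.name = name
        self.components = {}

    def add(self, component):
        # Attach a component; data and behaviour come from what's attached.
        self.components[type(component)] = component
        return self

    def get(self, component_type):
        return self.components.get(component_type)

class Transform:
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

class MeshRenderer:
    def __init__(self, mesh):
        self.mesh = mesh

# Build up an entity by composition rather than inheritance:
crate = Entity("crate").add(Transform(1, 0, 2)).add(MeshRenderer("crate.obj"))
print(crate.get(Transform).x, crate.get(MeshRenderer).mesh)  # 1 crate.obj
```

The speed the article mentions comes from how the back-end stores and iterates these components, e.g. processing all MeshRenderers contiguously during the render pass.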

I also want to highlight their integration of Allegorithmic Substance. With game engines switching to PBR-based rendering models, tools can make it easier to texture 3D objects by stenciling on materials from a library. That way, you don't need to think about how gold will behave, just that gold should be here and rusty iron should be over there. All of the major engines are doing it, and Crytek themselves have been using Substance, but now there's an actual, supported workflow.

CRYENGINE is essentially free, including royalty-free, to use. Crytek's business model currently involves subscriptions for webinars and priority support.

Source: Crytek

Amazon Web Services Discuss Lumberyard Roadmap

Subject: General Tech | September 23, 2017 - 12:41 PM |
Tagged: pc gaming, amazon

Lumberyard has been out for a little over a year and a half, and it has seen steady development since then. Just recently, Amazon published a blog post highlighting where they want the game engine to go. Pretty much none of this information is new if you've been following them, but it's still interesting nonetheless.

amazon-2017-lumberyard-startergame.jpg

From a high level, Amazon has been progressing their fork of CryEngine into more of a component-entity system. The concept is similar to Unity, in that you place objects in the level, then add components to them to give them the data and logic that you require. Currently, these components are mostly done in Lua and C++, but Amazon is working on a visual scripting system, like Blueprints from Unreal Engine 4, called Script Canvas. They technically inherited Flow Graph from Crytek, which I think is still technically in there, but they’ve been telling people to stop using it for a while now. I mean, this blog post explicitly states that they don’t intend to support migrating from Flow Graph to Script Canvas, so it’s a “don’t use it unless you need to ship real soon” sort of thing.

One of Lumberyard’s draws, however, is their license: free, but you can’t use this technology on any cloud hosting provider except AWS. So if you make an offline title, or you use your own servers, then you don’t need to pay Amazon a dime. That said, if you do something like leaderboards, persistent logins, or use cloud-hosted multiplayer, then you will need to do it through AWS, which, honestly, you were probably going to do anyway.

The current version is Lumberyard Beta 1.10. No release date has been set for 1.11, although they usually don’t say a word until it’s published.

Source: Amazon

Unity 2017.2.0f1 Released

Subject: General Tech | September 23, 2017 - 12:22 PM |
Tagged: pc gaming, Unity

While it’s not technically released yet, Unity has flipped the version designation of Unity 2017.2 to Unity 2017.2.0f1. The “f” stands for final, so we will probably see a blog post on it soon. This version has a handful of back-end changes, such as improved main-thread performance when issuing commands to graphics APIs, but the visible changes are mostly in two areas: XR (VR + AR) and baked lighting.

unity-logo-rgb.png

From the XR standpoint, a few additions stand out. First, this version now supports Google Tango and Windows Mixed Reality, the latter of which is tied to the Windows 10 Fall Creators Update, so it makes sense for Unity to have support in place before that update ships on October 17th. In terms of features, the editor now supports emulating a Vive headset, so you can test some VR elements without having a headset attached. I expect this will mostly be useful for those who want to do a bit of development in places where they don’t have access to their headset, although that’s blind speculation on my part.

The other area that got a boost is baked global illumination. Unity started introducing their new Progressive Lightmapping feature in Unity 5.6, and it bakes lighting into the scenes in the background as you work. This update allows you to turn shadows on and off on a per-object basis, and it supports double-sided materials. You cannot have independent lighting calculations for the front and back of a triangle... if you want that, then you will need to give some volume to your models. This is mostly for situations like the edge of a level, so you don’t need to create a second wall facing away from the playable area to block light coming in from outside the playable area.

I’m not sure when the official release is, but it looks like the final, supported build is out now.

Source: Unity

Google Introduces Tesla P100 to Cloud Platform

Subject: Graphics Cards | September 23, 2017 - 12:16 AM |
Tagged: google, nvidia, p100, GP100

NVIDIA seems to have scored a fairly large customer lately, as Google has just added Tesla P100 GPUs to their cloud infrastructure. Effective immediately, you can attach up to four of these GPUs to your rented servers on an hourly or monthly basis. According to their pricing calculator, each GPU adds $2.30 per hour to your server’s fee in Oregon and South Carolina, which isn’t a lot if you only use them for short periods of time.
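For a sense of scale, a back-of-the-envelope estimate using the $2.30/hour per-GPU rate quoted above (Oregon and South Carolina) looks like this; note it deliberately excludes the base instance cost and any sustained-use discount, so real bills would differ.

```python
GPU_HOURLY_RATE = 2.30  # USD per attached P100, per hour (Oregon / S. Carolina)

def gpu_cost(num_gpus, hours):
    """Estimate the GPU portion of a Google Cloud bill."""
    if not 1 <= num_gpus <= 4:
        raise ValueError("Google allows attaching 1 to 4 P100s per instance")
    return num_gpus * GPU_HOURLY_RATE * hours

# A short run: 4 GPUs for 6 hours.
print(f"${gpu_cost(4, 6):.2f}")    # $55.20
# A full month (~730 hours) on a single GPU adds up quickly.
print(f"${gpu_cost(1, 730):.2f}")  # $1679.00
```

This is why the hourly model is attractive for bursty workloads, and why the sustained-use discounts mentioned below matter for anyone running around the clock.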

google-2017-cloudplatformlogo.png

If you need to use them long-term, though, Google has also announced “sustained use discounts” in the same blog post.

While NVIDIA has technically launched a successor to the P100, the Volta-based V100, the Pascal-based part is still quite interesting. The main focus of the GPU design, GP100, was bringing FP64 performance up to its theoretical maximum of 1/2 the FP32 rate. It also has very high memory bandwidth, thanks to its HBM2 stacks, which is often a huge bottleneck for GPU-based applications.
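The 1/2-rate FP64 figure is easy to sanity-check from published P100 (SXM2) specifications, which I believe are 3584 CUDA cores at a roughly 1480 MHz boost clock. Peak throughput is cores × clock × 2, since each core can retire one fused multiply-add (counted as two floating-point operations) per cycle:

```python
cuda_cores = 3584
boost_clock_hz = 1480e6  # ~1480 MHz boost clock

# Each core retires one FMA (2 FLOPs) per cycle at FP32.
fp32_tflops = cuda_cores * boost_clock_hz * 2 / 1e12
fp64_tflops = fp32_tflops / 2  # GP100's FP64 units run at half the FP32 rate

print(f"FP32 peak: ~{fp32_tflops:.1f} TFLOPS")  # ~10.6 TFLOPS
print(f"FP64 peak: ~{fp64_tflops:.1f} TFLOPS")  # ~5.3 TFLOPS
```

On most consumer GPUs the FP64 ratio is far worse (1/32 on many Pascal parts), which is exactly why GP100 appeals to the HPC crowd Google is courting here.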

For NVIDIA, selling high-end GPUs is obviously good. The enterprise market is lucrative, and it validates their push into the really large die sizes. For Google, it gives a huge reason for interested parties to consider them over just defaulting to Amazon. AWS has GPU instances, but they’re currently limited to Kepler and Maxwell (and they offer FPGA-based acceleration, too). They can always catch up, but they haven’t yet, and that's good for Google.

Source: Google

Asus XG27VQ; 27" of curved Freesync

Subject: Displays | September 22, 2017 - 05:25 PM |
Tagged: 27, freesync, Asus ROG Strix XG27VQ, asus, XG27VQ, 1080p, va lcd

To start with the particular specification that will upset some people, the ASUS XG27VQ is a 1080p monitor; so if life starts at 1440p then feel free to move on.  For those still reading, this FreeSync monitor supports refresh rates from 48 to 144Hz and covers 95% of the sRGB gamut.  Techspot were impressed with the quality of the display, but the RGB lighting on the monitor left them with some questions; the ROG logo that is projected from the bottom of the monitor only comes in red, while the glowing circle on the back of the display supports a full gamut of colours which no one will ever see.  Pop over for the full review.

S-2.jpg

"Let's cut right to the chase. The Asus ROG Strix XG27VQ is a $350 gaming monitor, 27 inches in size, with a resolution of 1920 x 1080 and a refresh rate of 144 Hz. We're looking at a VA LCD panel here with FreeSync support, sporting an 1800R curvature."

Here are some more Display articles from around the web:

Displays

Source: Techspot

AMD enables RX Vega mGPU support

Subject: Graphics Cards | September 22, 2017 - 04:36 PM |
Tagged: Radeon Software 17.9.2, crossfire, Vega

The newest Radeon Software ReLive 17.9.2 is especially worth grabbing if you have, or plan to have, more than one Vega-based card in your system, as it marks the return of Crossfire support.  You can pair up Vega64 or Vega56 cards, but do make sure they are a matched set.  We haven't had time to test the performance results yet, but you can be sure we will be working on that in the near future.  Below are the gains AMD suggests you can expect in several different games, as well as a look at the other notes associated with this new driver.

image001.jpg

Radeon™ Software Crimson ReLive Edition is AMD's advanced graphics software for enabling high-performance gaming and engaging VR experiences. Create, capture, and share your remarkable moments. Effortlessly boost performance and efficiency. Experience Radeon Software with industry-leading user satisfaction, rigorously-tested stability, comprehensive certification, and more.

Radeon Software Crimson ReLive Edition 17.9.2 Highlights
Support For

  • Radeon RX Vega Series Up to 2x Multi GPU support
  • Project CARS 2™ Multi GPU profile support added

Fixed Issues

  • Hearts of Iron IV™ may experience a system hang when the campaign scenario is launched.
  • Radeon Software may display an erroneous "1603 Error" after installing Radeon Software. This error will not affect your Radeon Software installation.

Source: AMD