Subject: Systems | August 17, 2016 - 04:37 PM | Ryan Shrout
Tagged: UHD, SFF, IDF 2016, idf, gigabyte, gaming, brix
While wandering around the exhibit area at this year’s Intel Developer Forum, I ran into our friends at Gigabyte, who were showing off a brand new BRIX small form factor PC. The BRIX Gaming UHD takes the now-standard NUC/BRIX block shape and literally raises it up, extending the design vertically to make room for higher performance components and the added cooling capacity they require.
The design of the BRIX Gaming UHD combines a brushed aluminum housing with a rubber base and bordering plastic sections to create a particularly stunning design that is both simple and interesting. Up top is a fan that pulls air through the entire chassis, running over the heatsink for the CPU and GPU. This is similar in function to the Mac Pro, though this is a much more compact device with a very different price point and performance target.
Around the back you’ll find all the connections that the BRIX Gaming UHD supplies: three (!!) mini DisplayPort connections, a full size HDMI output, four USB 3.0 ports, a USB 3.1 connection, two wireless antenna ports, Gigabit Ethernet, and audio input and output. That is a HUGE amount of connectivity and more than many consumers' full-size desktops offer.
The internals of the system are impressive and required some very custom design for cooling and layout.
The discrete NVIDIA graphics chip (in this case the GTX 950) occupies one chamber, while the Core i7-6500HQ Skylake processor sits in the other along with the memory slot and wireless card.
Gigabyte measures the size of the BRIX Gaming UHD at 2.6 liters. Because of that compact space there is no room for hard drives: you get access to two M.2 2280 slots for storage instead. There are two SO-DIMM slots for DDR4 memory up to 2133 MHz, integrated 802.11ac support and support for quad displays.
Availability and pricing are still up in the air, though early reports are that the starting cost will be $1300. Gigabyte has since updated me: the BRIX Gaming UHD will be available in October, and a final MSRP has not been set. It would not surprise me if this model never actually saw the light of day and Gigabyte instead waited for NVIDIA’s next low-power Pascal-based GPU, likely dubbed the GTX 1050. We’ll keep an eye on the BRIX Gaming UHD from Gigabyte to see what else transpires, but it seems the trend of small form factor PCs that sacrifice less in terms of true gaming potential continues.
Subject: Systems | August 17, 2016 - 04:25 PM | Sebastian Peak
Tagged: PC, Omen 900, Omen, hp, gaming, desktop, cube, computer
HP has introduced a new pre-built gaming desktop, and while the Omen series has existed for some time, the new model offers a very different chassis design.
This Omen isn't just cube-like, it's actually a cube (Image credit: HP)
Inside, the specifications look like the typical pre-built gaming rig, with processors up to an Intel Core i7 6700K and graphics options including AMD's Radeon RX 480 and the NVIDIA GeForce GTX 1080. Configurations on HP's online store start at $1799 for a version with a GTX 960, a 1TB spinning hard drive, and a single 8GB DIMM. (Curiously, though reported as the "Omen X", the current listing is for an "Omen 900".)
A look inside an AMD Crossfire configuration (Image credit: HP via The Verge)
HP is certainly no stranger to unusual chassis designs, as those who remember the Blackbird 002 (which Ryan stood on - and reviewed - here) and subsequent Firebird 803 systems will know. The Verge is reporting that HP will offer the chassis as a standalone product for $599, itself an unusual move for the company.
(Image credit: HP)
The new Omen desktop goes on sale officially starting tomorrow.
Subject: Memory | August 17, 2016 - 04:15 PM | Jeremy Hellstrom
Tagged: Corsair Dominator Platinum, corsair, ddr4-3200
It will certainly cost you quite a bit to pick up, but if you need a huge pool of memory the 64GB Corsair Dominator Platinum DDR4-3200 kit is an option worth considering. The default timings are 16-18-18-36, and the heat spreader and DHX cooling fins keep the DIMMs from heating up, even when Overclockers Club upped the voltage to 1.45V. Part of the price premium is the testing done before these DIMMs left the factory, as well as the custom PCB and hand-picked ICs, which should translate to a minimum of issues running at full speed or even when overclocked. Pop by to see how this kit performed in OC's benchmarks.
"If I break it down, you get a set of modules that have been through an extensive binning process that hand selects the memory ICs being used on these modules. There is a custom designed, cooling optimized PCB that those memory IC's are mounted to so that we can enjoy a trouble free user experience. The DHX cooling solution on these modules is easily up to the task of keeping the modules cool with minimal airflow. The heat spreader and DHX cooling fins are designed to use convective cooling in the absence of any airflow over the modules."
Here are some more Memory articles from around the web:
- Crucial Ballistix Elite DDR4-2666 @ Benchmark Reviews
- G.Skill TridentZ 3866 MHz 2x 4GB DDR4 @ techPowerUp
- 32GB Crucial Ballistix Sport LT DDR4 @ eTeknix
Subject: General Tech | August 17, 2016 - 02:02 PM | Jeremy Hellstrom
Tagged: Enderal, SureAI, mod, skyrim
Enderal: The Shards of Order is a total conversion of the Steam version of Skyrim into a completely new game set in a brand new world. The mod is 8 GB in size and requires a separate launcher, both available at Enderal.com, and you can expect between 30 and 100 hours of playtime. You may remember this team from Nehrim, their previous total conversion mod for Oblivion. Rock, Paper, SHOTGUN have covered this mod previously, however until now it was only available in German. The full English version, including voice acting, is now complete and ready for you to dive into. You might want to unmod your Skyrim before installing; the mod creates a copy of the Skyrim installation, so you can restore your Thomas the Tank Engine mod once you are set up.
"A final version of Enderal: The Shards of Order has been completed and can be downloaded for free now. While ‘Enderal’ sounds like it could be something made by a United States pharmaceutical company, it is actually a massive total conversion mod for Skyrim, not just adding new weapons or turning it into a survival game, but creating a whole new RPG using the raw materials of its parent."
Here is some more Tech News from around the web:
- Final Fantasy 15 hands-on: Brave new direction or just pandering to fans? @ Ars Technica
- MOO! Master Of Orion Reboot Leaves Early Access @ Rock, Paper, SHOTGUN
- The Legend Of THQnix: Dead Publisher Is Back, Kinda @ Rock, Paper, SHOTGUN
- AMD & NVIDIA GPU VR Performance: Valve's Robot Repair @ [H]ard|OCP
- Sunless Sea’s Former Lead Now Writing For Stellaris @ Rock, Paper, SHOTGUN
- Hands On: Dawn Of War III @ Rock, Paper, SHOTGUN
- Battlefield 1 Open Beta starts from 31st August @ HEXUS
- BioShock Original And Remastered Graphics Comparison @ Rock, Paper, SHOTGUN
- Without Kojima, Metal Gear becomes a multiplayer zombie action game @ Ars Technica
- Beyond No Man's Sky: 10 of the best games still to come in 2016 @ The Inquirer
Subject: Graphics Cards, Processors | August 17, 2016 - 01:38 PM | Scott Michaud
Tagged: Xeon Phi, larrabee, Intel
Tom Forsyth, who is currently at Oculus, was once on the core Larrabee team at Intel. Just prior to Intel's IDF conference in San Francisco, which Ryan is at and covering as I type this, Tom wrote a blog post that outlined the project and its design goals, including why it didn't hit market as a graphics device. He even goes into the details of the graphics architecture, which was almost entirely in software apart from texture units and video out. For instance, Larrabee was running FreeBSD with a program, called DirectXGfx, that gave it the DirectX 11 feature set -- and it worked on hundreds of titles, too.
Also, if you found the discussion interesting, then there is plenty of content from back in the day to browse. A good example is an Intel Developer Zone post from Michael Abrash that discussed software rasterization, doing so with several really interesting stories.
Subject: General Tech | August 17, 2016 - 12:41 PM | Jeremy Hellstrom
Tagged: nvidia, Intel, HPC, Xeon Phi, maxwell, pascal, dirty pool
There is a spat going on between Intel and NVIDIA over the slide below, as you can read about over at Ars Technica. It seems that Intel has reached into the industry's bag of dirty tricks and dusted off an old standby: testing new hardware and software against older products from a competitor. In this case the comparison pitted Intel's new Xeon Phi against NVIDIA's Maxwell in high performance computing workloads, using an older version of the Caffe AlexNet benchmark.
NVIDIA points out that not only would its hardware have fared better had an up-to-date version of the benchmarking software been used, but that the comparison should have been against its current architecture, Pascal. This is not quite as bad as putting undocumented flags into compilers to reduce the performance of competitors' chips, or predatory discount programs, but it shows that the computer industry continues to have only a passing acquaintance with fair play and honest competition.
"At this juncture I should point out that juicing benchmarks is, rather sadly, par for the course. Whenever a chip maker provides its own performance figures, they are almost always tailored to the strength of a specific chip—or alternatively, structured in such a way as to exacerbate the weakness of a competitor's product."
Here is some more Tech News from around the web:
- USB Implementers Forum introduces branding for safe USB-C charging @ The Inquirer
- Some Windows 10 Anniversary Update: SSD freeze @ The Register
- Intel Project Alloy: all-in-one VR headset takes aim at Google's Project Daydream @ The Inquirer
- Wanna build your own drone? Intel emits Linux-powered x86 brains for DIY flying gizmos @ The Register
- Intel's Optane XPoint DIMMs pushed back – source @ The Register
Subject: Cases and Cooling | August 17, 2016 - 11:43 AM | Sebastian Peak
Tagged: cooler master, MasterLiquid Maker 92, AIO, liquid cooler, self contained, convertible
Cooler Master has introduced an unusual all-in-one liquid CPU cooler with their new MasterLiquid Maker 92, a design which places all of the components together on top of the CPU block.
We've seen a similar idea from Corsair with the cooler first found in the Bulldog system, and later introduced separately as the H5 SF mini-ITX liquid cooler. Cooler Master's design uses a different arrangement, with push-pull 92mm fans sandwiching a radiator that rotates 90º to permit either a vertical or horizontal setup. The latter position allows for better low-profile chassis compatibility, and also adds airflow to motherboard components.
- Model: MLZ-H92M-A26PK-R1
- CPU: Intel LGA 2011-v3/ 2011/ 1151/ 1150/ 1155/ 1156 socket
- Power Connector: SATA and 4-Pin
- Radiator Material: Aluminum
- Vertical: 99.9 x 81.6 x 167.5mm (3.9 x 3.2 x 6.6”)
- Horizontal: 99.9 x 142 x 118.8 mm (3.9 x 5.6 x 4.7”)
- Fan Dimension: Φ95 x 25.4 mm (3.7 x 1”)
- Airflow: 49.7 CFM (max)
- Air Pressure: 6.4 mmH2O (max)
- Noise Level (fan): 30 dBA (max)
- Noise Level (pump): <12 dBA (max)
- MTTF: 175,000 hours
- L-10 Life: 50,000 hours
- Rated Voltage: 12VDC
- Warranty: 5 Years
Cooler Master is offering pre-orders on a first-come, first-served basis beginning August 30 from this page. Pricing is not listed.
A Watershed Moment in Mobile
This past May I was invited to Austin to be briefed on the latest core innovations from ARM and their partners. We were introduced to new CPU and GPU cores, as well as the surrounding technologies that provide the basis of a modern SoC in the ARM family. We were also treated to more information about the process technologies that ARM would embrace with their Artisan and POP programs. ARM is certainly far more aggressive now in their designs and partnerships than they have been in the past, or at least they are more willing to openly talk about them to the press.
The big process news that ARM was able to share at this time was the design of 10nm parts using an upcoming TSMC process node. This was fairly big news, as TSMC was still introducing parts on their latest 16nm FF+ line. NVIDIA had not even released their first 16FF+ parts to the world in early May. Apple had dual-sourced their 14/16 nm parts from Samsung and TSMC respectively, but these were based on LPE and FF lines (early nodes not yet optimized to LPP/FF+). So the news that TSMC would have a working 10nm process in 2017 was important to many people. 2016 might be a year with some good performance and efficiency jumps, but it seems that 2017 would provide another big leap forward after years of seeming stagnation of pure play foundry technology at 28nm.
Yesterday we received a new announcement from ARM that shows an amazing shift in thought and industry inertia. ARM is partnering with Intel to introduce select products on Intel’s upcoming 10nm foundry process. This news is both surprising and expected. It is surprising in that it happened as quickly as it did. It is expected as Intel is facing a very different world than it had planned for 10 years ago. We could argue that it is much different than they planned for 5 years ago.
Intel is the undisputed leader in process technologies and foundry practices. They are the gold standard of developing new, cutting edge process nodes and implementing them on a vast scale. This has served them well through the years as they could provide product to their customers seemingly on demand. It also allowed them a leg up in technology when their designs may not have fit what the industry wanted or needed (Pentium 4, etc.). It also allowed them to potentially compete in the mobile market with designs that were not entirely suited for ultra-low power. x86 is a modern processor technology with decades of development behind it, but that development focused mainly on performance at higher TDP ranges.
This past year Intel signaled their intent to move out of the sub-5-watt market and cede it to ARM and their partners. Intel’s ultra mobile offerings just did not make an impact in an area where they were expected to. For all of Intel’s advances in process technology, the base ARM architecture is just better suited to these power envelopes. Instead of throwing good money after bad (in the form of development time, wafer starts, rebates), Intel has stepped away from this market.
This leaves Intel with a problem. What to do with extra production capacity? Running a fab is a very expensive endeavor. If these megafabs are not producing chips 24/7, then the company is losing money. This past year Intel has seen their fair share of layoffs and slowing down production/conversion of fabs. The money spent on developing new, cutting edge process technologies cannot stop for the company if they want to keep their dominant position in the CPU industry. Some years back they opened up their process products to select 3rd party companies to help fill in the gaps of production. Right now Intel has far more production line space than they need for the current market demands. Yes, there were delays in their latest Skylake based processors, but those were solved and Intel is full steam ahead. Unfortunately, they do not seem to be keeping their fabs utilized at the level needed or desired. The only real option seems to be opening up some fab space to more potential customers in a market that they are no longer competing directly in.
The Intel Custom Foundry Group is working with ARM to provide access to their 10nm HPM process node. Initial production of these latest generation designs will commence in Q1 2017 with full scale production in Q4 2017. We do not have exact information as to what cores will be used, but we can imagine that they will be Cortex-A73 and A53 parts in big.LITTLE designs. Mali graphics will probably be the first to be offered on this advanced node as well due to the Artisan/POP program. Initial customers have not been disclosed and we likely will not hear about them until early 2017.
This is a big step for Intel. It is also a logical progression for them when we look over the changing market conditions of the past few years. They were unable to adequately compete in the handheld/mobile market with their x86 designs, but they still wanted to profit off of this ever expanding area. The logical way to monetize this market is to make the chips for those that are successfully competing here. This will cut into Intel’s margins, but it should increase their overall revenue base if they are successful here. There is no reason to believe that they won’t be.
The last question we have is if the 10nm HPM node will be identical to what Intel will use for their next generation “Cannonlake” products. My best guess is that the foundry process will be slightly different and will not provide some of the “secret sauce” that Intel will keep for themselves. It will probably be a mobile focused process node that stresses efficiency rather than transistor switching speed. I could be very wrong here, but I don’t believe that Intel will open up their process to everyone that comes to them hat in hand (AMD).
The partnership between ARM and Intel is a very interesting one that will benefit customers around the globe if it is handled correctly from both sides. Intel has a “not invented here” culture that has both benefited it and caused it much grief. Perhaps some flexibility on the foundry side will reap benefits of its own when dealing with very different designs than Intel is used to. This is a titanic move from where Intel probably thought it would be when it first started to pursue the ultra-mobile market, but it is a move that shows the giant can still positively react to industry trends.
Subject: General Tech | August 16, 2016 - 08:10 PM | Scott Michaud
Vulkan-Tutorial.com, while not affiliated with The Khronos Group, is a good read to understand how the Vulkan API is structured. It is set up like the tutorials I followed when learning WebGL, which seems to make it quite approachable. I mean, we are still talking about the Vulkan API, which was in no way designed to be easy or simple, but introduction-level material is still good for developers of all skill levels, unless they're looking for specific advice.
To emphasize what I mean by “approachable”, the tutorial even includes screenshots of Visual Studio 2015 at some points, to help Windows users set up their build environment. Like... step-by-step screenshots. This explanation is also accompanied by Linux instructions, although those use Ubuntu terminal commands.
IDF 2016: ScaleMP Merges Software-Defined Memory With Storage-Class Memory, Makes Optane Work Like RAM
Subject: Storage | August 16, 2016 - 04:05 PM | Allyn Malventano
Tagged: Virtual SMP, SMP, SDM-S, SDM-F, ScaleMP, IDF 2016, idf
ScaleMP has an exciting announcement at IDF today, but before we get into it, I need to do some explaining. Most IT specialists know how to employ virtualization to run multiple virtual environments within the same server, but what happens when you want to go the other way around?
You might not have known it, but virtualization can go both ways. ScaleMP makes such a solution, and it enables some amazing combinations of hardware, all thrown at a single virtualized machine. Imagine what could be done with a system containing 32,768 CPUs and 2048TB (2PB) of RAM. Such a demand is actually more common than you might think:
List of companies / applications of ScaleMP.
ScaleMP's tech can fit into a bunch of different usage scenarios. You can choose to share memory, CPU cores, IO, or all three across multiple physical machines, all combined into a single beast of a virtualized OS, but with the launch of 3D XPoint there's one more thing that might come in handy as a sharable resource, as there is a fairly wide latency gap between NAND and RAM:
Alright, now that we've explained the cool technology and the gap to be filled, onto the news of the day, which is that ScaleMP has announced that their Software Defined Memory tech has been optimized for Intel Optane SSDs. This means that ScaleMP / Optane customers will be able to combine banks of XPoint installed across multiple systems all into a single VM. Another key to this announcement is that due to the way ScaleMP virtualizes the hardware, the currently developing storage-class (NVMe) XPoint/Optane solutions can be mounted as if they were system memory, which should prove to be a nice stopgap until we see second generation 3D XPoint in DIMM form.
More to follow from IDF 2016. ScaleMP's press blast appears after the break.
Subject: Cases and Cooling | August 16, 2016 - 02:23 PM | Jeremy Hellstrom
Tagged: Alphacool, Eisbaer 240, AIO, watercooler
Alphacool's Eisbaer 240 AiO CPU watercooler sports a dual-120mm radiator along with a DC-LT pump. The tubing is also designed by Alphacool and sports an interesting quick-disconnect coupling which allows you to integrate another radiator, a reservoir, or even an entire second loop for your GPU if you so desire. Additionally, the fluid reservoir on the waterblock features a window so you can quickly check your water level and see whether any bubbles are present in your loop. All of this sounds good, but the question of performance remains; stop by Hardware Canucks to see this impressive cooler in action.
"Prebuilt AIO or a potentially complicated custom loop? Alphacool believes we shouldn't have to choose and their Eisbaer 240 combines both under one roof and could blaze a new trail."
Here are some more Cases & Cooling reviews from around the web:
- Cooler Master MasterLiquid Pro 240 Review: A Gestalt Approach to Cooling @ Modders-Inc
- be quiet! Dark Rock TF @ techPowerUp
- Thermaltake Core P3 Tower @ HardwareOverclock
- Deepcool Genome @ techPowerUp
- be quiet! Dark Base Pro 900 @ Benchmark Reviews
Subject: Storage | August 16, 2016 - 02:00 PM | Allyn Malventano
Tagged: XPoint, Testbed, Optane, Intel, IDF 2016, idf
IDF 2016 is up and running, and Intel will no doubt be announcing and presenting on a few items of interest. Of note for this Storage Editor are multiple announcements pertaining to upcoming Intel Optane technology products.
Optane is Intel’s branding of their joint XPoint venture with Micron. Intel launched this branding at last year's IDF, and while the base technology is as high as 1000x faster than NAND flash memory, full solutions wrapped around an NVMe capable controller have been shown to sit at roughly a 10x improvement over NAND. That’s still nothing to sneeze at, and XPoint settles nicely into the performance gap seen between NAND and DRAM.
Since modern M.2 NVMe SSDs are encroaching on the point of diminishing returns for consumer products, Intel’s initial Optane push will be into the enterprise sector. There are plenty of use cases for a persistent storage tier faster than NAND, but most enterprise software is not currently equipped to take full advantage of the gains seen from such a disruptive technology.
XPoint die. 128Gbit of storage at a ~20nm process.
In an effort to accelerate the development and adoption of 3D XPoint optimized software, Intel will be offering enterprise customers access to an Optane Testbed. This will allow for performance testing and tuning of customers’ software and applications ahead of the shipment of Optane hardware.
I did note something interesting in Micron's FMS 2016 presentation. QD=1 random performance appears to start at ~320,000 IOPS, while the Intel demo from a year ago (first photo in this post) showed a prototype running at only 76,600 IOPS. Using that QD=1 example, it appears that as controller technology improves to handle the large performance gains of raw XPoint, so does performance. Given a NAND-based SSD only turns in 10-20k IOPS at that same queue depth, we're seeing something more along the lines of 16-32x performance gains with the Micron prototype. Those with a realistic understanding of how queues work will realize that the type of gains seen at such low queue depths will have a significant impact in real-world performance of these products.
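A quick back-of-the-envelope calculation, using only the QD=1 figures cited above, shows where the 16-32x range comes from and how much the newer controller alone improved things between the two prototype demos:

```python
# Back-of-the-envelope QD=1 comparison using the figures cited in the text.
micron_xpoint_iops = 320_000        # Micron FMS 2016 prototype, QD=1 random
intel_proto_iops = 76_600           # Intel demo from a year earlier
nand_iops_range = (10_000, 20_000)  # typical NAND SSD at the same queue depth

# Gain of the newer prototype over typical NAND at QD=1
gains = [micron_xpoint_iops / n for n in nand_iops_range]
print(f"Micron prototype vs NAND: {gains[1]:.0f}x to {gains[0]:.0f}x")  # 16x to 32x

# Improvement between the two XPoint prototype demos (controller maturation)
print(f"Micron vs earlier Intel prototype: {micron_xpoint_iops / intel_proto_iops:.1f}x")
```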
The speed of 3D XPoint immediately shifts the bottleneck back to the controller, PCIe bus, and OS/software. True 1000x performance gains will not be realized until second generation XPoint DIMMs are directly linked to the CPU.
The raw die 1000x performance gains simply can't be fully realized when there is a storage stack in place (even an NVMe one). That's not to say XPoint will be slow, and based on what I've seen so far, I suspect XPoint haters will still end up burying their heads in the sand once we get a look at the performance results of production parts.
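To see why the storage stack caps the realizable gain, consider a simplified latency model: if the controller/PCIe/software overhead stays fixed while only the media latency shrinks, the end-to-end speedup saturates far below the raw media improvement. The latency numbers below are illustrative assumptions, not measured figures:

```python
# Illustrative only: why a fixed storage stack caps end-to-end gains.
# These latencies are assumptions for the sketch, not measurements.
stack_overhead_us = 8.0   # assumed NVMe driver + controller + PCIe round trip
nand_media_us = 90.0      # assumed NAND media read latency
xpoint_media_us = nand_media_us / 1000  # media itself is ~1000x faster

nand_total = stack_overhead_us + nand_media_us
xpoint_total = stack_overhead_us + xpoint_media_us

speedup = nand_total / xpoint_total
print(f"End-to-end speedup: {speedup:.1f}x")  # ~12x despite 1000x faster media
```

The fixed overhead dominates once the media is fast enough, which is why DIMM-attached XPoint (no NVMe stack at all) is the path to the larger gains.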
Leaked roadmap including upcoming Optane products
Intel is expected to show a demo of their own more recent Optane prototype, and we suspect similar performance gains there, as their controller tech has likely matured. We'll keep an eye out and fill you in once we've seen Intel's newer Optane goodness in action!
Subject: General Tech, Processors, Displays, Shows and Expos | August 16, 2016 - 01:50 PM | Ryan Shrout
Tagged: VR, virtual reality, project alloy, Intel, augmented reality, AR
At the opening keynote of this summer’s Intel Developer Forum, CEO Brian Krzanich announced a new initiative to enable a completely untethered VR platform called Project Alloy. Using Intel processors and sensors, the goal of Project Alloy is to move all of the necessary compute into the headset itself, including enough battery to power the device for a typical session, removing the need for a high-powered PC and delivering a truly cordless experience.
This is indeed the obvious end-game for VR and AR, though Intel isn’t the first to demonstrate a working prototype. AMD showed the Sulon Q, an AMD FX-based system that was a wireless VR headset. It had real specs too, including a 2560x1440 90Hz OLED display, 8GB of DDR3 memory, and an AMD FX-8800P APU with embedded R7 graphics. Intel’s Project Alloy is currently using unknown hardware and won’t have a true prototype release until the second half of 2017.
There is one key advantage that Intel has implemented with Alloy: RealSense cameras. The idea is simple but the implications are powerful. Intel demonstrated using your hands and even other real-world items to interact with the virtual world. RealSense cameras use depth sensing to track hands and fingers very accurately, and with a device integrated into the headset and pointed out and down, Project Alloy prototypes will be able to “see” and track your hands, integrating them into the game and VR world in real-time.
The demo that Intel put on during the keynote definitely showed promise, but the implementation was clunky and less than what I expected from the company. Real hands simply showed up in the game, rather than being represented by rendered hands that track accurately, and it definitely created a disconnect in the experience. Obviously it’s up to the application developer to determine how your hands are actually represented, but it would have been better to showcase that capability in the live demo.
Better than just tracking your hands, Project Alloy was able to track a dollar bill (why not a Benjamin Intel??!?) and use it to interact with a spinning lathe in the VR world. It interacted very accurately and with minimal latency – the potential for this kind of AR integration is expansive.
Those same RealSense cameras and their data are used to map the space around you, preventing you from running into things or people or cats in the room. This enables the first “multi-room” tracking capability, giving VR/AR users a new range of flexibility and usability.
Though I did not get hands-on with the Alloy prototype itself, the unit on stage looked pretty heavy and pretty bulky. Comfort will obviously be important for any kind of head mounted display, and Intel has plenty of time to iterate on the design over the next year to get it right. Both AMD and NVIDIA have been talking up the importance of GPU compute to provide high quality VR experiences, so Intel has an uphill battle to prove that its solution, without the need for external power or additional processing, can truly provide the untethered experience we all desire.
Subject: General Tech | August 16, 2016 - 01:38 PM | Jeremy Hellstrom
Tagged: RRAM, flexible silicon
Flexible computers are quickly becoming more of a reality as researchers continue to find ways to make generally brittle components, such as processors and memory, out of new materials. This latest research has identified new materials for constructing RRAM that allow working memory to remain viable even when flexed. Instead of using traditional CMOS, they have found certain tungsten oxides which display all of the properties required for flexible memory. The use of those oxides is not new, however they came with a significant drawback: fabricating the material required much more heat than CMOS does. Nanotechweb reports on new developments from a team led by James Tour of Rice University which have led to a fabrication process that can take place at room temperature. Check out their article for an overview and a link to their paper.
"Researchers in the US and Korea say they have developed a new way to make a flexible, resistive random access memory (RAM) device in a room-temperature process – something that has proved difficult to do until now."
Here is some more Tech News from around the web:
- Microsoft is going to roll up all your Windows 7 and 8.1 updates Windows 10-style @ The Inquirer
- TSMC to make chips for iPad coming out in 2017, says report @ DigiTimes
- Minecraft meets Oculus Rift VR on Windows 10 @ The Inquirer
- White hat pops Windows User Account Control with log viewer data @ The Register
- Microsoft: Why we had to tie Azure Stack to boxen we picked for you @ The Register
- Windows 10 Anniversary Update – Heaven for Power Users @ Hardware Secrets
- NVIDIA Pascal Mobile GPU Specifications & Details! @ TechARP
Subject: Systems, Mobile | August 16, 2016 - 11:39 AM | Sebastian Peak
Tagged: Skylake, nvidia, notebook, laptop, Intel Core i7, gtx 1070, gtx 1060, gigabyte, gaming
GIGABYTE has refreshed their gaming laptop lineup with NVIDIA's GTX 10 series graphics, announcing updated versions of the P55 & P57 Series, and thin-and-light P35 & P37.
"GIGABYTE offers a variety of options based on preference while providing the latest GeForce® GTX 10 series graphics and the latest 6th Generation Intel Core i7 Processor for the power and performance to meet the growing demands of top tier applications, games, and Virtual Reality. With the superior performance GIGABYTE also includes industry leading features such as M.2 PCIe SSD, DDR4 memory, USB 3.1 with Type-C connection, and HDMI 2.0."
The notebooks retain 6th-gen Intel (Skylake) Core processors, but now feature NVIDIA GeForce GTX 1070 and GTX 1060 GPUs.
Here's a rundown of the new systems from GIGABYTE, beginning with the Performance Series:
The GIGABYTE P57 Gaming Laptop
"The new 17” P57 is pulling no punches when it comes to performance, including the all-new, ultra-powerful NVIDIA® GeForce® GTX 1070 & 1060 Graphics. With a fresh GPU, come fresh ID changes. Along with its subtle style, curved lines and orange accents, comes all-new additional air intake ventilation above the keyboard to improve thermal cooling. The backlit keyboard itself supports Anti-Ghost with 30-Key Rollover. The Full HD 1920x1080 IPS display provides vivid and immersive visuals, while a Swappable Bay is included for user preference of an optical drive, an additional HDD, or weight reduction."
Next we have the thin-and-light ULTRAFORCE Gaming models:
The ULTRAFORCE P35
"The new 17.3” P37 reiterates what ULTRAFORCE is all about. Despite being a 17” model, the P37 weighs under 2.7kg and retains an ultra-thin and light profile being less than 22.5mm thin. Paired with extreme mobility is the NVIDIA GeForce GTX 1070 graphics. The display comes in both options of 4K UHD 3840x2160 and FHD 1920x1080, achieving high-res gaming thanks to the performance boost with the new graphics.
The P37 includes a hot-swappable bay for an additional HDD, ODD, or to reduce weight for improved mobility, forming a quad-storage system with multiple M.2 PCIe SSDs and HDDs. The Macro Keys on the left, together with the included Macro Hub software, allow up to 25 programmable macros for one-click execution in any games and applications.
Powerful yet portable, the thinnest gaming laptop of the series, the 15.6” P35, also has either a UHD 3840x2160 or FHD 1920x1080 display, delivering perfect and vivid colors for an enhanced gameplay experience. Included in the Ultrabook-like chassis is the powerful all-new NVIDIA® GeForce GTX 1070 GPU. The P35 also features the iconic hot-swappable bay for flexible storage and the quad-storage system."
The P37 keyboard features macro keys
We will update with pricing and availability for these new laptops when known.
Subject: Systems | August 16, 2016 - 08:00 AM | Sebastian Peak
Tagged: small form-factor, SFF, nvidia, Lenovo, Killer Networking, Intel, IdeaCentre Y710 Cube, GTX 1080, gaming, gamescom, cube
Lenovo has announced the IdeaCentre Y710 Cube, a small form-factor system designed for gaming regardless of available space, and it can be configured with some very high-end desktop components for serious performance.
"Ideal for gamers who want to stay competitive no matter where they play, the IdeaCentre Y710 Cube comes with a built-in carry handle for easy transport between gaming stations. Housed sleekly within a new, compact cube form factor, it features NVIDIA’s latest GeForce GTX graphics and 6th Gen Intel Core processors to handle today’s most resource-intensive releases."
The Y710 Cube offers NVIDIA GeForce graphics up to the GTX 1080, and up to a 6th-generation Core i7 processor. (Though a specific processor number was not mentioned, this is likely the non-K Core i7-6700 CPU given the 65W cooler specified below).
Lenovo offers a pre-installed Xbox One controller receiver with the Y710 Cube to position the small desktop as a console alternative, and the machines are configured with SSD storage and feature Killer Double Shot Pro networking (where the NIC and wireless card are combined for better performance).
- Processor: Up to 6th Generation Intel Core i7 Processor
- Operating System: Windows 10 Home
- Graphics: Up to NVIDIA GeForce GTX 1080; 8 GB
- Memory: Up to 32 GB DDR4
- Storage: Up to 2 TB HDD + 256 GB SSD
- Cooling: 65 W
- Networking: Killer LAN / WiFi 10/100/1000M
- Video: 1x HDMI, 1x VGA
- Rear Ports: 1x USB 2.0, 1x USB 3.0
- Front Ports: 2x USB 3.0
- Dimensions (L x D x H): 393.3 x 252.3 x 314.5 mm (15.48 x 9.93 x 12.38 inches)
- Weight: Starting at 16.3 lbs (7.4 kg)
- Carry Handle: Yes
- Accessory: Xbox One Wireless Controller/Receiver (optional)
The IdeaCentre Y710 Cube is part of Lenovo's Gamescom 2016 announcement, and will be available for purchase starting in October. Pricing starts at $1,299.99 for a version with the GTX 1070.
Subject: Systems | August 16, 2016 - 08:00 AM | Sebastian Peak
Tagged: PC, nvidia, Lenovo, Intel Core i7, IdeaCentre Y910, GTX 1080, gaming, desktop, all in one, AIO
Lenovo has announced a new all-in-one gaming desktop, and the IdeaCentre Y910 offers up to a 6th-generation Intel Core i7 processor and NVIDIA GeForce GTX 1080 graphics behind its 27-inch QHD display.
But this is no ordinary all-in-one, as Lenovo has designed the Y910 to be "effortlessly upgradeable":
"Designed to game, engineered to evolve, the IdeaCentre™ AIO Y910 is easy to upgrade – no special tools needed. Simply press the Y button to pop out the back panel, for effortless swapping of your GPU, Memory or Storage."
The specs include a 7th-gen Intel Core i7 processor, and if that's not a typo we're talking about Intel Kaby Lake here. Specs have been corrected as 6th-gen Intel Core processors up to an i7. Exactly what SKU might be inside the Y910 isn't clear just yet, and we'll update when we know for sure. It would be limited to 65 W based on the specified cooling, and notice that the CPU isn't on the list of user-upgradable parts (though it could still be possible).
Here's a rundown of specs from Lenovo:
- Processor: Up to a 6th-generation Intel Core i7 Processor
- Graphics: Up to NVIDIA GeForce GTX 1080 8 GB
- Memory: Up to 32 GB DDR4
- Storage: Up to 2 TB HDD + 256 GB SSD
- Display: 27-inch QHD (2560x1440) near-edgeless
- Audio: Integrated 7.1 Channel Dolby Audio, 5W Harman Kardon speakers
- Webcam: 720p, Single Array Microphone
- Networking: Killer DoubleShot WiFi / LAN
- Rear Ports:
- 2x USB 2.0
- HDMI-in / HDMI-out
- Side Ports:
- 3x USB 3.0
- 6-in-1 Card Reader (SD, SDHC, SDXC, MMC, MS, MS-Pro), Headphone, Microphone
- Cooling: 65 W
- Dimensions (W x L x H): 237.6 x 615.8 x 490.25 mm (9.35 x 24.24 x 19.3 inches)
- Weight: Starting at 27 lbs (12.24 kg)
Update: The IdeaCentre Y910 starts at $1,799.99 for a version with the GTX 1070, and will be available in October.
Subject: General Tech | August 16, 2016 - 03:00 AM | Ryan Shrout
Tagged: pro, mouse, logitech g, logitech, gaming
Readers of PC Perspective have noticed that in the last couple of years a very familiar name has been asserting itself again in the world of gaming peripherals. Logitech, once the leader and creator of the gaming-specific market with devices like the G15 keyboard, found itself in a rut and was being closed in on by competitors such as Razer, Corsair and SteelSeries. The Logitech G brand was born and a renewed focus on this growing and enthusiastic market took place. We have reviewed several of the company’s new products including the G933/633 gaming headsets, G402 mouse that included an accelerometer and the G29 racing wheel.
Today Logitech is announcing the Logitech G Pro Gaming Mouse. As the name would imply, this mouse is targeted at gamers who consider themselves professionals, or aspire to be. Even so, I imagine that many “normie” PC gamers will find the design, features and pricing attractive enough to put one next to the keyboard on their desk. This is a wired-only mouse.
The design of the Pro Gaming Mouse is very similar to that of the Logitech G100s, a long-running and very popular mouse with the professional community. It falls a bit on the small side, but Logitech claims that the “small and nimble profile allows gamers of many different game types to play as precisely as possible.” It’s incredibly light as well, weighing in at just 83 g!
This mouse has 6 programmable buttons, far fewer than some of the more extreme “gaming” mice on the market, all of which can be controlled through the Logitech Gaming Software platform. The on-board memory on the Pro allows gamers to configure the mouse on their own system and take those settings with them to competitions or friends’ PCs without the need to re-install software.
RGB lights are of course included with the Pro mouse and I like the idea of the wrap around the sides and back of the mouse to add some flair to the design.
Logitech is using the PMW3366 sensor in the Pro Gaming Mouse, the same used in the G502, G900 and others. Though the importance of mouse sensors in gaming is often overlooked, the PMW3366 optical sensor is known to deliver accurate tracking from 200-12,000 DPI with no acceleration or smoothing integrated that might hinder the input from the gamer.
The buttons on the Logitech G Pro use a torsion spring system rated at 20 million clicks (!!) which works out to 25 kilometers of button travel for the life of the mouse. The spring system used is designed to minimize effort and distance required for button actuation.
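The 25 km figure follows directly from the click rating; a quick sanity check shows the numbers line up (the ~1.25 mm of travel per click is inferred here, not a published Logitech spec):

```python
# Back-of-envelope check: 20 million clicks vs. the quoted 25 km of
# lifetime button travel. Per-click travel distance is an assumption.
clicks = 20_000_000
travel_per_click_mm = 1.25          # assumed press + release distance
total_km = clicks * travel_per_click_mm / 1_000_000  # mm -> km
print(total_km)                      # 25.0
```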
All aspects of the mouse were built with gamers in mind and with Logitech’s in-house professional gamers at the design table, from the plastic feel to the size and weight. The scroll wheel is optimized for gaming rather than productivity, while the braided cable prevents snags. And the best part? The Logitech G Pro Gaming Mouse is set to have an MSRP of just $69.
The full press release is after the break and we are due to have a Logitech G Pro Gaming Mouse in our hands later today. We will follow up with thoughts and impressions soon!
Subject: Systems, Mobile | August 16, 2016 - 12:00 AM | Sebastian Peak
Tagged: pascal, nvidia, notebook, msi, GTX 1080, gtx 1070, gtx 1060, gaming laptop, gaming
MSI has updated their gaming notebook lineup with the new NVIDIA Pascal mobile GPUs, with the GTX 1080, GTX 1070, and GTX 1060 now available across the board. MSI says the new GPUs will provide up to 40% better performance than the company’s previous GT, GS, and GE models.
“MSI’s GT83/73VR Titan series now showcases an even more commanding design with sports car inspired exhausts and MSI’s Cooler Boost Titan, featuring multiple exhausts and dual whirlwind blade fans to guarantee the best performance even under the most stress. Available in 3 different sizes and 17 unique configurations, including with SLI graphics, 4K panels and Tobii’s eye-tracking technology, MSI’s GT series is the optimum laptop for serious gamers.”
Positioned at the top of the heap is the mighty Titan series, which naturally offers the highest possible specs for those who can afford the price tag.
Notice anything about the top-end GT83 model in the chart above? The GT83VR Titan SLI indeed contains not one, but two NVIDIA GTX 1080 graphics chips, making this $5099 gaming machine a monster of a system - though its 1080p screen real estate means a connected VR headset will be more likely to use all of that available GPU power.
Moving down to the GT72/GT62 series, we see a move to the GTX 1070 GPU across the board:
Next up is the GS73, which offers (in addition to Pascal graphics) MSI's "Cooler Boost Trinity", which is the company's advanced cooling system for thin notebook designs.
“MSI’s redesigned GS73/63 VR Stealth Pro series now comes with MSI’s Cooler Boost Trinity, a temperature control system featuring three ultra-thin whirlwind blade fans, and a 5-pipe thermal design optimized for ultra-slim gaming notebooks. Available in 17-inch, 15-inch, and 14-inch options, MSI’s GS series gives power mobile gaming a new meaning with the performance of larger systems while measuring less than 1-inch thick.”
The more modest GTX 1060 powers the <1 inch thick notebooks in the series, and both the GS73 and GS63 VR Stealth Pro are equipped with 4K resolution IPS screens (with the GS43VR Phantom Pro at 1080p).
Next we have the VR Apache series, with another approach to cooling called "Cooler Boost 4":
“MSI’s GE72/62 VR Apache series now features MSI’s Cooler Boost 4 technology, an enhanced cooling system with multiple exhausts to keep temperatures low even during the most heated battles. Starting at $1,649, the VR-ready GE series comes in two different sizes and is the ideal unit for gaming enthusiasts looking for a powerful and reliable unit.”
These lower-cost gaming machines are still equipped with Intel Core i7 processors, and offer GTX 1060 graphics for both models.
As a very interesting addition to the news of these new laptops, MSI has also announced that select machines equipped with NVIDIA GTX 10 Series graphics will feature 120Hz IPS panels with a 5ms response time.
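For context, a 120 Hz panel refreshes roughly every 8.3 ms, so the quoted 5 ms response time means a pixel transition completes within a single frame; a quick calculation:

```python
# Frame interval of a 120 Hz panel vs. the quoted 5 ms response time.
refresh_hz = 120
frame_interval_ms = 1000 / refresh_hz  # time between refreshes in ms
print(round(frame_interval_ms, 2))     # 8.33
print(5 < frame_interval_ms)           # True: transition fits in one frame
```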
We should have more information on availability soon.
Take your Pascal on the go
Easily the strongest growth segment in PC hardware today is in the adoption of gaming notebooks. Ask companies like MSI and ASUS, even Gigabyte, as they now make more models and sell more units of notebooks with a dedicated GPU than ever before. Both AMD and NVIDIA agree on this point and it’s something that AMD was adamant in discussing during the launch of the Polaris architecture.
Both AMD and NVIDIA predict massive annual growth in this market – somewhere on the order of 25-30%. For an overall culture that continues to believe the PC is dying, seeing projected growth this strong in any segment is not only amazing, but welcome to those of us that depend on it. AMD and NVIDIA have different goals here: GeForce products already have 90-95% market share in discrete gaming notebooks. In order for NVIDIA to see growth in sales, the total market needs to grow. For AMD, simply taking back a portion of those users and design wins would help its bottom line.
But despite AMD’s early talk about getting Polaris 10 and 11 in mobile platforms, it’s NVIDIA again striking first. Gaming notebooks with Pascal GPUs in them will be available today, from nearly every system vendor you would consider buying from: ASUS, MSI, Gigabyte, Alienware, Razer, etc. NVIDIA claims to have quicker adoption of this product family in notebooks than in any previous generation. That’s great news for NVIDIA, but might leave AMD looking in from the outside yet again.
Technologically speaking though, this makes sense. Despite the improvement that Polaris made on the GCN architecture, Pascal is still more powerful and more power efficient than anything AMD has been able to produce. Looking solely at performance per watt, which is really the defining trait of mobile designs, Pascal is as dominant over Polaris as Maxwell was over Fiji. And this time around NVIDIA isn’t messing with cut-back parts that have brand changes – GeForce is diving directly into gaming notebooks in a way we have only seen with one release.
The ASUS G752VS OC Edition with GTX 1070
Do you remember our initial look at the mobile variant of the GeForce GTX 980? Not the GTX 980M mind you, the full GM204 operating in notebooks. That was basically a dry run for what we see today: NVIDIA will be releasing the GeForce GTX 1080, GTX 1070 and GTX 1060 to notebooks.