
Lian Li Releases 550W and 750W SFX-L Power Supplies

Subject: Cases and Cooling | August 19, 2016 - 04:15 PM |
Tagged: Lian Li, SFX, SFX-L, power supply, PSU, small form-factor, 550W, 750w, PE-550, PE-750

First announced back in April, Lian Li's PE-550 and PE-750 SFX-L power supplies for small form-factor systems have now been released. The pair offer fully-modular designs with flat, ribbon-style cables, and carry 80 Plus Gold and Platinum certifications respectively.

pe750.jpg

The PE-750 power supply

"The PE-550 is 80Plus Gold-rated for a maximum 89.5% efficiency; the PE-750 is 80Plus Platinum-rated for a maximum 92% efficiency. Both use a near-silent 120mm smart fan and minimize noise by operating fanlessly when output power is below 30%. Both PSUs use a single 12V rail design for the best possible stability under heavy system load, matched with myriad protection features to ensure reliable operation."

pe550.jpg

The PE-550 power supply

For more information and full specs, the product page for the 550W PE-550 is here, and the 750W PE-750 page is here. The PE-550 and PE-750 retail for $115 and $169 respectively, and both are available now.

Source: Lian Li

ASUS Announces the ROG GX800 4K G-SYNC Gaming Laptop with GTX 1080 SLI

Subject: Mobile | August 19, 2016 - 03:46 PM |
Tagged: UHD, ROG, Republic of Gamers, notebook, laptop, GX800, GTX 1080, gaming, g-sync, asus, 4k

ASUS has announced perhaps the most impressively-equipped gaming laptop to date. Not only does the new ROG GX800 offer dual NVIDIA GeForce GTX 1080 graphics in SLI, but these cards are powering an 18-inch 4K display panel with NVIDIA G-SYNC.

gx800_1.jpg

Not enough? The system also offers liquid cooling (via the liquid-cooling dock) which allows for overclocking of the CPU, graphics, and memory.

gx800_2.jpg

"ROG GX800 is the world’s first 18-inch real 4K UHD gaming laptop to feature the latest NVIDIA GeForce GTX 1080 in 2-Way SLI. It gives gamers desktop-like gaming performance, silky-smooth gameplay and detailed 4K UHD gaming environments. The liquid-cooled ROG GX800 features the ROG-exclusive Hydro Overclocking System that allows for extreme overclocking of the processor, graphics card and DRAM. In 3DMark 11 and Fire Strike Ultra benchmark tests, a ROG GX800 equipped with the Hydro Overclocking System scored 76% higher than other gaming laptops in the market.

ROG GX800 comes with NVIDIA G-SYNC technology and has plug-and-play compatibility with leading VR headsets to allow gamers to enjoy truly immersive VR environments. It has the MechTAG (Mechanical Tactile Advanced Gaming) keyboard with mechanical switches and customizable RGB LED backlighting for each key."

gx800_3.jpg

Specifics on availability and pricing were not included in the announcement.

Source: ASUS

Serious mobile gaming power from ASUS, if you can afford it

Subject: Mobile | August 19, 2016 - 03:15 PM |
Tagged: asus, ROG, gtx 1070, G752VS OC Edition, pascal, gaming laptop

The mobile version of the GTX 1070, referred to here as the GTX 1070M even if NVIDIA doesn't use that name, is an interesting part sporting 128 more CUDA cores than the desktop version, albeit at a lower clock.  Hardware Canucks received the ASUS ROG G752VS OC Edition gaming laptop, which uses the mobile GTX 1070 overclocked by 50MHz on the core and 150MHz on its 8GB of memory, along with an i7-6820 running at 3.8GHz.  This particular model will set you back $3000 US and offers very impressive performance on either its 17.3" 1080p G-SYNC display or an external display of your choice.  The difference in performance between the new GTX 1070(M) and the previous GTX 980M is marked; check out the full review to see just how much better this card is ... assuming the price tag doesn't immediately turn you off.

GTX1070-NOTEBOOK-10.jpg

"The inevitable has finally happened: NVIDIA's Pascal architecture has made its way into gaming notebooks....and it is spectacular. In this review we take a GTX 1070-totting laptop out for a spin. "

Here are some more Mobile articles from around the web:

More Mobile Articles

Now we know what happened to Josh's stream; does your camera do YUY2 encoding?

Subject: General Tech | August 19, 2016 - 01:06 PM |
Tagged: yuy2, windows 10, skype, microsoft, idiots

In their infinite wisdom, Microsoft has disabled MJPEG and H.264 encoding on USB webcams for Skype in their Adversary Update to Windows 10, leaving YUY2 as your only encoding choice.  The supposed reasoning behind this is to ensure there is no duplication of encoding, which could lead to poor performance; ironically, the result of this change is poor performance for the majority of users, such as Josh.  Supposedly a fix will be released some time in September, but for now the only option is to roll back your AU installation, assuming you are not already past the 10-day deadline.  You can thank Brad Sams over at Thurrott.com for getting to the bottom of the issue that has been plaguing Skype users; pick up some more details in his post.

4520-max_headroom_31.jpg

"Microsoft made a significant change with the release of Windows 10 and support for webcams that is causing serious problems for not only consumers but also the enterprise. The problem is that after installing the update, Windows no longer allows USB webcams to use MJPEG or H264 encoded streams and is only allowing YUY2 encoding."

Here is some more Tech News from around the web:

Tech Talk

Source: Thurrott

Gamescom 2016: Mount & Blade II: Bannerlord Video

Subject: General Tech | August 19, 2016 - 01:40 AM |
Tagged: mount & blade ii, taleworlds

Mount & Blade is a quite popular franchise in some circles. It is based around a fairly simple, but difficult to master combat system, which mixes melee, blocking, and ranged attacks. They are balanced by reload time (and sometimes accuracy) to make all methods viable. A 100 vs 100 battle, including cavalry and other special units, is quite unique. It is also a popular mod platform, although Warband's engine can be a little temperamental.

As such, there's quite a bit of interest in the upcoming Mount & Blade II: Bannerlord. The Siege game mode involves an attacking wave beating down a fortress, trying to open as many attack paths as possible, and eventually overrunning the defenders. The above video is from the defending perspective. Mechanically, it seems to have changed significantly from Warband, particularly from the Napoleonic Wars DLC that I'm used to. In that mod, attackers spawn infinitely until a time limit is reached; this version apparently focuses on single-life AI armies, which Warband had as Commander Battles.

Hmm. Still no release date, though.

AMD Announces TrueAudio Next

Subject: Graphics Cards | August 18, 2016 - 07:58 PM |
Tagged: amd, TrueAudio, trueaudio next

Using a GPU for audio makes a lot of sense. That said, the original TrueAudio was not really about that, and it didn't really take off. The API was only implemented in a handful of titles, and it required dedicated hardware that AMD has since removed from their latest architectures. It was not about using the extra horsepower of the GPU to simulate sound, although AMD did have ideas for “sound shaders” in the original TrueAudio.

amd-2016-true-audio-next.jpg

TrueAudio Next, on the other hand, is an SDK that is part of AMD's LiquidVR package. It is based around OpenCL; specifically, it uses AMD's open-source FireRays library to trace the paths that audio can take from source to receiver, including reflections. For high-frequency audio, treating sound propagation as rays is a good approximation, and that range of frequencies is more useful for positional awareness in VR anyway.

Basically, TrueAudio Next has very little to do with the original.

Interestingly, AMD is providing an interface for TrueAudio Next to reserve compute units, although it is optional (and under NDA). This allows audio processing to be unhooked from the video frame rate, provided that the CPU can keep both fed with actual game data. Since audio is typically a secondary thread, it could be ready to send sound calls at any moment. Various existing portions of asynchronous compute could help with this, but allowing developers to wholly reserve a fraction of the GPU should remove the issue entirely. That said, when I was working on a similar project in WebCL, I was looking at the integrated GPU, because it's there and it's idle, so why not? I would assume that, in actual usage, CU reservation would only be enabled if an AMD GPU is the only device installed.
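
TrueAudio Next builds its propagation on FireRays and OpenCL, and I have not mocked up that actual API here; but the underlying idea of turning source-to-listener paths into per-path delay and attenuation can be sketched with a toy image-source model (the room geometry, speed of sound, and 1/distance falloff below are illustrative assumptions of mine):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def mirror(point, axis, plane_pos):
    """Image-source trick: reflect the source across an axis-aligned plane."""
    p = list(point)
    p[axis] = 2.0 * plane_pos - p[axis]
    return tuple(p)

def path_contribution(source, listener):
    """Delay in seconds and a simple 1/distance gain for one straight-line path."""
    dist = math.dist(source, listener)
    return dist / SPEED_OF_SOUND, 1.0 / max(dist, 1.0)

# Toy room: a floor at y = 0 and one wall at x = 5 (axis index, plane position).
planes = [(1, 0.0), (0, 5.0)]
source, listener = (1.0, 1.7, 2.0), (4.0, 1.7, 3.0)

paths = [path_contribution(source, listener)]                     # direct path
paths += [path_contribution(mirror(source, axis, pos), listener)  # first-order reflections
          for axis, pos in planes]

for i, (delay, gain) in enumerate(paths):
    print(f"path {i}: delay {delay * 1000:5.2f} ms, gain {gain:.3f}")
```

A real engine would trace far more rays against arbitrary geometry and fold the results into an impulse response to convolve with the audio, which is exactly the kind of embarrassingly parallel work that maps well to a few reserved compute units.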

Anywho, if you're interested, then be sure to check out AMD's other post on it, too.

Source: AMD

More space than even Jimmy Stewart would need to satisfy his voyeurism

Subject: Storage | August 18, 2016 - 02:59 PM |
Tagged: skyhawk, Seagate, rear window, hitchcock, 10TB

Seagate designed the 10TB SkyHawk HDD for recording video surveillance, adding in firmware they refer to as ImagePerfect.  It is designed to handle 24/7 surveillance, and the drive is rated for a 180TB-per-year workload over the length of its three-year warranty.  Constantly recording video means this drive will write far more often than in most other usage scenarios, with reads far less important.  eTeknix ran the drive through their usual suite of benchmarks, as it is somewhat difficult to set up a test that verifies the claimed support for up to 64 simultaneous HD recordings.  If you are looking for durable storage at a reasonable price, perhaps in an enclosure with more than eight drive bays, you should check out the SkyHawk.

Seagate_SkyHawk-Photo-top-angle.jpg

"I’ve recently had a look at the 10TB IronWolf NAS HDD from Seagate and today it is time to take a closer look at its brother, the brand new SkyHawk DVR and NVR hard disk drive with a massive 10TB capacity. Sure, you could use NAS optimized drives for simple video setups, but having a video and camera optimized surveillance disk does bring advantages. Especially when your recorded video is critical."

Here are some more Storage reviews from around the web:

Storage

Source: eTeknix

NVIDIA Officially Announces GeForce GTX 1060 3GB Edition

Subject: Graphics Cards | August 18, 2016 - 02:28 PM |
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores

NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.

GTX1060.jpg

The GTX 1060 Founders Edition

The product page on NVIDIA.com now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP for this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be designated only by their memory size, and no other capacities of either card are forthcoming.

                       GeForce GTX 1060 3GB   GeForce GTX 1060 6GB
Architecture           Pascal                 Pascal
CUDA Cores             1152                   1280
Base Clock             1506 MHz               1506 MHz
Boost Clock            1708 MHz               1708 MHz
Memory Speed           8 Gbps                 8 Gbps
Memory Configuration   3GB                    6GB
Memory Interface       192-bit                192-bit
Power Connector        6-pin                  6-pin
TDP                    120W                   120W

As you can see from the above table, the only specification that has changed is the CUDA core count, with base/boost clocks, memory speed and interface, and TDP identical. As to performance, NVIDIA says the 6GB version holds a 5% performance advantage over this lower-cost version, which at $199 is 20% less expensive than the previous GTX 1060 6GB.
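
Taking NVIDIA's ~5% figure at face value, the value math works out roughly like this (the $249 price for the 6GB card is the one implied by the "20% less expensive" comparison):

```python
# Rough perf-per-dollar comparison using the figures above: the 6GB card is
# ~5% faster, the 3GB card is $199, and $249 for the 6GB model is implied by
# the "20% less expensive" claim.

cards = {
    "GTX 1060 3GB": {"perf": 1.00 / 1.05, "price": 199.0},  # ~95% of the 6GB card
    "GTX 1060 6GB": {"perf": 1.00,        "price": 249.0},
}

for name, c in cards.items():
    value = c["perf"] / c["price"] * 100
    print(f"{name}: {c['perf']:.3f}x relative performance, {value:.3f} perf per $100")
```

By that simple math the 3GB card offers roughly 19% more performance per dollar, with the obvious caveat that the smaller framebuffer will pinch in some titles and settings.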

Source: NVIDIA

Intel's new SoC, the Joule

Subject: General Tech | August 18, 2016 - 02:20 PM |
Tagged: Intel, joule, iot, IDF 2016, SoC, 570x, 550x, Intel RealSense

Intel has announced the follow-up to Edison and Curie, their current SoC devices, called Joule.  They have moved away from the Quark processors they previously used to a current-generation Atom.  The device is designed to compete against NVIDIA's Jetson, as it is far more powerful than a Raspberry Pi and destined for different usage.  It will support Intel RealSense, perhaps appearing in the newly announced Project Alloy VR headset.  Drop by Hack a Day for more details on the two soon-to-be-released models, the Joule 570x and 550x.

intel-joule-1-2x1-720x360.jpg

"The high-end board in the lineup features a quad-core Intel Atom running at 2.4 GHz, 4GB of LPDDR4 RAM, 16GB of eMMC, 802.11ac, Bluetooth 4.1, USB 3.1, CSI and DSI interfaces, and multiple GPIO, I2C, and UART interfaces."

Here is some more Tech News from around the web:

Tech Talk

Source: Hack a Day

Podcast #413 - NVIDIA Pascal Mobile, ARM and Intel partner on 10nm, Flash Memory Summit and more!

Subject: Editorial | August 18, 2016 - 02:20 PM |
Tagged: video, podcast, pascal, nvidia, msi, mobile, Intel, idf, GTX 1080, gtx 1070, gtx 1060, gigabyte, FMS, Flash Memory Summit, asus, arm, 10nm

PC Perspective Podcast #413 - 08/18/2016

Join us this week as we discuss the new mobile GeForce GTX 10-series gaming notebooks, ARM and Intel partnering on 10nm, Flash Memory Summit and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts:  Allyn Malventano, Sebastian Peak, Josh Walrath and Jeremy Hellstrom

Program length: 1:29:39
  1. Week in Review:
  2. This episode of PC Perspective is brought to you by Casper!! Use code “PCPER”
  3. News items of interest:
    1. 0:42:05 Final news from FMS 2016
  4. Hardware/Software Picks of the Week
    1. Ryan: VR Demi Moore
  5. Closing/outro

Author:
Subject: Processors
Manufacturer: AMD
Tagged: Zen, amd

Gunning for Broadwell-E

As I walked away from the St. Regis in downtown San Francisco tonight, I found myself wandering through the streets towards my hotel with something unique in tow. It was a smile. I was smiling, thinking about what AMD had just demonstrated and shown at its latest Zen processor reveal. The importance of this product launch simply cannot be overstated for a company struggling to find a foothold in a market in which it once had a definitive lead. It’s been many years since I left a conference call, or a meeting, or a press conference feeling genuinely hopeful and enthusiastic about what AMD has shown me. Tonight I had that.

AMD’s CEO Lisa Su and CTO Mark Papermaster took the stage down the street from the Intel Developer Forum to roll out a handful of new architectural details about the Zen architecture while also showing the first performance results comparing it to competing parts from Intel. The crowd in attendance, a mix of media and analysts, was impressed; the feeling was palpable in the room.

zenicon.jpg

It’s late as I write this, and while there are some interesting architecture details to discuss, I think it is in everyone’s best interest that we touch on them lightly for now, and instead refocus on the deep-dive once the Hot Chips information comes out early next week. What you really want to know is clear: can Zen make Intel work again? Can Zen make that $1700 price tag on the Broadwell-E 6950X seem even more ludicrous? Yes.

The Zen Architecture

Much of what was discussed about the Zen architecture is a re-release of what has come out in recent months. This is a completely new, from-the-ground-up microarchitecture, not a revamp of the aging Bulldozer design. It integrates SMT (simultaneous multi-threading), a first for an AMD CPU, to take more efficient advantage of a longer pipeline. Intel has had HyperThreading for a long time now and AMD is finally joining the fold. A high-bandwidth, low-latency cache system is used to “feed the beast,” as Papermaster put it, and 14nm process technology (starting at GlobalFoundries) gives efficiency and scaling a significant bump while enabling AMD to span notebooks, desktops, and servers with the same architecture.

zenpm-10.jpg

By far the most impressive claim from AMD thus far was that of a 40% increase in IPC over previous AMD designs. That’s a HUGE claim and is key to the success or failure of Zen. AMD proved to me today that the claims are real and that we will see the immediate impact of that architecture bump from day one.
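
As a sanity check on what a 40% IPC uplift means, per-core throughput can be modeled very crudely as IPC × clock; the figures below use AMD's claimed 1.4x factor, while the clock speeds are placeholders of mine rather than announced Zen frequencies:

```python
# Crude single-threaded model: performance ~ IPC x clock. The 1.40x IPC factor
# is AMD's claim; the 3.0 GHz clocks are placeholders, not announced Zen specs.

def relative_perf(ipc_scale, clock_ghz, base_ipc=1.0, base_clock_ghz=3.0):
    return (ipc_scale / base_ipc) * (clock_ghz / base_clock_ghz)

print(f"Zen at 3.0 GHz vs. prior core at 3.0 GHz: {relative_perf(1.40, 3.0):.2f}x")
print(f"Clock a prior core would need to match it: {3.0 * 1.40:.1f} GHz")
```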

zenpm-4.jpg

The press was told of a handful of high-level changes to the new architecture as well. Branch prediction gets a complete overhaul, and this marks the first AMD processor to have a micro-op cache. Wider execution resources and broader instruction schedulers are integrated, all of which adds up to much higher instruction-level parallelism and improved single-threaded performance.

zenpm-6.jpg

Performance improvements aside, throughput and efficiency go up with Zen as well. AMD has integrated an 8MB L3 cache and improved prefetching for up to 5x the cache bandwidth available per core. SMT keeps the pipeline full to prevent “bubbles” that introduce latency and lower efficiency, while region-specific power gating means that we’ll see Zen in notebooks as well as enterprise servers in 2017. It truly is an impressive design from AMD.

zenfull-27.jpg

Summit Ridge, the enthusiast platform that will be the first product available with Zen, is based on the AM4 platform, and processors will go up to 8 cores and 16 threads. DDR4 memory support is included, along with PCI Express 3.0 and what AMD calls “next-gen” I/O – I would expect a quick leap forward for AMD to catch up on things like NVMe and Thunderbolt.

The Real Deal – Zen Performance

As part of today’s reveal, AMD showed the first true comparison between Zen and Intel processors. Sure, AMD showed a Zen-powered system paired with a Fury X running the upcoming Deus Ex at 4K, but the really impressive results were shown when comparing Zen to a Broadwell-E platform.

zenfull-29.jpg

Using Blender to measure the performance of a rendering workload (a Zen CPU mockup, of course), AMD ran an 8-core / 16-thread Zen processor at 3.0 GHz against an 8-core / 16-thread Broadwell-E processor at 3.0 GHz (likely a fixed-clock Core i7-6900K). The point of the demonstration was to showcase the IPC improvements of Zen, and it worked: the render completed on the Zen platform a second or two faster than it did on the Intel Broadwell-E system.

DSC01490.jpg

Not much to look at, but Zen on the left, Broadwell-E on the right...

Of course there are lots of caveats: we didn’t set up the systems, I don’t know for sure that GPUs weren’t involved, we don’t know the final clocks of the Zen processors releasing in early 2017, etc. But I took two things away from the demonstration that are very important.

  1. The IPC of Zen is on-par or better than Broadwell.
  2. Zen will scale higher than 3.0 GHz in 8-core configurations.

AMD obviously didn’t state what specific SKUs were going to launch with the Zen architecture, what clock speeds they would run at, or even what TDPs they were targeting. Instead we were left with a vague but understandable remark of “comparable TDPs to Broadwell-E”.

Pricing? Overclocking? We’ll just have to wait a bit longer for that kind of information.

Closing Thoughts

There is clearly a lot more for AMD to share about Zen but the announcement and showcase made this week with the early prototype products have solidified for me the capability and promise of this new microarchitecture. We have asked for, and needed, as an industry, a competitor to Intel in the enthusiast CPU space – something we haven’t legitimately had since the Athlon X2 days. Zen is what we have been pining over, what gamers and consumers have needed.

zenpm-11.jpg

AMD’s processor stars might finally be aligning for a product that combines performance, efficiency and scalability at the right time. I’m ready for it – are you?

Author:
Subject: Editorial
Manufacturer: NVIDIA

NVIDIA Today?

It always feels a little odd covering NVIDIA’s quarterly earnings due to how they present their financial calendar.  No, we are not reporting from the future.  Yes, it can be confusing when comparing results and getting your dates mixed up.  Regardless of the dating, NVIDIA did exceptionally well in a quarter that is typically its second weakest after Q1.

NVIDIA reported revenue of $1.43 billion.  This is a jump from an already strong Q1, where they took in $1.30 billion.  Compare this to the $1.027 billion of its competitor AMD, which provides CPUs as well as GPUs.  NVIDIA sold a lot of GPUs as well as other products.  Their primary money makers were consumer GPUs and the professional and compute markets, on which they have a virtual stranglehold at the moment.  The company’s GAAP net income is a very respectable $253 million.

results.png

The release of the latest Pascal-based GPUs was the primary mover for the gains this latest quarter.  AMD has had a hard time competing with NVIDIA for market share.  The older Maxwell-based chips performed well against the entire line of AMD offerings and typically did so with better power and heat characteristics.  Even though the GTX 970 was somewhat limited in its memory configuration as compared to the AMD products (3.5 GB + 0.5 GB vs. a full 4 GB implementation), it was a top seller in its class.  The same could be said for the products up and down the stack.

Pascal was released at the end of May, but the company had been shipping chips to its partners beforehand as well as creating the “Founders Edition” models to its exacting specifications.  These were strong sellers from the end of May through the end of the quarter.  NVIDIA recently unveiled their latest Pascal-based Quadro cards, but we do not know how much of an impact those have had on this quarter.  NVIDIA has also been shipping, in very limited quantities, Tesla P100-based units to select customers and outfits.

Click to read more about NVIDIA's latest quarterly results!

Manufacturer: NVIDIA

Is Enterprise Ascending Outside of Consumer Viability?

So a couple of weeks have gone by since the Quadro P6000 was announced and the new Titan X launched. With them, we received a new chip: GP102. Since Fermi, NVIDIA has labeled their GPU designs with a G, followed by a single letter for the architecture (F, K, M, or P for Fermi, Kepler, Maxwell, and Pascal, respectively), which is then followed by a three-digit number. The last digit is the most relevant one, however, as it separates designs by their intended size.

nvidia-2016-Quadro_P6000_7440.jpg

Typically, 0 corresponds to a ~550-600mm2 design, which is about as large a design as fabs can create without resorting to error-prone techniques like multiple exposures (update for clarity: trying to precisely overlap multiple designs to form a larger integrated circuit). 4 corresponds to ~300mm2, although GM204 was pretty large at 398mm2, likely to increase the core count while remaining on a 28nm process. Higher numbers, like 6 or 7, fill out the lower-end SKUs until NVIDIA essentially stops caring for that generation. So when we moved to Pascal, jumping two whole process nodes, NVIDIA looked at their wristwatches and said “about time to make another 300mm2 part, I guess?”

The GTX 1080 and the GTX 1070 (GP104, 314mm2) were born.
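
As a quick illustration of that naming convention, here is a toy decoder; the architecture letters come from the paragraph above, and the die-size classes are this article's rough approximations rather than any official NVIDIA mapping:

```python
# Toy decoder for NVIDIA die names such as "GP104". The size classes are the
# rough approximations described above, not an official NVIDIA mapping.

ARCHITECTURES = {"F": "Fermi", "K": "Kepler", "M": "Maxwell", "P": "Pascal"}
SIZE_CLASSES = {
    "0": "~550-600 mm^2 flagship",
    "2": "large gaming split (e.g. GP102 at ~471 mm^2)",
    "4": "~300 mm^2 performance part",
    "6": "mainstream",
    "7": "entry-level",
}

def decode(die_name):
    arch = ARCHITECTURES.get(die_name[1], "unknown architecture")
    size = SIZE_CLASSES.get(die_name[-1], "unknown size class")
    return f"{die_name}: {arch}, {size}"

for name in ("GK110", "GM204", "GP100", "GP102", "GP104"):
    print(decode(name))
```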

nvidia-2016-gtc-pascal-banner.png

NVIDIA already announced a 600mm2 part, though. The GP100 had 3840 CUDA cores, HBM2 memory, and an ideal ratio of 1:2:4 between FP64:FP32:FP16 performance. (A 64-bit chunk of memory can store one 64-bit value, two 32-bit values, or four 16-bit values, unless the register is attached to logic circuits that, while smaller, don't know how to operate on the data.) This increased ratio, even over Kepler's 1:6 FP64:FP32, is great for GPU compute, but wasted die area for today's (and tomorrow's) games. I'm predicting that it takes the wind out of Intel's sales, as Xeon Phi's 1:2 FP64:FP32 performance ratio is one of its major selling points, leading to its inclusion in many supercomputers.
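
The storage math in that parenthetical is easy to demonstrate: the same 64-bit chunk holds one FP64 value, two FP32 values, or four FP16 values. The NumPy snippet below only illustrates that packing; whether the hardware can actually operate at those rates depends on the ALUs attached, as noted above:

```python
import numpy as np

# One 64-bit chunk of storage, reinterpreted at three precisions.
chunk = np.zeros(1, dtype=np.float64)   # 8 bytes of storage

print(chunk.view(np.float64).size)  # 1 double-precision value
print(chunk.view(np.float32).size)  # 2 single-precision values
print(chunk.view(np.float16).size)  # 4 half-precision values
```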

Even though the HBM2 memory controller is supposedly smaller than a GDDR5(X) one, NVIDIA could still save die space while providing 3840 CUDA cores (a few of which are disabled on Titan X). The trade-off is that FP64 and FP16 performance had to decrease dramatically, from 1:2 and 2:1 relative to FP32 all the way down to 1:32 and 1:64. This new design comes in at 471mm2, although it launched $200 more expensive than the 600mm2 products, GK110 and GM200, did. Smaller dies provide more chips per wafer and, better still, the number of defective dies per wafer should stay relatively constant, so yields improve.
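
To see why that smaller die matters economically, a common gross-dies-per-wafer approximation is π·d²/(4A) − π·d/√(2A) for die area A on a wafer of diameter d; the sketch below uses a standard 300 mm wafer, treats the big die as the article's ~600 mm² class, and ignores defects and scribe lines entirely:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Standard approximation: pi*d^2/(4A) - pi*d/sqrt(2A); ignores defects/scribe."""
    d, a = wafer_diameter_mm, die_area_mm2
    return math.floor(math.pi * d ** 2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

for label, area in [("GP102-class (~471 mm^2)", 471.0), ("GP100-class (~600 mm^2)", 600.0)]:
    print(f"{label}: ~{gross_dies_per_wafer(area)} candidate dies per 300 mm wafer")
```

That works out to roughly 30% more candidate dies per wafer before defects are even counted, which, combined with the near-constant defect count per wafer, is the economic case for splitting the big-die line.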

Anyway, that aside, it puts NVIDIA in an interesting position. Splitting the xx0-class chip into xx0 and xx2 designs allows NVIDIA to lower the cost of their high-end gaming parts, although it cuts out hobbyists who buy a Titan for double-precision compute. More interestingly, it leaves around 150mm2 for AMD to sneak in a design that's FP32-centric, leaving them a potential performance crown.

nvidia-2016-pascal-volta-roadmap-extremetech.png

Image Credit: ExtremeTech

On the other hand, as fabrication node changes are becoming less frequent, it's possible that NVIDIA could be leaving itself room for Volta, too. Last month, it was rumored that NVIDIA would release two architectures at 16nm, in the same way that Maxwell shared 28nm with Kepler. In this case, Volta, on top of whatever other architectural advancements NVIDIA rolls into that design, can also grow a little in size. At that time, TSMC would have better yields, making a 600mm2 design less costly in terms of waste and recovery.

If this is the case, we could see the GPGPU folks receiving a new architecture once every second gaming (and professional graphics) architecture. That is, unless you are a hobbyist. If you are, then I would need to be wrong, or NVIDIA would need to somehow bring their enterprise SKU down to an affordable price point. The xx0 class seems to have been pushed up and out of viability for consumers.

Or, again, I could just be wrong.

Gigabyte BRIX Gaming UHD Combines 2.6L Chassis with Discrete GPU

Subject: Systems | August 17, 2016 - 04:37 PM |
Tagged: UHD, SFF, IDF 2016, idf, gigabyte, gaming, brix

While wandering around the exhibit area at this year’s Intel Developer Forum, I ran into our friends at Gigabyte showing off a brand new BRIX small form factor PC. The BRIX Gaming UHD takes the now-standard NUC/BRIX block shape and literally raises it up, extending the design vertically to allow for higher-performance components and the added cooling capability needed to integrate them.

brixuhd06.jpg

The design of the BRIX Gaming UHD combines a brushed aluminum housing with a rubber base and bordering plastic sections to create a particularly stunning design that is both simple and interesting. Up top is a fan that pulls air through the entire chassis, running over the heatsink for the CPU and GPU. This is similar in function to the Mac Pro, though this is a much more compact device with a very different price point and performance target.

brixuhd08.jpg

Around the back you’ll find all the connections that the BRIX Gaming UHD supplies: three (!!) mini DisplayPort connections, a full-size HDMI output, four USB 3.0 ports, a USB 3.1 connection, two wireless antenna ports, Gigabit Ethernet, and audio input and output. That is a HUGE amount of connectivity and more than many consumers’ current full-size desktops offer.

brixuhd07.jpg

The internals of the system are impressive and required some very custom design for cooling and layout.

brixuhd02.jpg

The discrete NVIDIA graphics chip (in this case the GTX 950) sits in one chamber, while the Core i7-6500HQ Skylake processor sits in the other along with the memory slot and wireless card.

brixuhd04.jpg

Gigabyte measures the size of the BRIX Gaming UHD at 2.6 liters. Because of that compact space there is no room for hard drives: you get access to two M.2 2280 slots for storage instead. There are two SO-DIMM slots for DDR4 memory up to 2133 MHz, integrated 802.11ac support and support for quad displays.

Availability and pricing are still up in the air; early reports put the starting cost at $1300, but Gigabyte has since told me that the BRIX Gaming UHD will be available in October and that an accurate MSRP has not yet been set. It would not surprise me if this model never actually saw the light of day and Gigabyte instead waited for NVIDIA’s next low-power Pascal-based GPU, likely dubbed the GTX 1050. We’ll keep an eye on the BRIX Gaming UHD from Gigabyte to see what else transpires, but it seems the trend of small form factor PCs that sacrifice less in terms of true gaming potential continues.

HP's Latest Omen Desktop Puts a Gaming System in a Cube

Subject: Systems | August 17, 2016 - 04:25 PM |
Tagged: PC, Omen 900, Omen, hp, gaming, desktop, cube, computer

HP has introduced a new pre-built gaming desktop, and while the Omen series has existed for a while, the new model offers a very different chassis design.

omen_01.jpg

This Omen isn't just cube-like, it's actually a cube (Image credit: HP)

Inside the specifications look like the typical pre-built gaming rig, with processors up to an Intel Core i7 6700K and graphics options including AMD's Radeon RX 480 and the NVIDIA GeForce GTX 1080. Configurations on HP's online store start at $1799 for a version with a GTX 960, a 1TB spinning hard drive, and a single 8GB DIMM. (Curiously, though reported as the "Omen X", the current listing is for an "Omen 900".)

OMEN_X_Radeon_Performance.jpg

A look inside an AMD Crossfire configuration (Image credit: HP via The Verge)

HP is certainly no stranger to unusual chassis designs, as those who remember the Blackbird 002 (which Ryan stood on - and reviewed - here) and subsequent Firebird 803 systems will know. The Verge is reporting that HP will offer the chassis as a standalone product for $599, itself an unusual move for the company.

omen_2.jpg

(Image credit: HP)

The new Omen desktop goes on sale officially starting tomorrow.

Source: The Verge

Corsair's Dominator Platinum, 64GB of DDR4-3200

Subject: Memory | August 17, 2016 - 04:15 PM |
Tagged: Corsair Dominator Platinum, corsair, ddr4-3200

It will certainly cost you quite a bit to pick up, but if you have a need for a huge pool of memory, the 64GB Corsair Dominator Platinum DDR4-3200 kit is an option worth considering.  The default timings are 16-18-18-36, and the heat spreader and DHX cooling fins keep the DIMMs from heating up, even when Overclockers Club upped the voltage to 1.45V.  Part of the price premium is the testing done before these DIMMs left the factory, as well as the custom PCB and hand-picked ICs, which should translate to a minimum of issues when running at full speed or even overclocked.  Pop by to see how this kit performed in OC's benchmarks.
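
For context on those timings, absolute CAS latency is just the CL count divided by the memory clock (half the transfer rate), so DDR4-3200 CL16 lands at 10 ns; the DDR4-2400 CL12 comparison kit below is my own illustrative example:

```python
# Absolute CAS latency: CL cycles divided by the memory clock, which is half
# the DDR transfer rate. The DDR4-2400 CL12 entry is an illustrative comparison.

def cas_latency_ns(transfer_rate_mts, cl):
    memory_clock_mhz = transfer_rate_mts / 2
    return cl / memory_clock_mhz * 1000  # cycles / MHz -> ns

print(f"DDR4-3200 CL16: {cas_latency_ns(3200, 16):.1f} ns")
print(f"DDR4-2400 CL12: {cas_latency_ns(2400, 12):.1f} ns")
```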

3_thumb.jpg

"If I break it down, you get a set of modules that have been through an extensive binning process that hand selects the memory ICs being used on these modules. There is a custom designed, cooling optimized PCB that those memory IC's are mounted to so that we can enjoy a trouble free user experience. The DHX cooling solution on these modules is easily up to the task of keeping the modules cool with minimal airflow. The heat spreader and DHX cooling fins are designed to use convective cooling in the absence of any airflow over the modules."

Here are some more Memory articles from around the web:

Memory

Skyrim isn't done with you yet; full conversion Enderal arrives

Subject: General Tech | August 17, 2016 - 02:02 PM |
Tagged: Enderal, SureAI, mod, skyrim

Enderal: The Shards of Order is a total conversion of the Steam version of Skyrim into a completely new game in a brand new world.  The mod is 8 GB in size and requires a separate launcher, both available at Enderal.com, and you can expect between 30 and 100 hours of playtime.  You may remember this team from Nehrim, their previous total conversion mod for Oblivion.  Rock, Paper, Shotgun covered this mod previously; however, until now it was only available in German.  The full English version, including voice acting, is now complete and ready for you to dive into.  You might want to consider unmodding your Skyrim before installing; the installer does create a copy of the Skyrim installation, so you can restore your Thomas the Tank Engine mod once you are set up.

20131110214924-cf65c678-xl.png

"A final version of Enderal: The Shards of Order has been completed and can be downloaded for free now. While ‘Enderal’ sounds like it could be something made by a United States pharmaceutical company, it is actually a massive total conversion mod for Skyrim, not just adding new weapons or turning it into a survival game, but creating a whole new RPG using the raw materials of its parent."

Here is some more Tech News from around the web:

Gaming

Source: Rock, Paper, Shotgun

Intel Larrabee Post-Mortem by Tom Forsyth

Subject: Graphics Cards, Processors | August 17, 2016 - 01:38 PM |
Tagged: Xeon Phi, larrabee, Intel

Tom Forsyth, who is currently at Oculus, was once on the core Larrabee team at Intel. Just prior to Intel's IDF conference in San Francisco, which Ryan is attending and covering as I type this, Tom wrote a blog post that outlines the project and its design goals, including why it never hit the market as a graphics device. He even goes into the details of the graphics architecture, which was almost entirely in software apart from the texture units and video out. For instance, Larrabee ran FreeBSD with a program, called DirectXGfx, that gave it the DirectX 11 feature set -- and it worked on hundreds of titles, too.

Intel_Xeon_Phi_Family.jpg

Also, if you found the discussion interesting, then there is plenty of content from back in the day to browse. A good example is an Intel Developer Zone post from Michael Abrash that discussed software rasterization, doing so with several really interesting stories.

That old chestnut again? Intel compares their current gen hardware against older NVIDIA kit

Subject: General Tech | August 17, 2016 - 12:41 PM |
Tagged: nvidia, Intel, HPC, Xeon Phi, maxwell, pascal, dirty pool

There is a spat going on between Intel and NVIDIA over the slide below, as you can read about over at Ars Technica.  It seems that Intel has reached into the industry's bag of dirty tricks and dusted off an old standby: testing new hardware and software against older products from their competitors.  In this case it was high-performance computing products that were tested, Intel's new Xeon Phi against NVIDIA's Maxwell, on an older version of the Caffe AlexNet benchmark.

NVIDIA points out that not only would they have done better than Intel if an up-to-date version of the benchmarking software had been used, but that the comparison should have been against their current architecture, Pascal.  This is not quite as bad as putting undocumented flags into compilers to reduce the performance of competitors' chips, or as predatory discount programs, but it shows that the computer industry continues to have only a passing acquaintance with fair play and honest competition.

intel-xeon-phi-performance-claim.jpg

"At this juncture I should point out that juicing benchmarks is, rather sadly, par for the course. Whenever a chip maker provides its own performance figures, they are almost always tailored to the strength of a specific chip—or alternatively, structured in such a way as to exacerbate the weakness of a competitor's product."

Here is some more Tech News from around the web:

Tech Talk

Source: Ars Technica

Cooler Master Introduces MasterLiquid Maker 92 AIO Liquid CPU Cooler

Subject: Cases and Cooling | August 17, 2016 - 11:43 AM |
Tagged: cooler master, MasterLiquid Maker 92, AIO, liquid cooler, self contained, convertible

Cooler Master has introduced an unusual all-in-one liquid CPU cooler with their new MasterLiquid Maker 92, a design which places all of the components together on top of the CPU block.

main.jpg

We've seen a similar idea from Corsair with the cooler first found in the Bulldog system, and later introduced separately as the H5 SF mini-ITX liquid cooler. Cooler Master's design uses a different arrangement, with push-pull 92mm fans sandwiching a radiator that rotates 90º to permit either a vertical or horizontal setup. The latter position allows for better low-profile chassis compatibility and also adds airflow to motherboard components.

main_2.jpg

Specifications:

  • Model: MLZ-H92M-A26PK-R1
  • CPU Socket: Intel LGA 2011-v3 / 2011 / 1151 / 1150 / 1155 / 1156
  • Power Connector: SATA and 4-Pin
  • Radiator Material: Aluminum
  • Dimensions:
    • Vertical: 99.9 x 81.6 x 167.5 mm (3.9 x 3.2 x 6.6”)
    • Horizontal: 99.9 x 142 x 118.8 mm (3.9 x 5.6 x 4.7”)
  • Fan:
    • Dimensions: Φ95 x 25.4 mm (3.7 x 1”)
    • Airflow: 49.7 CFM (max)
    • Air Pressure: 6.4 mmH2O (max)
    • Noise Level: 30 dBA (max)
  • Pump:
    • Noise Level: <12 dBA (max)
    • MTTF: 175,000 hours
    • L-10 Life: 50,000 hours
    • Rated Voltage: 12VDC
  • Warranty: 5 Years

design.jpg

Cooler Master is offering pre-orders on a first-come, first-served basis beginning August 30 from this page. Pricing has not yet been listed.