Subject: General Tech | August 18, 2014 - 10:33 AM | Jeremy Hellstrom
Tagged: gigabyte, Extreme Overclocking Competition, overclocking
The weapons this year at Gigabyte's EOC were a Core i5-4690K and a Core i7-4790K, a Gigabyte Z97X-SOC FORCE LN2 motherboard, a Gigabyte HD7790, G.Skill TridentX F3-2933C12D-8GTXDG memory, and a Seasonic SSX-1200 Platinum PSU. Team Awardfabrik hit 6578MHz on the i7-4790K with a mix of luck and skill, while Team Switzerland took the top spot for memory at 2106.3MHz. Raw speed from a single component is not enough to win this competition, and when the nitrogen fog lifted it was Team HardwareLuxx with the overall win. Check out the benchmarks that were run, along with pictures and video from the event, at MadShrimps.
"Each year Gigabyte Germany organizes the Extreme Overclocking Competition. At the EOC the best overclocking teams of Germany have a chance to prove who is still king. The main organizer behind each event is Germany’s finest Roman Hartung also known as der8auer at HWBot.org. This year besides Gigabyte also G.Skill, Intel, Seasonic and Gelid solutions provided the required hardware and funds to allow this clash of the titans to take place at the Know Cube at the Heilbronn Tech University."
Here is some more Tech News from around the web:
- The TR Podcast 159: Kaveri returns, Shield delivers, and Brix gets game @ The Tech Report
- How to Encrypt Email in Linux @ Linux.com
- Microsoft cries UNINSTALL in the wake of Blue Screens of Death @ The Register
- HTC One W8 leak reveals Windows Phone 8.1, Blinkfeed app @ The Inquirer
- Boffins find hundreds of thousands of woefully insecure IoT devices @ The Register
- LUXA2 PL2 6000mAh Leather Power Bank Review @ NikKTech
- NikKTech & Thermalright Worldwide Summer Giveaway
- Gamescom 2014 Gallery @ Kitguru
Falcon Northwest Tiki-Z Special Edition Crams Titan Z And Liquid Cooled i7-4790K CPU Into A Stylish Micro Tower
Subject: General Tech, Systems | August 15, 2014 - 10:40 PM | Tim Verry
Tagged: titan z, tiki-z, gtx titan z, gk110, falcon northwest, dual gpu
The Tiki-Z Special Edition is the latest custom PC from boutique vendor Falcon Northwest. This high-end enthusiast system, which starts at $5,614, manages to pack a dual GPU graphics card, a liquid cooled CPU, a 600W power supply, and up to 6TB of storage into a stylish micro tower that measures a mere 4” wide and 13” tall.
Falcon Northwest has taken the original Tiki chassis and made several notable tweaks to accommodate NVIDIA’s latest dual GPU card: the GeForce GTX TITAN Z, which we reviewed here. The case has a custom (partial) side window that shows off the graphics card; the window can be green glass or smoke tinted acrylic with customizable laser cut venting. A ducted intake feeds cool air to the graphics card, while vents at the rear and front of the case exhaust hot air. The exterior can be painted in any single automotive color for free, or given a fully customized paint scheme with artwork at additional cost.
In addition to the Titan Z with its 5,760 CUDA cores, 12GB of memory, and 8.1 TFLOPS of peak compute power, Falcon Northwest has packed in a modular small form factor 600W PSU from SilverStone, an ASUS Z97I Plus motherboard, an Intel Core i7-4790K “Devil’s Canyon” CPU with liquid cooler, up to 16GB of DDR3 1866MHz memory from G.Skill, and up to 6TB of storage (two 1TB SSDs and one 4TB Western Digital Green hard drive). The i7-4790K comes at its stock 4GHz clock (4.4GHz max turbo), but can be overclocked by Falcon Northwest upon request.
Needless to say, that is a lot of hardware to cram into a PC that can easily sit next to your monitor at your desk or in your living room!
The engineering, artwork, and support of this high end system all come at a price, however. The new Titan Z powered boutique PC starts at $5,614 USD and is available now from Falcon Northwest. To sweeten the deal, for a limited time Falcon Northwest is including a free ASUS PB287Q 4K monitor (3840x2160, 60Hz, 1ms response time; see the full specifications in our review) with each Tiki-Z purchase.
This system is an impressive feat of engineering and it certainly looks sharp with the artwork, custom side panel, and compact form factor. My only concern from a usability standpoint would be noise from the cooling systems for the GPU, CPU radiator, and PSU. One also has to consider that the Titan Z graphics card by itself is priced at $3,000, which puts the Tiki-Z back into the somewhat sane world of boutique PC pricing (heh, at about $2,600 for the system minus the GPU). No question, this is not going to be a system for everyone, and it will be a niche product even within the niche market of enthusiasts interested in pre-built gaming systems. Even so, if noise levels can be held in check it will make for one powerful little gaming box!
Subject: General Tech, Graphics Cards, Shows and Expos | August 15, 2014 - 05:33 PM | Scott Michaud
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd
Let's be clear: there are two stories here. The first is the release of OpenGL 4.5 and the second is the announcement of the "Next Generation OpenGL Initiative". They both arrive in the same press release, but they are two different statements.
OpenGL 4.5 Released
OpenGL 4.5 expands the core specification with a few extensions. Compatible hardware, with OpenGL 4.5 drivers, will be guaranteed to support these. This includes features like ARB_direct_state_access, which allows modifying objects without binding them to the context first, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which allows easier porting of OpenGL ES 3.1 applications to OpenGL.
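To make the direct state access change concrete, here is a minimal sketch (mine, not from the press release) contrasting the classic bind-to-edit pattern with the new DSA entry points. It assumes an OpenGL 4.5 context and a loader such as GLEW or glad are already initialized; the texture size and format are illustrative.

```cpp
#include <GL/glew.h>  // any loader exposing GL 4.5 entry points works

// Classic pattern: objects must be bound to a context target before
// they can be configured, and the binding leaks into global state.
GLuint makeTextureClassic() {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);   // bind just to edit
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 256, 256);
    glBindTexture(GL_TEXTURE_2D, 0);     // undo the side effect
    return tex;
}

// OpenGL 4.5 DSA: the object is created fully formed and edited by
// name, with no binding required and no global state touched.
GLuint makeTextureDSA() {
    GLuint tex;
    glCreateTextures(GL_TEXTURE_2D, 1, &tex);
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTextureStorage2D(tex, 1, GL_RGBA8, 256, 256);
    return tex;
}
```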
It also adds a few new extensions as an option:
ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (listing completed work to identify optimization points); see the sketch after this list.
ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without committing the whole buffer to memory. This is similar to ARB_sparse_texture... except that those are for textures. Buffers are useful for things like vertex data (and so forth).
ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether a transform feedback buffer has overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.
KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
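As promised above, here is a minimal sketch (again mine, not Khronos sample code) of what an ARB_pipeline_statistics_query might look like in practice, counting vertex shader invocations for one frame. drawScene() is a placeholder for the application's own rendering code.

```cpp
#include <cstdio>
#include <GL/glew.h>  // loader must expose the extension's tokens

extern void drawScene();  // placeholder for the app's draw calls

void profileOneFrame() {
    GLuint query;
    glGenQueries(1, &query);

    // Everything drawn between Begin/End is counted by the GPU.
    glBeginQuery(GL_VERTEX_SHADER_INVOCATIONS_ARB, query);
    drawScene();
    glEndQuery(GL_VERTEX_SHADER_INVOCATIONS_ARB);

    // Reading GL_QUERY_RESULT blocks until the GPU result is ready.
    GLuint64 invocations = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &invocations);
    std::printf("vertex shader invocations: %llu\n",
                (unsigned long long)invocations);
    glDeleteQueries(1, &query);
}
```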
Image from NVIDIA GTC Presentation
NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers for developers. If you are not looking to create OpenGL 4.5 applications, do not get this driver; you really should not have any use for it.
Next Generation OpenGL Initiative Announced
The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to allow developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, have a common shader language, and "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).
And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn't matter, as long as they continue to run.
On top of that, next-generation drivers should be significantly easier to develop, given the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether the performance increase will be enough to make most optimizations unnecessary, particularly those which trade extra workload or developer effort for fewer queued GPU commands. We will need to wait until game developers have used it for a while before we know.
Subject: General Tech | August 15, 2014 - 10:09 AM | Jeremy Hellstrom
Tagged: amd, Mantle, opengl, OpenGL Next
Along with his announcements about FreeSync, Richard Huddy also discussed OpenGL Next, its relationship with Mantle, and the role Mantle played in DirectX 12's development. AMD has given the Khronos Group, the developers of OpenGL, complete access to Mantle to help them integrate it into future versions of the API, starting with OpenGL Next. He also discussed the advantages of Mantle over DirectX, citing AMD's ability to update it much more frequently than Microsoft has done with DX. With over 75 developers working on titles that take advantage of Mantle, the interest is definitely there, but it is uncertain whether devs will actually benefit from an API that updates faster than a game can be developed. Read on at The Tech Report.
"At Siggraph yesterday, AMD's Richard Huddy gave us an update on Mantle, and he also revealed some interesting details about AMD's role in the development of the next-gen OpenGL API."
Here is some more Tech News from around the web:
- The Man Responsible For Pop-Up Ads On Building a Better Web @ Slashdot
- Skype stops working on older Android phones leaving Linux users in the dark @ The Inquirer
- Intel teams with 50 Cent's audio firm to launch heart-rate monitoring headphones @ The Inquirer
- TSMC 4Q14 production capacity almost fully booked @ DigiTimes
- Lenovo posts an INCREASE in desktop PC and notebook sales @ The Register
- Boffins brew TCP tuned to perform on lossy links like Wi-Fi networks @ The Register
Subject: General Tech, Displays | August 14, 2014 - 01:59 PM | Scott Michaud
Tagged: amd, freesync, g-sync, Siggraph, siggraph 2014
At SIGGRAPH, Richard Huddy of AMD announced the release windows of FreeSync, their adaptive refresh rate technology, to The Tech Report. Compatible monitors will begin sampling "as early as" September. Actual products are expected to ship to consumers in early 2015. Apparently, more than one display vendor is working on support, although names and vendor-specific release windows are unannounced.
As for cost of implementation, Richard Huddy believes that the added cost should be no more than $10-20 USD (to the manufacturer). Of course, the final price to end-users cannot be derived from this - that depends on how quickly the display vendor expects to sell product, profit margins, their willingness to push new technology, competition, and so forth.
If you want to take full advantage of FreeSync, you will need a compatible GPU (look for "gaming" support in AMD's official FreeSync compatibility list). All future AMD GPUs are expected to support the technology.
Subject: General Tech | August 14, 2014 - 12:30 PM | Ken Addison
Tagged: video, ssd, ROG Swift, ROG, podcast, ocz, nvidia, Kaveri, Intel, g-sync, FMS 2014, crossblade ranger, core m, Broadwell, asus, ARC 100, amd, A6-7400K, A10-7800, 14nm
PC Perspective Podcast #313 - 08/14/2014
Join us this week as we discuss new Kaveri APUs, ASUS ROG Swift G-Sync Monitor, Intel Core M Processors and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Subject: General Tech | August 14, 2014 - 12:00 PM | Jeremy Hellstrom
Tagged: audio, diamond multimedia, Xtreme Sound XS71HDU, usb sound card, DAC
The Diamond Xtreme Sound XS71HDU could be a versatile $60 solution for those with high end audio equipment that would benefit from a proper DAC. With both optical in and out, it is capable of more than an onboard solution, not to mention the six 3.5mm jacks covering stereo headphones, 7.1 surround (rear, sub, and side), mic, and line in. The design and features are impressive; however, the performance failed to please The Tech Report, which felt there were similar solutions with much higher quality sound reproduction.
"We love sound cards here at TR, but they don't fit in every kind of PC. Diamond's Xtreme Sound XS71HDU serves up the same kinds of features in a tiny USB package suitable for mini-PCs and ultrabooks. We took it for a spin to see if it's as good as it looks."
Here is some more Tech News from around the web:
- TDK A12 TREK Micro Wireless Speaker Review @ NikKTech
- Wavemaster Moody 2.1 Rev 2 Speaker @ eTeknix
- IK Multimedia iLoud Studio-Quality Portable Speaker Review @ NikKTech
- LUXA2 GroovyW bluetooth speaker @ Kitguru
- BitFenix Flo Premium PC Headset Review @ NikKTech
- Tt eSPORTS Sybaris Wired & Wireless Bluetooth NFC Enabled Headset @ eTeknix
- Tt eSports Level 10 M Gaming Headset @ TechwareLabs
- GAMDIAS Hephaestus GHS2000 Headset @ Benchmark Reviews
- Tt eSPORTS Level 10M Gaming Headset Review @ Techgage
- CM Storm Resonar Gaming Earphones @ eTeknix
Subject: General Tech | August 14, 2014 - 10:31 AM | Jeremy Hellstrom
For many, Linux is a mysterious thing that is either dead or about to die because no one uses it. Linux.com has put together an overview of what Linux is and where to find it being used. Much of what they describe in the beginning applies to all operating systems, as they share similar features; it is only in the details that they differ. If you have only thought of Linux as that OS you can't game on, then it is worth taking a look through the descriptions of the distributions and the reasons people choose to use Linux. You may never build a box that runs Linux, but if you are considering buying a Steambox when they arrive on the market, you will find yourself using a type of Linux, and a basic understanding of the parts of the OS will help with troubleshooting and optimization. If you already use Linux, then fire up Steam and take a break.
"For those in the know, you understand that Linux is actually everywhere. It's in your phones, in your cars, in your refrigerators, your Roku devices. It runs most of the Internet, the supercomputers making scientific breakthroughs, and the world's stock exchanges."
Here is some more Tech News from around the web:
- The internet just BROKE under its own weight – we explain how @ The Register
- Intel snaps up Axxia to bolster its wireless networking credentials @ The Inquirer
- The Biggest iPhone Security Risk Could Be Connecting One To a Computer @ Slashdot
- CHIL PowerShare Reactor 5.1 Amp Multi-Device Charger Review @ NikKTech
Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 06:55 PM | Scott Michaud
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX
Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC. There, Intel had a DirectX 12 demo at its booth. The scene, containing 50,000 asteroids, each in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths and could apparently be switched between them while the demo is running. Intel claims to have measured both power and frame rate.
Variable power to hit a desired frame rate, DX11 and DX12.
The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, that would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to 2.9 GHz turbo, 3MB of cache, and (of course) the Haswell architecture.
While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.
Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power was not throttled, Intel's demo goes from 19 FPS all the way up to a playable 33 FPS.
Intel will discuss more during a video interview tomorrow (Thursday) at 5pm EDT.
Maximum power in DirectX 11 mode.
For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time developing art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?
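For readers who have not written against the API, here is a loose Direct3D 11 sketch (mine, not Intel's demo code) of the trade-off: the per-object loop is what the demo stresses, and the instanced call is the kind of hand optimization the comment assumes. updateAsteroidConstants() is a hypothetical per-object constant-buffer update, and resource setup is omitted.

```cpp
#include <d3d11.h>

extern void updateAsteroidConstants(ID3D11DeviceContext* ctx, int i);

// The "naive" path stressed by Intel's demo: one draw call per
// asteroid, each paying CPU-side API and driver overhead.
void drawAsteroidsNaive(ID3D11DeviceContext* ctx, UINT indexCount) {
    for (int i = 0; i < 50000; ++i) {
        updateAsteroidConstants(ctx, i);  // per-object transform, etc.
        ctx->DrawIndexed(indexCount, 0, 0);
    }
}

// The hand-optimized path the comment assumes: one instanced call,
// with per-asteroid data moved into an instance vertex buffer. Far
// fewer calls, but all objects must share geometry and state.
void drawAsteroidsInstanced(ID3D11DeviceContext* ctx, UINT indexCount) {
    ctx->DrawIndexedInstanced(indexCount, 50000, 0, 0, 0);
}
```

If DirectX 12 brings per-call overhead low enough, the first path becomes viable on its own and the authoring constraints of the second disappear.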
That, of course, depends on how much performance improvement we will see from DirectX 12 compared to the theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then developers can simply use whatever solution is most direct.
Maximum power when switching to DirectX 12 mode.
If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then the improvement matters less, because developers will still need to do their tricks in those situations. The closer the overhead gets to zero, the fewer the occasions where strict optimization is necessary.
If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?
Subject: General Tech | August 13, 2014 - 02:15 PM | Ryan Shrout
Tagged: supernova, podcast, giveaway, evga, contest
A big THANK YOU goes to our friends at EVGA for hooking us up with another item to give away for our podcast listeners and viewers this week. If you watch tonight's LIVE recording of Podcast #313 (10pm ET / 7pm PT at http://pcper.com/live) or download our podcast after the fact (at http://pcper.com/podcast) then you'll have the tools needed to win an EVGA SuperNOVA 1000 G2 Power Supply!! (Valued at $165 based on Amazon's current selling price.) See our review of the 750/850 G2 SuperNOVA units.
How do you enter? Well, on the live stream (or in the downloaded version) we'll give out a special keyword during our discussion of the contest for you to input in the form below. That's it!
Anyone can enter from anywhere in the world - we'll cover the shipping. We'll draw a random winner on August 20th and announce it on the next episode of the podcast! Good luck, and once again, thanks go out to EVGA for supplying the prize!
Get notified when we go live!