Diamond's Xtreme Sound XS71HDU looks good but how does it sound?

Subject: General Tech | August 14, 2014 - 12:00 PM |
Tagged: audio, diamond multimedia, Xtreme Sound XS71HDU, usb sound card, DAC

The Diamond Xtreme Sound XS71HDU could be a versatile $60 solution for those with high-end audio equipment that would benefit from a proper DAC.  With both optical in and out it is capable of more than an onboard solution, not to mention the six 3.5 mm jacks covering stereo headphones, 7.1 surround (rear, sub, and side channels), mic, and line in.  The design and features are impressive; however, the performance failed to please The Tech Report, who felt that there were similar solutions with much higher-quality sound reproduction.

back.jpg

"We love sound cards here at TR, but they don't fit in every kind of PC. Diamond's Xtreme Sound XS71HDU serves up the same kinds of features in a tiny USB package suitable for mini-PCs and ultrabooks. We took it for a spin to see if it's as good as it looks."

Here is some more Tech News from around the web:

Audio Corner

That Linux thing that nobody uses

Subject: General Tech | August 14, 2014 - 10:31 AM |
Tagged: linux

For many, Linux is a mysterious thing that is either dead or about to die because no one uses it.  Linux.com has put together an overview of what Linux is and where to find it being used.  Much of what they describe in the beginning applies to all operating systems, as they share similar features; it is only in the details that they differ.  If you have only thought of Linux as that OS you can't game on, then it is worth taking a look through the descriptions of the distributions and why people choose to use Linux.  You may never build a box which runs Linux, but if you are considering buying a Steambox when they arrive on the market you will find yourself using a type of Linux, and a basic understanding of the parts of the OS will help with troubleshooting and optimization.  If you already use Linux, then fire up Steam and take a break.

software-center.png

"For those in the know, you understand that Linux is actually everywhere. It's in your phones, in your cars, in your refrigerators, your Roku devices. It runs most of the Internet, the supercomputers making scientific breakthroughs, and the world's stock exchanges."

Here is some more Tech News from around the web:

Tech Talk

Source: Linux.com

Intel and Microsoft Show DirectX 12 Demo and Benchmark

Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 06:55 PM |
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX

Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC. There, Intel showed a DirectX 12 demo at its booth. This scene, containing 50,000 asteroids, each in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths, and could apparently be switched between them while the demo is running. Intel claims to have measured both power and frame rate.

intel-dx12-LockedFPS.png

Variable power to hit a desired frame rate, DX11 and DX12.

The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, this would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3 MB of cache, and (of course) the Haswell architecture.

While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.

Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power was not throttled, Intel's demo goes from 19 FPS all the way up to a playable 33 FPS.

Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.

intel-dx12-unlockedFPS-1.jpg

Maximum power in DirectX 11 mode.

For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time developing art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?

That, of course, depends on how much performance improvement we will see from DirectX 12 compared to theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then developers are free to implement whatever solution is most direct.
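To make the draw-call argument concrete, here is a toy cost model (my own illustration with made-up numbers, not Intel's measurements): if every submission carries a fixed CPU cost, a scene issued as tens of thousands of tiny draw calls is dominated by submission overhead, while the same GPU work batched into a few calls is not.

```python
# Toy cost model of draw-call submission overhead. The per-call cost and
# GPU time below are invented numbers, purely for illustration.

def frame_time_ms(draw_calls, per_call_overhead_ms, gpu_work_ms):
    """Total frame time: fixed CPU overhead per submission plus GPU work."""
    return draw_calls * per_call_overhead_ms + gpu_work_ms

GPU_WORK = 10.0  # ms of actual rendering, identical in both cases

# 50,000 tiny draw calls vs. the same work batched into 50 instanced calls
naive   = frame_time_ms(50_000, 0.001, GPU_WORK)  # 0.001 ms CPU cost per call
batched = frame_time_ms(50,     0.001, GPU_WORK)

print(f"naive:   {naive:.2f} ms")    # 60.00 ms -- CPU-bound on submissions
print(f"batched: {batched:.2f} ms")  # 10.05 ms -- GPU-bound, as intended
```

The promise of APIs like DirectX 12 and Mantle is to shrink that per-call constant far enough that the naive case stops mattering.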

intel-dx12-unlockedFPS-2.jpg

Maximum power when switching to DirectX 12 mode.

If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it matters less, because developers will still need to resort to their tricks in those situations. The closer the two cases get, the fewer occasions on which strict optimization is necessary.

If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?

To boldly go where no 290X has gone before?

Subject: Graphics Cards | August 13, 2014 - 03:11 PM |
Tagged: factory overclocked, sapphire, R9 290X, Vapor-X R9 290X TRI-X OC

As far as factory overclocks go, the 1080MHz core and 5.64GHz RAM on the new Sapphire Vapor-X 290X is impressive, taking the prize for the highest factory overclock on this card [H]ard|OCP has seen yet.  That didn't stop them from pushing it to 1180MHz and 5.9GHz after a little work, which is even more impressive.  At both the factory and manual overclocks the card handily beat the reference model, and the manually overclocked benchmarks could meet or beat the overclocked MSI GTX 780 Ti GAMING 3G OC card.  Speed is not the only good feature: Intelligent Fan Control keeps two of the three fans from spinning when the GPU is under 60°C, which vastly reduces the noise produced by this card.  It is currently selling for $646, lower than the $710 that the GeForce is currently going for as well.

1406869221rJVdvhdB2o_1_6_l.jpg

"We take a look at the SAPPHIRE Vapor-X R9 290X TRI-X OC video card which has the highest factory overclock we've ever encountered on any AMD R9 290X video card. This video card is feature rich and very fast. We'll overclock it to the highest GPU clocks we've seen yet on R9 290X and compare it to the competition."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Podcast Listeners and Viewers: Win an EVGA SuperNOVA 1000 G2 Power Supply

Subject: General Tech | August 13, 2014 - 02:15 PM |
Tagged: supernova, podcast, giveaway, evga, contest

A big THANK YOU goes to our friends at EVGA for hooking us up with another item to give away to our podcast listeners and viewers this week. If you watch tonight's LIVE recording of Podcast #313 (10pm ET / 7pm PT at http://pcper.com/live) or download our podcast after the fact (at http://pcper.com/podcast) then you'll have the tools needed to win an EVGA SuperNOVA 1000 G2 Power Supply!! (Valued at $165 based on Amazon's current selling price.) Also see our review of the 750/850 G2 SuperNOVA units.

120-G2-1000-XR_XL_1.jpg

How do you enter? Well, on the live stream (or in the downloaded version) we'll give out a special keyword during our discussion of the contest for you to input in the form below. That's it! 

We'll draw a random winner on August 20th and announce it on the next episode of the podcast; anyone can enter from anywhere in the world - we'll cover the shipping. Good luck, and once again, thanks go out to EVGA for supplying the prize!

The downwards Arc of flash prices; OCZ releases an SSD at $0.50/GB

Subject: Storage | August 13, 2014 - 11:38 AM |
Tagged: toshiba, ssd, sata, ocz, barefoot 3, ARC

Before even looking at the performance, the real selling point of the new OCZ ARC 100 is the MSRP: the 240GB and 480GB models are slated to be released at $0.50/GB and will likely follow the usual trend of SSD prices and drop from there.  The drives use the Barefoot 3 controller, this one clocked slightly lower than in the Vertex 460 but still capable of accelerating encryption.  Once The Tech Report set the drive up in their test bed, the performance was almost on par with the Vertex 460 and other mid- to high-end SSDs, especially in comparison to the Crucial MX100.

Make sure to read Al's review as well, not just for the performance numbers but also for an explanation of OCZ's warranty on this drive.

DSC04576.JPG

"OCZ's latest value SSD is priced at just $0.50 per gig, but it hangs with mid-range and even high-end drives in real-world and demanding workloads. It's also backed by an upgraded warranty and some impressive internal reliability data provided by OCZ. We take a closer look:"

Here are some more Storage reviews from around the web:

Storage

Unreal Tournament's Training Day, get in on the Pre-Pre-Alpha right now!

Subject: General Tech | August 13, 2014 - 11:02 AM |
Tagged: Unreal Tournament, gaming, Alpha

Feel like (Pre-Pre-)Alpha testing Unreal Tournament without forking money over for early access?  No problem, thanks to Epic and Unreal Forums member ‘raxxy’, who is compiling and updating the (pre)Alpha version of the next Unreal Tournament.  Sure, there may not be many textures, but there is a Flak Cannon, so what could you possibly have to complain about?  There are frequent updates, and a major part of participating is giving feedback to the devs, so please be sure to check into the #beyondunreal IRC channel to get tips and offer feedback.  Rock, Paper, SHOTGUN reports that the servers are massively packed right now, so you may not be able to immediately join in, but it is worth trying.

raxxy would like you to understand "These are PRE-ALPHA Prototype Builds. Seriously. Super early testing. So early it's technically not even pre alpha, it's debug code!"

You can be guaranteed that the Fragging Frogs will be taking advantage of this, as well as revisiting the much-beloved UT2K4, so if you haven't joined up yet ... what are you waiting for?

Check out Fatal1ty playing if you can't get on

"Want to play the new Unreal Tournament for free, right this very second? Cor blimey and OMG you totes can! Hero of the people ‘raxxy’ on the Unreal Forums is compiling Epic’s builds and releasing them as small, playable packages that anyone can run, with multiple updates per week. The maps are untextured, the weapons unbalanced, and things change rapidly as everything’s still “pre-alpha” but it’s playable and – more importantly – fun."

Here is some more Tech News from around the web:

Gaming

Meet Tonga, soon to be your new Radeon

Subject: General Tech | August 13, 2014 - 09:58 AM |
Tagged: tonga, radeon, FirePro W7100, amd

A little secret popped out with the release of AMD's FirePro W7100: a new GPU family that goes by the name of Tonga, which is very likely to replace the aging Tahiti chip that has been in use since the HD 7900 series.  The specs that The Tech Report saw show interesting changes from Tahiti, including a reduction of the memory interface to 256-bit, in line with NVIDIA's current offerings.  The number of stream processors might be reduced to 1792 from 2048, but that figure is based on the W7100; other GPUs may be released with the full 32 GCN compute units.  Many other features have seen increases: the number of Asynchronous Compute Engines goes from 2 to 8, the number of rasterized triangles per clock doubles to 4, and Tonga adds support for the new TrueAudio DSP and CrossFire XDMA.

IMG0045305.jpg

"The bottom line is that Tonga joins the Hawaii (Radeon R9 290X) and Bonaire (R7 260X) chips as the only members of AMD' s GCN 1.1 series of graphics processors. Tonga looks to be a mid-sized GPU and is expected to supplant the venerable Tahiti chip used in everything from the original Radeon HD 7970 to the current Radeon R9 280."

Here is some more Tech News from around the web:

Tech Talk

Select GeForce GTX GPUs Now Include Borderlands: The Pre-Sequel

Subject: General Tech | August 13, 2014 - 09:26 AM |
Tagged: borderlands, nvidia, geforce

 

Santa Clara, CA — August 12, 2014 — Get ready to shoot ‘n’ loot your way through Pandora’s moon. Starting today, gamers who purchase select NVIDIA GeForce GTX TITAN, 780 Ti, 780, and 770 desktop GPUs will receive a free copy of Borderlands: The Pre-Sequel, the hotly anticipated new chapter to the multi-award winning Borderlands franchise from 2K and Gearbox Software.

Discover the story behind Borderlands 2’s villain, Handsome Jack, and his rise to power. Taking place between the original Borderlands and Borderlands 2, Borderlands: The Pre-Sequel offers players a whole lotta new gameplay in low gravity.

“If you have a high-end NVIDIA GPU, Borderlands: The Pre-Sequel will offer higher fidelity and higher performance hardware-driven special effects including awesome weapon impacts, moon-shatteringly cool cryo explosions and ice particles, and cloth and fluid simulation that blows me away every time I see it," said Randy Pitchford, CEO and president of Gearbox Software.

With NVIDIA PhysX technology, you will feel deep space like never before. Get high in low gravity and use new ice and laser weapons to experience destructible levels of mayhem. Check out the latest trailer, which just went live this morning: http://youtu.be/c9a4wr4I1hk

Borderlands: The Pre-Sequel will also stream to your NVIDIA SHIELD tablet or portable. For the first time ever, you can play Claptrap anywhere by using NVIDIA GameStream technology. You can even livestream and record every fist punch with GeForce ShadowPlay.

Borderlands: The Pre-Sequel will be available on October 14, 2014 in North America and on October 17, 2014 internationally. Borderlands: The Pre-Sequel is not yet rated by the ESRB.

The GeForce GTX and Borderlands: The Pre-Sequel bundle is available starting today from leading e-tailers including Amazon, NCIX, Newegg, and Tiger Direct and system builders including Canada Computers, Digital Storm, Falcon Northwest, Maingear, Memory Express, Origin PC, V3 Gaming, and Velocity Micro. For a full list of participating partners, please visit: www.GeForce.com/GetBorderlands.

Source: NVIDIA

DOTA 2 May Be Running Source Engine 2. Now.

Subject: General Tech | August 12, 2014 - 06:00 PM |
Tagged: valve, source engine, Source 2, DOTA 2

While it may not seem like it in North America, we are in a busy week for videogame development. GDC Europe, which stands for Game Developers Conference Europe, is just wrapping up to make room for Gamescom, which will take up the rest of the week. Valve will be there and people are reading tea leaves to find out why. SteamOS seems likely, but what about their next generation gaming engine, Source 2? Maybe it already happened?

valve-dota2-workshop.jpg

Valve is the most secretive company with values of openness that I know of. They are pretty good at preventing leaks from escaping their walls. Recently, Dota 2 was updated to receive new features and development tools for user-generated maps and game types. The tools currently require 64-bit Windows and a DirectX 11-compatible GPU.

Those don't sound like Source requirements...

And the editor doesn't look like Valve's old tools.

Video Credit: "Valve News Network".

Leaks also point to entries like "tf_imported", "left4dead2_source2", and "left4dead2_imported". This is interesting. Valve is pushing Dota 2, their most popular free-to-play game, into Source 2. Also, because the entry is listed as "tf" rather than "tf2" (just as "dota" is registered without its number, while "left4dead2" keeps its), this might mean that the free-to-play Team Fortress 2 could be put into a perpetual-development mode like Dota 2. Eventually, it could be pushed to the new engine and given more content.

As for Left4Dead2? I am wondering if it is intended to be a product, rather than an internal (or external) Source 2 tech demo.

Was this what brought Valve to Gamescom, or will we be surprised by other announcements (or nothing at all)?

Source: Polygon

G-SYNC is sweet but far from free

Subject: Displays | August 12, 2014 - 12:36 PM |
Tagged: asus, g-sync, geforce, gsync, nvidia, pg278q, Republic of Gamers, ROG, swift, video

Ryan was not the only one to test the ASUS ROG Swift PG278Q G-SYNC monitor; Overclockers Club also received a model to test out.  Their impressions of the 27" 2560 x 1440 TN panel were very similar: once they saw this monitor in action, going back to their 30-inch 60Hz IPS monitor was not as enjoyable as it once was.  The only bad thing they could say about the display was the MSRP; $800 is steep for any monitor and makes it rather difficult to even consider getting two or more of them for a multiple-display setup.

2.jpg

”When you get down to it, the facts are that even with a TN panel being used for the high refresh rate, the ASUS ROG Swift PG278Q G-Sync monitor delivers great picture quality and truly impressive gaming. I could go on all day long about how smooth each of the games played while testing this monitor, but ultimately not be able to show you without having you sit at the desk with me. No stuttering, no tearing, no lag; it's like getting that new car and having all the sales hype end up being right on the money. When I flip back and forth between my 60Hz monitor and the PC278Q, its like a night and day experience.”

Here are some more Display articles from around the web:

Displays

Let's hope DDR4 is less expensive than DDR3 3100MHz

Subject: Memory | August 12, 2014 - 11:09 AM |
Tagged: XPG V3, DDR3 3100, adata

Currently available for a mere $870, the 8GB DDR3-3100 dual-channel kit from ADATA, with timings of 12-14-14-36, has to be among the most expensive consumer RAM on the market.  We can only hope that DDR4 does not arrive at a similar speed and price point, but instead with slower-clocked DIMMs at a more reasonable price and with improvements to performance.  Legit Reviews' testing showed that these DIMMs offer almost no benefit over DDR3-1600 with tighter timings in real usage, though you can get higher scores on synthetic benchmarks.  If benchmarking better than the competition and swappable heatspreaders in different colours appeal to you, then you could pick up these DIMMs; otherwise you really won't be getting value for your money.
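The reason the slower kit keeps up in real usage comes down to actual latency: first-word latency in nanoseconds is the CAS cycle count divided by the I/O clock (half the DDR transfer rate), and the higher CAS of the fast kit largely cancels out its higher clock. A quick sketch of that standard formula, using CL8 as a plausible example of a tighter-timed DDR3-1600 kit (my assumption, not a kit Legit Reviews named):

```python
# First-word latency in nanoseconds: CAS cycles divided by the I/O clock.
# For DDR, the I/O clock is half the transfer rate, hence the factor 2000
# when the rate is given in MT/s.

def cas_latency_ns(cas_cycles, transfer_rate_mts):
    return cas_cycles * 2000 / transfer_rate_mts

print(f"DDR3-3100 CL12: {cas_latency_ns(12, 3100):.2f} ns")  # ~7.74 ns
print(f"DDR3-1600 CL8:  {cas_latency_ns(8, 1600):.2f} ns")   # 10.00 ns
```

A couple of nanoseconds of first-word latency is hard to notice outside of synthetic benchmarks, which matches what the testing showed.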

adata-xpg-v3-3100mhz-memory.jpg

"Gone are the days of being on the cutting edge of memory with DDR3 running at 2133MHz! These days running 2133MHz memory is pretty much considered the norm for a high end gaming rig. If you’re looking to be on the bleeding edge of memory speeds you’re going to be limited to only one or two kits. Today we have one of the fastest kits available on the market to put through the paces, the ADATA XPG V3 DDR3 3100MHz 8GB memory kit. Read on to see if this big dollar kit is worth nearly a thousand dollars."

Here are some more Memory articles from around the web:

Memory



Intel is disabling TSX in Haswell due to software failures

Subject: General Tech | August 12, 2014 - 10:07 AM |
Tagged: Intel, haswell, tsx, errata

Transactional Synchronization Extensions, aka TSX, are a backwards-compatible set of instructions which first appeared in some Haswell chips as a way to improve concurrency in multi-threaded code with as little work for the programmer as possible.  TSX was intended to improve the scaling of multi-threaded apps running on multi-core processors but has not yet been widely adopted.  Adoption has now hit another hurdle: in some cases the use of TSX can cause critical software failures, and as a result Intel will be disabling the instruction set via new BIOS/UEFI updates which will be pushed out soon.  If your software uses the new instructions and you wish it to continue to do so, you should avoid updating your motherboard BIOS/UEFI and ask your users to do the same.  You can read more about this bug/errata and other famous problems over at The Tech Report.

intel2.jpg

"The TSX instructions built into Intel's Haswell CPU cores haven't become widely used by everyday software just yet, but they promise to make certain types of multithreaded applications run much faster than they can today. Some of the savviest software developers are likely building TSX-enabled software right about now."

Here is some more Tech News from around the web:

Tech Talk

NVIDIA Reveals 64-bit Denver CPU Core Details, Headed to New Tegra K1 Powered Devices Later This Year

Subject: Processors | August 11, 2014 - 10:06 PM |
Tagged: tegra k1, project denver, nvidia, Denver, ARMv8, arm, Android, 64-bit

During GTC 2014 NVIDIA launched the Tegra K1, a new mobile SoC that contains a powerful Kepler-based GPU. Initial processors (and the resultant design wins such as the Acer Chromebook 13 and Xiaomi Mi Pad) utilized four ARM Cortex-A15 cores for the CPU side of things, but later this year NVIDIA is deploying a variant of the Tegra K1 SoC that switches out the four A15 cores for two custom (NVIDIA developed) Denver CPU cores.

Today at the Hot Chips conference, NVIDIA revealed most of the juicy details on those new custom cores announced in January which will be used in devices later this year.

The custom 64-bit Denver CPU cores use a 7-way superscalar design and run a custom instruction set. Denver is a wide but in-order architecture that allows up to seven operations per clock cycle. NVIDIA is using a custom ISA and on-the-fly binary translation to convert ARMv8 instructions to microcode before execution. A software layer and a 128MB cache implement the Dynamic Code Optimization technology, allowing the processor to examine and optimize the ARM code, convert it to the custom instruction set, and cache the converted microcode of frequently used routines (the cache can be bypassed for infrequently processed code). Using the wider execution engine and Dynamic Code Optimization (which is transparent to ARM developers and does not require updated applications), NVIDIA touts the dual Denver core Tegra K1 as being at least as powerful as the quad- and octo-core packing competition.
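As a purely conceptual sketch (my own illustration, not NVIDIA's actual hardware/software stack), the payoff of a translation cache like DCO's is that hot code pays the translation cost only once:

```python
# Conceptual model of a binary-translation cache: translate a code block
# once, then serve every later execution from the cache. Illustrative only.

translations = 0

def translate(block):
    """Stand-in for the expensive ARMv8 -> native-microcode translation."""
    global translations
    translations += 1
    return f"ucode({block})"

cache = {}

def execute(block):
    if block not in cache:      # hot code is translated only once...
        cache[block] = translate(block)
    return cache[block]         # ...then reused from the cache thereafter

for _ in range(1000):           # a hot loop re-executing the same block
    execute("loop_body")

print(translations)  # 1 -- the other 999 iterations hit the cache
```

The real system also re-optimizes blocks as it profiles them, but the amortization idea is the same: the more often code runs, the cheaper each execution becomes on average.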

Further, NVIDIA has claimed that at peak throughput (and in specific situations where application code and DCO can take full advantage of the 7-way execution engine) the Denver-based mobile SoC handily outpaces Intel’s Bay Trail, Apple’s A7 Cyclone, and Qualcomm’s Krait 400 CPU cores. In the results of a synthetic benchmark test provided to The Tech Report, the Denver cores were even challenging Intel’s Haswell-based Celeron 2955U processor. Keeping in mind that these are NVIDIA-provided numbers and likely the best results one can expect, Denver still looks quite a bit more capable than existing mobile cores. (Note that the Haswell chips would likely pull much farther ahead when presented with applications that cannot be easily executed in order with limited instruction parallelism.)

NVIDIA Denver CPU Core 64bit ARMv8 Tegra K1.png

NVIDIA is ratcheting up mobile CPU performance with its Denver cores, but it is also aiming for an efficient chip and has implemented several power saving tweaks. Beyond the decision to go with an in-order execution engine (with DCO hopefully mostly making up for that), the beefy Denver cores reportedly feature low latency power state transitions (e.g. between active and idle states), power gating, dynamic voltage, and dynamic clock scaling. The company claims that “Denver's performance will rival some mainstream PC-class CPUs at significantly reduced power consumption.” In real terms this should mean that the two Denver cores in place of the quad core A15 design in the Tegra K1 should not result in significantly lower battery life. The two K1 variants are said to be pin compatible such that OEMs and developers can easily bring upgraded models to market with the faster Denver cores.

NVIDIA Denver CPU cores in Tegra K1.png

For those curious, in the Tegra K1 the two Denver cores (clocked at up to 2.5GHz) share a 16-way L2 cache, and each has 128KB instruction and 64KB data L1 caches to itself. The 128MB Dynamic Code Optimization cache is held in system memory.

Denver is the first (custom) 64-bit ARM processor for Android (with Apple’s A7 being the first 64-bit smartphone chip), and NVIDIA is working on supporting the next generation Android OS known as Android L.

The dual Denver core Tegra K1 is coming later this year and I am excited to see how it performs. The current K1 chip already has a powerful fully CUDA compliant Kepler-based GPU which has enabled awesome projects such as computer vision and even prototype self-driving cars. With the new Kepler GPU and Denver CPU pairing, I’m looking forward to seeing how NVIDIA’s latest chip is put to work and the kinds of devices it enables.

Are you excited for the new Tegra K1 SoC with NVIDIA’s first fully custom cores?

Source: NVIDIA

Aftermath of the Fragging Frogs VLAN #7

Subject: General Tech | August 11, 2014 - 03:52 PM |
Tagged: VLAN party, pcper, kick ass, fragging frogs

Several rowboats' worth of snacks, a couple of canoes full of assorted beverages, and boatloads of fun were had this weekend in the highly successful 7th Fragging Frogs VLAN; if you missed it there will be another chance some day, but you really missed an epic event.  There were over 120 Teamspeak connections in a variety of channels and an estimated peak of 78 active participants.  Thanks to AMD there is a new game for the Frogs as well, as Plants vs. Zombies: Garden Warfare was a hit both with the players and with those watching iamApropos' live stream.  We also gained some ARMA 2 fans, which means the game will not only appear again next VLAN but is also in danger of becoming a frequent activity for some members.

FraggingFrogs.jpg

Once again there was quite a bit of valuable hardware and software given away, the list includes:

AMD

  • AMD Fan Kit (headset, 16 GB USB drive, mouse)
  • AMD Gaming Series RAM - 8 GB of 2133 MHz
  • MSI Military Class 4 A88XM-E35 FM2+ motherboard *and* A10-7850K APU
  • AMD FX-8350 Processor
  • XFX R9 290 Double D graphics card
  • Several Plants vs Zombies: Garden Warfare Origin codes

AMD Red Team+

  • Murdered Soul Suspect game codes
  • Sniper Elite 3 game codes  

Please stop by this thread to offer your thanks and support for all the hard work put into these events by Lenny, iamApropos, Spazster, Brandito, Cannonaire, AMD and the Frogs in general.

SilverStone's Fanless Nightjar 520W PSU keeps picking up awards

Subject: Cases and Cooling | August 11, 2014 - 02:23 PM |
Tagged: SST-NJ520, SilverStone Tech, Nightjar, Fanless Power Supply, 80 Plus Platinum

Just over a month ago Lee took a look at SilverStone's fanless Nightjar PSU, giving it a Gold Award.  It has now arrived in [H]ard|OCP's torture chamber, so you can see how well it works with a different system.  At the lowest load level of 25% of max, efficiency was 89.79%, and at 50%, 75%, and 100% loads it stayed well above 90%; the temperature difference between the lowest load and the highest was 9°C, impressive for a fanless PSU.  In the end, not only was this the best fanless 500W PSU [H] has tested, it was the overall best 500W PSU they've seen, justifying the high asking price.

1406084192F3XTun5qzm_2_9_l.jpg

"What is more quiet than a computer power supply with a fan? You guessed it, a PSU with no fan. This unit has small footprint builds in mind featuring excellent Platinum efficiency, flexible flat cables, and of course no sound profile to speak of. Does the 520 watt rated SilverStone Nightjar hold up when we put it in the incubator?"

Here are some more Cases & Cooling reviews from around the web:

CASES & COOLING

Source: [H]ard|OCP

Kaveri on Linux

Subject: Processors | August 11, 2014 - 12:40 PM |
Tagged: A10-7800, A6-7400K, linux, amd, ubuntu 14.04, Kaveri

Linux support for AMD's GPUs has not been progressing at the pace many users would like, though it is improving over time; the situation is different for their APUs.  Phoronix just tested the A10-7800 and A6-7400K on Ubuntu 14.04 with kernel 3.13 and the latest Catalyst 14.6 Beta.  This preview just covers raw performance; you can expect to see more published in the near future covering new features such as the configurable TDP which exists on these chips.  The tests show that the new 7800 can keep pace with the previous 7850K, and while the A6-7400K is certainly slower, it will be able to handle a Linux machine with relatively light duties.  You can see the numbers here.

image.php_.jpg

"At the end of July AMD launched new Kaveri APU models: the A10-7800, A8-7600, and A6-7400K. AMD graciously sent over review samples on their A10-7800 and A6-7400K Kaveri APUs, which we've been benchmarking and have some of the initial Linux performance results to share today."

Here are some more Processor articles from around the web:

Processors

Source: Phoronix

It's not just Broadwell today, we also have Seattle news

Subject: General Tech | August 11, 2014 - 10:47 AM |
Tagged: amd, seattle, hot chips

AMD has been showing off a reference Seattle-based server at Hot Chips, and The Tech Report had an opportunity to see it.  Eight 64-bit Cortex-A57 cores are set up in pairs, each pair sharing 1MB of L2 cache, while the 8MB of L3 cache is accessible by all eight cores as well as the coprocessors, memory controller, and I/O subsystems.  The system can address up to 128GB of DDR3 or DDR4, and you get support for 8 SATA 6Gbps ports and 8 lanes of PCIe 3.0 to apportion between the slots.  There is a secure System Control Processor, a partitioned Cortex-A5 core with its own ROM, RAM, and I/O that handles power, boot, and configuration control with support for TrustZone, as well as a Cryptographic Coprocessor which accelerates all encryption processes, as you might well expect.  Read on for more information about AMD's unique new take on server technology.

seattle.png

"For some time now, the features of AMD's Seattle server processor have been painted in broad brush strokes. This morning, at the Hot Chips symposium, AMD is filling in most of the missing details. We were treated to an advance briefing last week, where AMD provided previously confidential information about Seattle's cache network, memory controller, I/O features, and coprocessors."

Here is some more Tech News from around the web:

Tech Talk

Acer Unveils Chromebook 13 Powered By NVIDIA Tegra K1 SoC

Subject: General Tech, Mobile | August 11, 2014 - 05:00 AM |
Tagged: webgl, tegra k1, nvidia, geforce, Chromebook, Bay Trail, acer

Today Acer unveiled a new Chromebook powered by an NVIDIA Tegra K1 processor. The aptly-named Chromebook 13 is a 13-inch thin-and-light notebook running Google’s Chrome OS with up to 13 hours of battery life and three times the graphical performance of existing Chromebooks using Intel Bay Trail and Samsung Exynos processors.

Acer Chromebook 13 CB5-311_AcerWP_app-02.jpg

The Chromebook 13 is 18mm thick and comes in a white plastic fanless chassis that hosts a 13.3” display, full size keyboard, trackpad, and HD webcam. The Chromebook 13 will be available with a 1366x768 or 1920x1080 resolution panel depending on the particular model (more on that below).

Beyond the usual laptop fixtures, external I/O includes two USB 3.0 ports, HDMI video output, an SD card reader, and a combo headphone/mic jack. Acer has placed one USB port on the left side along with the card reader, and one USB port next to the HDMI port on the rear of the laptop. Personally, I welcome the HDMI port placement, as it means connecting a second display will not result in a cable invading the mousing area should I wish to use a mouse (and it’s even southpaw friendly, Scott!).

The Chromebook 13 looks decent from the outside, but it is the internals where the device gets really interesting. Instead of going with an Intel Bay Trail (or even Celeron/Core i3), Acer has opted to team up with NVIDIA to deliver the world’s first NVIDIA-powered Chromebook.

Specifically, the Chromebook 13 uses a NVIDIA Tegra K1 SoC, up to 4GB RAM, and up to 32GB of flash storage. The K1 offers up four Cortex-A15 CPU cores clocked at 2.1GHz and a graphics unit with 192 Kepler-based CUDA cores. Acer rates the Chromebook 13 at 11 hours with the 1080p panel or 13 hours when equipped with the 1366x768 resolution display. Even being conservative, the Chromebook 13 looks to be the new leader in Chromebook battery life (with the previous leader claiming 11 hours).

acer chromebook 13 tegra k1 quad core multitasking benchmark.jpg

A graph comparing WebGL performance between the NVIDIA Tegra K1, Intel (Bay Trail) Celeron N2830, Samsung Exynos 5800, and Samsung Exynos 5250. Results courtesy NVIDIA.

The Tegra K1 is a powerful little chip, and it is nice to see NVIDIA get a design win here. NVIDIA claims that the Tegra K1, which is rated at 326 GFLOPS of compute performance, offers up to three times the graphics performance of the Bay Trail N2830 and Exynos 5800 SoCs. Additionally, the K1 reportedly uses slightly less power and delivers higher multi-tasking performance. I’m looking forward to seeing independent reviews of this laptop form factor and hoping that the chip lives up to its promises.
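For the curious, the 326 GFLOPS figure is consistent with the K1's 192 Kepler cores each retiring one fused multiply-add (2 FLOPs) per cycle; the ~850MHz GPU clock below is our assumption (NVIDIA quotes the 2.1GHz figure for the CPU cores, not the GPU), so treat this as a rough sanity check rather than an official breakdown:

```python
# Rough sanity check of the 326 GFLOPS rating for the Tegra K1's GPU.
# Assumes: 192 Kepler CUDA cores, 2 FLOPs per core per cycle (one FMA),
# and an assumed ~850 MHz GPU clock.
cuda_cores = 192
flops_per_core_per_cycle = 2   # one fused multiply-add counts as 2 FLOPs
gpu_clock_ghz = 0.85           # assumed GPU clock, not an official spec

gflops = cuda_cores * flops_per_core_per_cycle * gpu_clock_ghz
print(f"{gflops:.1f} GFLOPS")  # 326.4 GFLOPS
```

The arithmetic lines up almost exactly with the quoted number, which suggests NVIDIA is rating peak single-precision FMA throughput.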

The Chromebook 13 is currently up for pre-order and will be available in September starting at $279. The Tegra K1-powered laptop will hit the United States and Europe first, with other countries to follow. Initially, the Europe roll-out will include “UK, Netherlands, Belgium, Denmark, Sweden, Finland, Norway, France, Germany, Russia, Italy, Spain, South Africa and Switzerland.”

Acer Chromebook 13 CB5-311_closed 2.jpg

Acer is offering three consumer SKUs and one education SKU that will be offered exclusively through a reseller. Please see the chart below for specifications and pricing.

Acer Chromebook 13 Model             RAM    Storage (flash)   Display      MSRP
CB5-311-T9B0                         2GB    16GB              1920x1080    $299.99
CB5-311-T1UU                         4GB    32GB              1920x1080    $379.99
CB5-311-T7NN (base model)            2GB    16GB              1366x768     $279.99
Educational SKU (reseller only)      4GB    16GB              1366x768     $329.99

Intel made some waves in the Chromebook market earlier this year with the announcement of several new Intel-powered Chrome devices and the addition of conflict-free Haswell Core i3 options. It seems that it is now time for the ARM(ed) response. I’m interested to see how NVIDIA’s newest mobile chip stacks up to the current and upcoming Intel x86 competition in terms of graphics power and battery usage.

As far as Chromebooks go, if the performance is where Acer and NVIDIA claim, this one definitely looks like a decent option considering the price. I think a head-to-head between the ASUS C200 (Bay Trail N2830, 2GB RAM, 16GB eMMC, and 1366x768 display at $249.99 MSRP) and the Acer Chromebook 13 would be interesting, as the real differentiator (beyond aesthetics) is the underlying SoC. I do wish the Chromebook 13 lineup included a 4GB/16GB/1080p option at, say, $320 MSRP, considering the big price jump to reach 4GB RAM (mostly a result of the doubled flash) in the $379.99 model.

Read more about Chromebooks at PC Perspective!

Source: Acer

Only 1 more sleep to go before the Fragging Frog's VLAN #7

Subject: General Tech | August 8, 2014 - 04:25 PM |
Tagged: VLAN party, kick ass, gaming, fragging frogs

If you haven't yet signed up in the official thread, stocked up on snacks and beverages, and reserved the whole weekend for gaming, then maybe this will excite you enough to change your plans.

By the way, you can play Battlefield 4 free for a week; Origin Game Time is on!  It is a rather popular choice with the Frogs, so if you don't have it, that's no excuse!

You should also consider subscribing to TornTV, where you can find a lot of Fragging Frog and PC Perspective action.  There will also be a live stream where you can show off your skills, or lack thereof, to the whole internet!