Subject: General Tech | August 12, 2014 - 09:00 PM | Scott Michaud
Tagged: valve, source engine, Source 2, DOTA 2
While it may not seem like it in North America, this is a busy week for video game development. GDC Europe, the Game Developers Conference Europe, is just wrapping up to make room for Gamescom, which will take up the rest of the week. Valve will be there, and people are reading tea leaves to find out why. SteamOS seems likely, but what about their next-generation gaming engine, Source 2? Maybe it already happened?
Valve is the most secretive company professing values of openness that I know of. They are pretty good at preventing leaks from escaping their walls. Recently, Dota 2 was updated with new features and development tools for user-generated maps and game types. The tools currently require 64-bit Windows and a DirectX 11-compatible GPU.
Those don't sound like Source requirements...
And the editor doesn't look like Valve's old tools.
Video Credit: "Valve News Network".
Leaks also point to strings like "tf_imported", "left4dead2_source2", and "left4dead2_imported". This is interesting. Valve is pushing Dota 2, their most popular free-to-play game, into Source 2. Also, because Team Fortress 2 is listed as "tf" rather than "tf2" (just as "dota" is not registered as "dota2", while "left4dead2" keeps its number), the free-to-play Team Fortress 2 could be headed for a perpetual-development mode like Dota 2's. Eventually, it could be pushed to the new engine and given more content.
As for Left4Dead2? I am wondering if it is intended to be a product, rather than an internal (or external) Source 2 tech demo.
Was this what brought Valve to Gamescom, or will we be surprised by other announcements (or nothing at all)?
Subject: Displays | August 12, 2014 - 03:36 PM | Jeremy Hellstrom
Tagged: asus, g-sync, geforce, gsync, nvidia, pg278q, Republic of Gamers, ROG, swift, video
Ryan was not the only one to test the ASUS ROG Swift PG278Q G-Sync monitor; Overclockers Club also received a model to test out. Their impressions of the 27" 2560 x 1440 TN panel were very similar: once they saw this monitor in action, going back to their 30-inch 60Hz IPS monitor was not as enjoyable as it once was. The only bad thing they could say about the display was the MSRP; $800 is steep for any monitor and makes it rather difficult to even consider getting two or more of them for a multi-display system.
”When you get down to it, the facts are that even with a TN panel being used for the high refresh rate, the ASUS ROG Swift PG278Q G-Sync monitor delivers great picture quality and truly impressive gaming. I could go on all day long about how smooth each of the games played while testing this monitor, but ultimately not be able to show you without having you sit at the desk with me. No stuttering, no tearing, no lag; it's like getting that new car and having all the sales hype end up being right on the money. When I flip back and forth between my 60Hz monitor and the PC278Q, its like a night and day experience.”
Here are some more Display articles from around the web:
- AOC G2460PG G-Sync 144Hz 1ms Gaming Monitor @ Kitguru
- Asus ROG Swift PG278Q 144hz G-Sync Monitor @ Kitguru
- 6400×1080: Testing Mixed-Resolution AMD Eyefinity @ eTeknix
- Demystifying NTSC Color And Progressive Scan @ Hack a Day
Subject: Memory | August 12, 2014 - 02:09 PM | Jeremy Hellstrom
Tagged: XPG V3, DDR3 3100, adata
Currently available for a mere $870, the 8GB DDR3-3100 dual-channel kit from ADATA with timings of 12-14-14-36 has to be among the most expensive consumer RAM on the market. We can only hope that DDR4 does not arrive at a similar speed and price point, but instead with slower-clocked DIMMs at a more reasonable price and with improvements to performance. Legit Reviews' testing showed that these DIMMs offer almost no benefit in real usage over DDR3-1600 with tighter timings, though you can get higher scores on synthetic benchmarks. If benchmarking better than the competition and swappable heatspreaders in different colours are attractive to you then you could pick up these DIMMs; otherwise you really won't be getting value for your money.
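To see why the real-world gains are so small, it helps to work out the first-word latency: CAS cycles divided by the memory clock, which is half the DDR data rate. A quick sketch using the ADATA kit's CL12 rating against an assumed budget DDR3-1600 CL8 kit (my comparison point, not one from the review):

```python
def cas_ns(data_rate_mt_s, cl):
    """First-word latency in nanoseconds: CL cycles / memory clock."""
    clock_mhz = data_rate_mt_s / 2   # DDR transfers twice per clock
    return cl / clock_mhz * 1000     # cycles per MHz -> nanoseconds

print(f"DDR3-3100 CL12: {cas_ns(3100, 12):.2f} ns")  # the ADATA kit
print(f"DDR3-1600 CL8:  {cas_ns(1600, 8):.2f} ns")   # assumed tight-timing kit
```

The exotic kit shaves only a couple of nanoseconds off actual access latency, which is why the synthetic bandwidth wins rarely show up in real applications.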
"Gone are the days of being on the cutting edge of memory with DDR3 running at 2133MHz! These days running 2133MHz memory is pretty much considered the norm for a high end gaming rig. If you’re looking to be on the bleeding edge of memory speeds you’re going to be limited to only one or two kits. Today we have one of the fastest kits available on the market to put through the paces, the ADATA XPG V3 DDR3 3100MHz 8GB memory kit. Read on to see if this big dollar kit is worth nearly a thousand dollars."
Here are some more Memory articles from around the web:
- GeIL EVO POTENZA 16GB 2400MHz @ eTeknix
- Kingston HyperX Fury 8GB 1866MHz @ eTeknix
- Patriot Memory 8GB DDR3 1600MHz Viper 3 LP Memory Kit (PVL38G160C9K) Review @ Madshrimps
- GeIL DRAGON RAM 8GB 1600MHz @ eTeknix
Subject: General Tech | August 12, 2014 - 01:07 PM | Jeremy Hellstrom
Tagged: Intel, haswell, tsx, errata
Transactional Synchronization Extensions, aka TSX, are a backwards-compatible set of instructions which first appeared in some Haswell chips as a way to improve concurrency in multithreaded code with as little work for the programmer as possible. TSX was intended to improve the scaling of multithreaded apps running on multi-core processors and has not yet been widely adopted. Adoption has now run into another hurdle: in some cases the use of TSX can cause critical software failures, and as a result Intel will be disabling the instruction set via new BIOS/UEFI updates which will be pushed out soon. If your software uses the new instructions and you wish it to continue to do so, you should avoid updating your motherboard BIOS/UEFI and ask your users to do the same. You can read more about this erratum and other famous problems over at The Tech Report.
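For context, the appeal of TSX-style lock elision is that threads run a critical section speculatively and only fall back to a real lock when a conflict is detected. A toy model of that retry-then-fallback pattern (the class name and version-counter conflict check are my own illustration, not Intel's API; the real thing uses hardware RTM intrinsics like `_xbegin()`/`_xend()`):

```python
import threading

class ElidedCounter:
    """Toy sketch of lock elision: try the update optimistically,
    and only take the real lock when a conflict is detected."""
    def __init__(self):
        self.value = 0
        self.version = 0              # bumped by every committed update
        self._lock = threading.Lock()

    def increment(self, max_retries=3):
        for _ in range(max_retries):
            start = self.version      # "begin transaction"
            result = self.value + 1   # speculative work
            if self.version == start: # no other writer committed meanwhile
                self.value = result   # commit
                self.version += 1
                return
        with self._lock:              # fallback path, like an RTM region
            self.value += 1           # that aborted and retried with a lock
            self.version += 1

ctr = ElidedCounter()
ctr.increment()
print(ctr.value)   # 1
```

The hardware version does the conflict detection transparently in the cache, which is exactly the machinery the erratum affects.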
"The TSX instructions built into Intel's Haswell CPU cores haven't become widely used by everyday software just yet, but they promise to make certain types of multithreaded applications run much faster than they can today. Some of the savviest software developers are likely building TSX-enabled software right about now."
Here is some more Tech News from around the web:
- Nvidia claims Haswell-class performance for Denver CPU core
- Microsoft integrates Cortana into Windows Threshold @ The Inquirer
- AMD launches Firepro graphics updates for CAD workstations @ The Inquirer
- VicoVation Marcus 3 XHD 1296p Car Dash Camera Review @ NikKTech
- Ancient pager tech SMS: It works, it's fab, but wow, get a load of that incoming SPAM @ The Register
NVIDIA Reveals 64-bit Denver CPU Core Details, Headed to New Tegra K1 Powered Devices Later This Year
Subject: Processors | August 12, 2014 - 01:06 AM | Tim Verry
Tagged: tegra k1, project denver, nvidia, Denver, ARMv8, arm, Android, 64-bit
During GTC 2014 NVIDIA launched the Tegra K1, a new mobile SoC that contains a powerful Kepler-based GPU. Initial processors (and the resultant design wins such as the Acer Chromebook 13 and Xiaomi Mi Pad) utilized four ARM Cortex-A15 cores for the CPU side of things, but later this year NVIDIA is deploying a variant of the Tegra K1 SoC that switches out the four A15 cores for two custom (NVIDIA developed) Denver CPU cores.
The custom 64-bit Denver CPU cores use a 7-way superscalar design and run a custom instruction set. Denver is a wide but in-order architecture that can issue up to seven operations per clock cycle. NVIDIA uses on-the-fly binary translation to convert ARMv8 instructions into its own microcode before execution. A software layer backed by a 128MB cache implements this Dynamic Code Optimization technology: the processor examines and optimizes the ARM code, converts it to the custom instruction set, and caches the converted microcode of frequently used code paths, while infrequently executed code bypasses the cache. Using the wider execution engine and Dynamic Code Optimization (which is transparent to ARM developers and does not require updated applications), NVIDIA touts the dual Denver core Tegra K1 as being at least as powerful as the quad- and octo-core-packing competition.
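The core idea behind the optimization cache is simple: pay the cost of translation once, then reuse the result every time the same hot code runs. A minimal sketch of that translate-and-cache loop (the `uop:` encoding and class are purely illustrative, not NVIDIA's actual microcode format):

```python
class TranslationCache:
    """Translate a code block once, then serve it from the cache."""
    def __init__(self):
        self.cache = {}          # stands in for the 128MB DCO buffer
        self.translations = 0    # how many slow translations we paid for

    def translate(self, block):
        self.translations += 1   # the expensive step: ARMv8 -> microcode
        return tuple("uop:" + insn for insn in block)

    def execute(self, block):
        if block not in self.cache:      # cold path: translate and store
            self.cache[block] = self.translate(block)
        return self.cache[block]         # hot path: reuse cached microcode

dco = TranslationCache()
hot_loop = ("ldr x0, [x1]", "add x0, x0, #1", "str x0, [x1]")
for _ in range(1000):                    # the loop body runs 1000 times...
    dco.execute(hot_loop)
print(dco.translations)                  # ...but is only translated once: 1
```

This is why Denver's in-order engine can keep up: the expensive reordering and optimization work happens once per hot region instead of every cycle.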
Further, NVIDIA has claimed that at peak throughput (and in specific situations where application code and DCO can take full advantage of the 7-way execution engine) the Denver-based mobile SoC handily outpaces Intel’s Bay Trail, Apple’s A7 Cyclone, and Qualcomm’s Krait 400 CPU cores. In the results of a synthetic benchmark test provided to The Tech Report, the Denver cores were even challenging Intel’s Haswell-based Celeron 2955U processor. Keeping in mind that these are NVIDIA-provided numbers and likely the best results one can expect, Denver still looks quite a bit more capable than existing mobile cores. (Note that the Haswell chips would likely pull much farther ahead when presented with applications that cannot be easily executed in-order with limited instruction parallelism.)
NVIDIA is ratcheting up mobile CPU performance with its Denver cores, but it is also aiming for an efficient chip and has implemented several power-saving tweaks. Beyond the decision to go with an in-order execution engine (with DCO hopefully mostly making up for that), the beefy Denver cores reportedly feature low-latency power state transitions (e.g. between active and idle states), power gating, dynamic voltage, and dynamic clock scaling. The company claims that “Denver's performance will rival some mainstream PC-class CPUs at significantly reduced power consumption.” In real terms, swapping the Tegra K1's quad-core A15 design for two Denver cores should not result in significantly lower battery life. The two K1 variants are said to be pin compatible so that OEMs and developers can easily bring upgraded models to market with the faster Denver cores.
For those curious, in the Tegra K1 the two Denver cores (clocked at up to 2.5GHz) share a 16-way L2 cache, and each has a 128KB instruction and 64KB data L1 cache to itself. The 128MB Dynamic Code Optimization cache is held in system memory.
Denver is the first (custom) 64-bit ARM processor for Android (with Apple’s A7 being the first 64-bit smartphone chip), and NVIDIA is working on supporting the next generation Android OS known as Android L.
The dual Denver core Tegra K1 is coming later this year and I am excited to see how it performs. The current K1 chip already has a powerful fully CUDA compliant Kepler-based GPU which has enabled awesome projects such as computer vision and even prototype self-driving cars. With the new Kepler GPU and Denver CPU pairing, I’m looking forward to seeing how NVIDIA’s latest chip is put to work and the kinds of devices it enables.
Are you excited for the new Tegra K1 SoC with NVIDIA’s first fully custom cores?
Subject: General Tech | August 11, 2014 - 06:52 PM | Jeremy Hellstrom
Tagged: VLAN party, pcper, kick ass, fragging frogs
Several rowboats worth of snacks, a couple of canoe-fulls of assorted beverages, and boatloads of fun were had this weekend in the highly successful 7th Fragging Frogs VLAN; if you missed it there will be another chance some day, but you really missed an epic event. There were over 120 TeamSpeak connections in a variety of channels and an estimated peak of 78 active participants. Thanks to AMD there is a new game for the Frogs, as Plants vs. Zombies: Garden Warfare was a hit both with the players and with those watching iamApropos' live stream. We also gained some ARMA 2 fans, so it will not only appear again next VLAN but is also in danger of becoming a frequent activity for some members.
Once again there was quite a bit of valuable hardware and software given away, the list includes:
- AMD Fan Kit (headset, 16 GB USB drive, mouse)
- AMD Gaming Series RAM - 8 GB of 2133MHz
- MSI Military Class 4 A88XM-E35 FM2+ motherboard *and* A10-7850K APU
- AMD FX-8350 Processor
- XFX R9 290 Double D graphics card
- Several Plants vs Zombies: Garden Warfare Origin codes
AMD Red Team+
- Murdered Soul Suspect game codes
- Sniper Elite 3 game codes
Please stop by this thread to offer your thanks and support for all the hard work put into these events by Lenny, iamApropos, Spazster, Brandito, Cannonaire, AMD and the Frogs in general.
Subject: Cases and Cooling | August 11, 2014 - 05:23 PM | Jeremy Hellstrom
Tagged: SST-NJ520, SilverStone Tech, Nightjar, Fanless Power Supply, 80 Plus Platinum
Just over a month ago Lee took a look at SilverStone's fanless Nightjar PSU, giving it a Gold Award. It has now arrived in [H]ard|OCP's torture chamber so you can see how well it works in a different system. At the lowest load level of 25% of max the efficiency was 89.79%, and at 50%, 75%, and 100% loads it stayed well above 90%; the temperature difference between the lowest and highest loads was only 9°C, impressive for a fanless PSU. In the end not only was this the best fanless 500W PSU [H] has tested, it was the overall best 500W PSU they've seen, justifying the high asking price.
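To put those efficiency figures in perspective, the waste heat a fanless PSU must shed passively is simply the difference between AC draw and DC output. A quick sketch using the review's 25% figure and an assumed (illustrative) 90% at full load on the 520W rating:

```python
def waste_heat_w(dc_load_w, efficiency):
    """Heat (in watts) the PSU must dissipate at a given DC load."""
    ac_draw_w = dc_load_w / efficiency   # power pulled from the wall
    return ac_draw_w - dc_load_w         # what doesn't reach the PC is heat

print(f"{waste_heat_w(130, 0.8979):.1f} W at 25% load (130W of 520W)")
print(f"{waste_heat_w(520, 0.90):.1f} W at 100% load")
```

Even at full tilt, nearly 60W has to leave the chassis via heatsinks and convection alone, which is why a 9°C spread across load levels is genuinely impressive.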
"What is more quiet than a computer power supply with a fan? You guessed it, a PSU with no fan. This unit has small footprint builds in mind featuring excellent Platinum efficiency, flexible flat cables, and of course no sound profile to speak of. Does the 520 watt rated SilverStone Nightjar hold up when we put it in the incubator?"
Here are some more Cases & Cooling reviews from around the web:
- PSU deathmatch: Cooler Master V750 vs. Rosewill Capstone-750-M @ The Tech Report
- EVGA SuperNOVA P2 1200 W @ techPowerUp
- Super Flower Leadex Platinum 1000W Modular @ eTeknix
- Cooler Master V750 Semi-Modular Power Supply @ eTeknix
- Be Quiet! Power Zone 750 Watt Power Supply Review @HiTech Legion
- Cooler Master V750 Semi-Modular 750 W @ techPowerUp
- Seasonic SS-760XP2 760W Power Supply Review @ NikKTech
- Seasonic Platinum Series 660W Modular @ eTeknix
Subject: Processors | August 11, 2014 - 03:40 PM | Jeremy Hellstrom
Tagged: A10-7800, A6-7400K, linux, amd, ubuntu 14.04, Kaveri
Linux support for AMD's GPUs has not been progressing at the pace many users would like, though it is improving over time; their APUs are another story. Phoronix just tested the A10-7800 and A6-7400K on Ubuntu 14.04 with kernel 3.13 and the latest Catalyst 14.6 Beta. This preview just covers raw performance; you can expect more to be published in the near future covering new features such as the configurable TDP these chips support. The tests show that the new 7800 can keep pace with the previous 7850K, and while the A6-7400K is certainly slower it will be able to handle a Linux machine with relatively light duties. You can see the numbers here.
"At the end of July AMD launched new Kaveri APU models: the A10-7800, A8-7600, and A6-7400K. AMD graciously sent over review samples on their A10-7800 and A6-7400K Kaveri APUs, which we've been benchmarking and have some of the initial Linux performance results to share today."
Here are some more Processor articles from around the web:
- AMD's A10-7800 @ The Tech Report
- AMD A10-7800 APU @ Benchmark Reviews
- AMD A10-7800 @ Kitguru
- AMD Kaveri A8-7600 and A10-7800 APU Review @ Legit Reviews
- AMD A10-7800 “Kaveri” APU @ eTeknix
- AMD A10-7800 Kaveri APU Review @ Hardware Canucks
- Core i7-4790K "Devil's Canyon" overclocking revisited @ The Tech Report
- Intel Core i5 4690K processor @ Hardwareoverclock
Subject: General Tech | August 11, 2014 - 01:47 PM | Jeremy Hellstrom
Tagged: amd, seattle, hot chips
AMD has been showing off a reference Seattle-based server at Hot Chips, and The Tech Report had an opportunity to see it. Eight 64-bit Cortex-A57 cores are set up in pairs, each pair sharing 1MB of L2 cache, while the 8MB of L3 cache is accessible by all eight cores as well as the coprocessors, memory controller, and I/O subsystems. The system can address up to 128GB of DDR3 or DDR4, and you get support for eight SATA 6Gbps ports and eight lanes of PCIe 3.0 to apportion between the slots. There is a secure System Control Processor, a partitioned Cortex-A5 core with its own ROM, RAM, and I/O, which handles power, boot, and configuration control with support for TrustZone, as well as a Cryptographic Coprocessor which accelerates encryption workloads as you might well expect. Read on for more information about AMD's unique new take on server technology.
"For some time now, the features of AMD's Seattle server processor have been painted in broad brush strokes. This morning, at the Hot Chips symposium, AMD is filling in most of the missing details. We were treated to an advance briefing last week, where AMD provided previously confidential information about Seattle's cache network, memory controller, I/O features, and coprocessors."
Here is some more Tech News from around the web:
- AMD to release A68 chipsets in September, sources say @ DigiTimes
- Intel's Broadwell processor revealed @ The Tech Report
- Intel Broadwell Architecture Preview @ Legit Reviews
- 4 Generations Of The AMD APU: How Much Progress Has Been Made? @ eTeknix
- Intruder alert: Cyber thugs are using steganography to slip in malware badness @ The Register
- Hackers root Google's Nest thermostat in 15 seconds @ The Inquirer
- Struggling PC market to push Chromebook sales to 5.2 million in 2014 @ The Inquirer
- Sumo Omni Reloaded @ Phoronix
- Win 3x BioStar A68N-5000 Motherboards @ Kitguru
Coming in 2014: Intel Core M
The era of Broadwell begins in late 2014 and based on what Intel has disclosed to us today, the processor architecture appears to be impressive in nearly every aspect. Coming off the success of the Haswell design in 2013 built on 22nm, the Broadwell-Y architecture will not only be the first to market with a new microarchitecture, but will be the flagship product on Intel’s new 14nm tri-gate process technology.
The Intel Core M processor, as Broadwell-Y has been dubbed, includes impressive technological improvements over previous low-power Intel processors that result in lower power draw, thinner form factors, and longer battery life. Broadwell-Y will stretch into even lower TDPs, enabling 9mm or smaller fanless designs that maintain current battery lifespans. A new 2nd-generation FIVR with a modified power delivery design allows for even thinner packaging and a wider range of dynamic frequencies than before. And of course, along with the shift comes an updated converged core design and improved graphics performance.
All of these changes are in service to what Intel claims is a re-invention of the notebook. Compared to 2010, when the company introduced the original Intel Core processor and redirected its strategy almost completely, Intel Core M and the Broadwell-Y changes will allow for some dramatic platform changes.
Notebook thickness will go from 26mm (~1.02 inches) down to as small as 7mm (~0.28 inches), as Intel has proven with its Llama Mountain reference platform. A 4x reduction in total thermal dissipation while improving core performance by 2x and graphics performance by 7x is something no other company has been able to do over the same time span. And in the end, one of the most important results for the consumer is double the useful battery life from a smaller (and lighter) battery.
But these kinds of advancements just don’t happen by chance – ask any other semiconductor company that is either trying to keep ahead of or catch up to Intel. It takes countless engineers and endless hours to build a platform like this. Today Intel is sharing some key details on how it was able to make this jump including the move to a 14nm FinFET / tri-gate transistor technology and impressive packaging and core design changes to the Broadwell architecture.
Intel 14nm Technology Advancement
Intel consistently creates and builds the most impressive manufacturing and production processes in the world, and that has helped it maintain market leadership over rivals in the CPU space. It is also one of the key tenets that Intel hopes will help it deliver in the world of mobile, including tablets and smartphones. At the 22nm node Intel was the first to offer 3D transistors, what it calls tri-gate and others refer to as FinFET. By focusing on power consumption rather than top-level performance Intel was able to build the Haswell design (as well as Silvermont for the Atom line) with impressive performance and power scaling, allowing thinner and less power-hungry designs than previous generations. Some enthusiasts might think that Intel has done this at the expense of high-performance components, and there is some truth to that. But Intel believes that by committing to this space it builds the best future for the company.
The Waiting Game
NVIDIA G-Sync was announced at a media event held in Montreal way back in October, and promised to revolutionize the way the display and graphics card worked together to present images on the screen. It was designed to remove hitching, stutter, and tearing -- almost completely. Since that fateful day in October of 2013, we have been waiting. Patiently waiting. We were waiting for NVIDIA and its partners to actually release a monitor that utilizes the technology and that can, you know, be purchased.
In December of 2013 we took a look at the ASUS VG248QE monitor, the display for which NVIDIA released a mod kit to allow users that already had this monitor to upgrade to G-Sync compatibility. It worked, and I even came away impressed. I noted in my conclusion that, “there isn't a single doubt that I want a G-Sync monitor on my desk” and, “my short time with the NVIDIA G-Sync prototype display has been truly impressive…”. That was nearly 7 months ago and I don’t think anyone at that time really believed it would be THIS LONG before the real monitors began to show in the hands of gamers around the world.
Since NVIDIA’s October announcement, AMD has been on a marketing path with a technology they call “FreeSync” that claims to be a cheaper, standards-based alternative to NVIDIA G-Sync. They first previewed the idea of FreeSync on a notebook device during CES in January and then showed off a prototype monitor in June during Computex. Even more recently, AMD has posted a public FAQ that gives more details on the FreeSync technology and how it differs from NVIDIA’s creation; it has raised something of a stir with its claims on performance and cost advantages.
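Whatever the branding, both technologies attack the same timing problem: a fixed-refresh monitor makes a finished frame wait for the next scanout, while a variable-refresh display shows it the moment it is done. A small sketch of that timing model (the frame times are invented for illustration):

```python
def present_times(frame_done_ms, refresh_hz=None):
    """When frames reach the screen. refresh_hz=None models variable refresh."""
    if refresh_hz is None:
        return frame_done_ms                  # shown the moment they finish
    period = 1000.0 / refresh_hz
    # fixed refresh: each frame waits for the next scanout tick
    return [(t // period + 1) * period for t in frame_done_ms]

frames = [20.0, 45.0, 66.0, 90.0, 111.0]      # an uneven ~45 fps cadence
print([round(t, 1) for t in present_times(frames, 60)])  # quantized -> judder
print(present_times(frames))                  # matches the render cadence
```

On the 60Hz path, frame-to-frame gaps alternate between one and two refresh periods even though the game is rendering fairly steadily; that alternation is the stutter these displays eliminate.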
That doesn’t change the product that we are reviewing today of course. The ASUS ROG Swift PG278Q 27-in WQHD display with a 144 Hz refresh rate is truly an awesome monitor. What did change is the landscape, from NVIDIA's original announcement until now.
Subject: General Tech, Mobile | August 11, 2014 - 08:00 AM | Tim Verry
Tagged: webgl, tegra k1, nvidia, geforce, Chromebook, Bay Trail, acer
Today Acer unveiled a new Chromebook powered by an NVIDIA Tegra K1 processor. The aptly-named Chromebook 13 is a 13-inch thin-and-light notebook running Google’s Chrome OS with up to 13 hours of battery life and three times the graphics performance of existing Chromebooks using Intel Bay Trail and Samsung Exynos processors.
The Chromebook 13 is 18mm thick and comes in a white plastic fanless chassis that hosts a 13.3” display, full size keyboard, trackpad, and HD webcam. The Chromebook 13 will be available with a 1366x768 or 1920x1080 resolution panel depending on the particular model (more on that below).
Beyond the usual laptop fixtures, external I/O includes two USB 3.0 ports, HDMI video output, an SD card reader, and a combo headphone/mic jack. Acer has placed one USB port on the left side along with the card reader and one USB port next to the HDMI port on the rear of the laptop. Personally, I welcome the HDMI port placement as it means connecting a second display will not result in a cable invading the mousing area should I wish to use a mouse (and it’s even southpaw-friendly, Scott!).
The Chromebook 13 looks decent from the outside, but it is the internals where the device gets really interesting. Instead of going with an Intel Bay Trail (or even Celeron/Core i3), Acer has opted to team up with NVIDIA to deliver the world’s first NVIDIA-powered Chromebook.
Specifically, the Chromebook 13 uses a NVIDIA Tegra K1 SoC, up to 4GB RAM, and up to 32GB of flash storage. The K1 offers up four A15 CPU cores clocked at 2.1GHz, and a graphics unit with 192 Kepler-based CUDA cores. Acer rates the Chromebook 13 at 11 hours with the 1080p panel or 13 hours when equipped with the 1366x768 resolution display. Even being conservative, the Chromebook 13 looks to be the new leader in Chromebook battery life (with the previous leader claiming 11 hours).
A graph comparing WebGL performance between the NVIDIA Tegra K1, Intel (Bay Trail) Celeron N2830, Samsung Exynos 5800, and Samsung Exynos 5250. Results courtesy NVIDIA.
The Tegra K1 is a powerful little chip, and it is nice to see NVIDIA get a design win here. NVIDIA claims that the Tegra K1, which is rated at 326 GFLOPS of compute performance, offers up to three times the graphics performance of the Bay Trail N2830 and Exynos 5800 SoCs. Additionally, the K1 reportedly uses slightly less power and delivers higher multi-tasking performance. I’m looking forward to seeing independent reviews of this laptop form factor, and hoping that the chip lives up to its promises.
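That 326 GFLOPS figure is consistent with simple peak-throughput math for 192 Kepler cores, each retiring one fused multiply-add (two FLOPs) per clock; the ~852MHz GPU clock below is my assumption, since Acer has not published the clock for this design:

```python
cuda_cores = 192
flops_per_core_per_clock = 2      # one FMA counts as two FLOPs
gpu_clock_hz = 852e6              # assumed clock, not confirmed for this laptop

peak_gflops = cuda_cores * flops_per_core_per_clock * gpu_clock_hz / 1e9
print(f"peak: {peak_gflops:.0f} GFLOPS")   # in line with the quoted 326
```

As always, peak numbers assume every core is doing an FMA every cycle; sustained WebGL performance will land well below this.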
The Chromebook 13 is currently up for pre-order and will be available in September starting at $279. The Tegra K1-powered laptop will hit the United States and Europe first, with other countries to follow. Initially, the Europe roll-out will include “UK, Netherlands, Belgium, Denmark, Sweden, Finland, Norway, France, Germany, Russia, Italy, Spain, South Africa and Switzerland.”
Acer is offering three consumer SKUs and one education SKU that will be offered exclusively through a reseller. Please see the chart below for specifications and pricing.
| Acer Chromebook 13 Model | System Memory (RAM) | Storage (flash) | Display | MSRP |
| CB5-311-T9B0 | 2GB | 16GB | 1920 x 1080 | $299.99 |
| CB5-311-T1UU | 4GB | 32GB | 1920 x 1080 | $379.99 |
| CB5-311-T7NN (Base Model) | 2GB | 16GB | 1366 x 768 | $279.99 |
| Educational SKU (Reseller Only) | 4GB | 16GB | 1366 x 768 | $329.99 |
Intel made some waves in the Chromebook market earlier this year with the announcement of several new Intel-powered Chrome devices and the addition of conflict-free Haswell Core i3 options. It seems that it is now time for the ARM(ed) response. I’m interested to see how NVIDIA’s newest model chip stacks up to the current and upcoming Intel x86 competition in terms of graphics power and battery usage.
As far as Chromebooks go, if the performance is where Acer and NVIDIA claim, this one definitely looks like a decent option considering the price. I think a head-to-head between the ASUS C200 (Bay Trail N2830, 2GB RAM, 16GB eMMC, and 1366x768 display at $249.99 MSRP) and the Acer Chromebook 13 would be interesting, as the real differentiator (beyond aesthetics) is the underlying SoC. I do wish the Chromebook 13 lineup included a 4GB/16GB/1080p option at, say, $320 MSRP, though, considering the big price jump to get 4GB of RAM (mostly a result of the doubling of flash) in the $379.99 model.
Read more about Chromebooks at PC Perspective!
Subject: General Tech | August 8, 2014 - 07:25 PM | Jeremy Hellstrom
Tagged: VLAN party, kick ass, gaming, fragging frogs
If you haven't yet signed up in the official thread, stocked up on snacks and beverages and reserved all of the weekend for gaming then maybe this will excite you enough to change your plans.
By the way, Play Battlefield 4 Free for a Week. Origin Game Time is On! It is a rather popular choice with the Frogs so if you don't have it that is no excuse!
You should also consider subscribing to TornTV where you can find a lot of Fragging Frog and PC Perspective action. There will also be a live stream where you can show off your skills, or lack thereof, to the whole internet!
Subject: Systems | August 8, 2014 - 05:17 PM | Jeremy Hellstrom
Tagged: gigabyte, gigabyte brix, brix, BXi5G-760
This particular Brix is a lot more powerful than most, with an i5-4200H and what Gigabyte refers to as a GeForce GTX 760 with 6GB of GDDR5. The GPU is not quite the same as the desktop GTX 760: it has 1344 shaders as opposed to 1152, with a slightly lower 967MHz Boost clock for the GPU and 1250MHz for the RAM. The storage and RAM are left up to you, with the assumption that an SSD will be installed like it was in The Tech Report's review model. The small system was capable of 1080p gaming at medium to high quality settings, which is rather impressive considering the thermal constraints.
"Gigabyte's Brix Gaming BXi5G-760 is a mini-PC on steroids, with a discrete Nvidia GPU and a dual-core Haswell CPU inside. Can it hang with traditional gaming PCs? We put it through some tough tests to find out."
Here are some more Systems articles from around the web:
- DinoPC Slayer 15.6″ GTX 870M and Magma Wrath GTX 770 @ Kitguru
- PC Specialist Vanquish 270X System @ eTeknix
- PCSpecialist Optimus V X13 @ Kitguru
- Cube Raptor Gaming PC @ eTeknix
- MSI Nightblade Z97 Barebones System Review @HiTech Legion
- Armari Magnetar M16E-AW1200-GPU Workstation @ Kitguru
- TR's July 2014 System Guide
Subject: Storage | August 8, 2014 - 02:16 PM | Jeremy Hellstrom
Tagged: ultrastar, hgst, enterprise ssd, 20nm
HGST has refreshed their 12Gbit/s SAS series of Ultrastar SSDs with denser 20nm NAND, which has upped the read speeds though the writes do suffer somewhat. As these are enterprise drives they have rather impressive lifespans; the 800GB model is rated at 25 full drive writes per day for the length of the 5-year warranty. They also offer encryption and erasure tools superior to those on enthusiast drives, along with a much higher price tag. The Register also offers information on the new Ultrastar HDDs and a link to the spec sheets, but as of yet we do not have any benchmarks.
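Those endurance numbers add up quickly. For the 800GB model, 25 full drive writes per day over the 5-year warranty works out like this:

```python
capacity_gb = 800
drive_writes_per_day = 25
warranty_years = 5

total_drive_writes = drive_writes_per_day * 365 * warranty_years
total_written_pb = total_drive_writes * capacity_gb / 1e6   # GB -> PB (decimal)

print(total_drive_writes, "full drive writes")
print(f"{total_written_pb:.1f} PB written over the warranty")
```

That is tens of petabytes of rated writes, which puts the gulf between these drives and consumer SSDs (typically rated in the tens or hundreds of terabytes) in perspective.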
"HGST has refreshed its Ultrastar enterprise SSD line, using denser 20nm NAND to replace the previous 25nm flash, doubling capacity, upping read performance but lowering write performance a tad in the process."
Here are some more Storage reviews from around the web:
- Toshiba THNSNJ HG6 256 GB @ techPowerUp
- Crucial MX100 512GB SSD Review @ Hardware Canucks
- Hynix SH920 128GB SSD @ Kitguru
- Kingston SSDNow V310 960GB SSD Review @ Legit Reviews
- Silicon Power Slim S60 240GB SSD Review @ NikKTech
- Kingston HyperX Fury 240GB @ eTeknix
- Samsung 845DC PRO @ The SSD Review
- Leef Supra 3.0 and Ice 3.0 16GB Flash Drive Review @ Legit Reviews
- Netgear ReadyNAS 516 @ Legion Hardware
- Synology DS414slim @ techPowerUp
- Thecus N7710-G NAS Server Review @ NikKTech
- Synology Embedded DataStation EDS14 @ Legion Hardware
- Seagate Enterprise Capacity 3.5 HDD v4 6TB Review @ Neoseeker
Subject: General Tech | August 8, 2014 - 12:39 PM | Jeremy Hellstrom
Tagged: ssd, radeon r7, confusing, amd, 19nm
In a branding move that has no possibility of causing confusion, AMD has announced the name of their new SSD line, and it seems the next Radeon R7 240 you buy might be a GPU, or then again it might not be. Brand confusion aside, the drives will use 19nm Toshiba NAND fabbed at SanDisk and are predicted to perform similarly to other drives with the same NAND, with reads of 550MB/s and writes of 530MB/s. However, as we well know, the key to performance lies in the controller and the number of channels, so it will be interesting to see the first benchmarks. As The Inquirer points out, this could lead to the release of AMD-branded machines containing an AMD-made APU, RAM, SSD, and discrete GPU.
"The Radeon R7 range consisting of 120GB, 240GB and 480GB flavours and is designed to appeal to the gaming market, putting it in direct competition with Micron's Crucial range which expanded to include the MX100, which premiered earlier this year claiming 89 percent performance improvement over a standard hard drive."
Here is some more Tech News from around the web:
- Intel's office of the future will be completely wireless @ The Inquirer
- Microsoft throws old versions of Internet Explorer under the bus @ The Register
- How to Image and Clone Hard Drives with Clonezilla @ Linux.com
- Supermicro adorns servers with bright and shiny ULLtraDIMMs @ The Register
- A Do-It-Yourself Air Conditioner with Evaporative Cooling 5 Gallon Bucket @ Hack a Day
Subject: Storage, Shows and Expos | August 7, 2014 - 05:37 PM | Allyn Malventano
Tagged: ssd, SM2256, silicon motion, sata, FMS 2014, FMS
Silicon Motion has announced their SM2256 controller. We caught a glimpse of this new controller on the Flash Memory Summit show floor:
The big deal here is the fact that this controller is a complete drop-in solution that can drive multiple different types of flash, as seen below:
The SM2256 can drive all variants of TLC flash.
The controller itself looks to have decent specs, considering it is meant to drive 1xnm TLC flash: just under 100k random 4K IOPS. Writes, at 400MB/sec, are understandably below the saturation point of SATA 6Gb/sec (writing to TLC is tricky!). There is also mention of Silicon Motion's NANDXtend Technology, which claims to add some extra ECC and DSP tech toward the end of increasing the ability to correct for bit errors in the flash (errors that become more likely as you venture into eight-voltage-level, 3-bit-per-cell territory).
Subject: Storage, Shows and Expos | August 7, 2014 - 05:25 PM | Allyn Malventano
Tagged: ssd, sata, PS5007, PS3110, phison, pcie, FMS 2014, FMS
At the Flash Memory Summit, Phison has updated their SSD controller lineup with a new quad-core SSD controller.
The PS3110 is capable of handling TLC as well as MLC flash, and the added horsepower lets it push as high as 100k IOPS.
Also seen was an upcoming PS5007 controller, capable of pushing PCIe 3.0 x4 SSDs at 300k IOPS and close to 3GB/sec sequential throughput. While no actual devices using this new controller were on display, we did spot the full specs:
Full press blast on the PS3110 appears after the break:
Subject: General Tech, Storage, Shows and Expos | August 7, 2014 - 02:17 PM | Scott Michaud
Tagged: ssd, phase change memory, PCM, hgst, FMS 2014, FMS
According to an HGST press release, the company will bring an SSD based on phase change memory to the 2014 Flash Memory Summit in Santa Clara, California. They claim that it will actually be at their booth, on the show floor, for two days (August 6th and 7th).
The device, which is not branded, connects via PCIe 2.0 x4. It is designed for speed. It is allegedly capable of 3 million IOPS, with just 1.5 microseconds required for a single access. For comparison, the 800GB Intel SSD DC P3700, recently reviewed by Allyn, had a dominating lead over the competitors that he tested. It was just shy of 250 thousand IOPS. This is, supposedly, about twelve times faster.
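The "about twelve times faster" figure is easy to sanity-check with the numbers above (a quick sketch, using only the figures quoted in this post):

```python
# Rough comparison of HGST's claimed PCM IOPS figure against the
# Intel SSD DC P3700 result referenced above.
pcm_iops = 3_000_000    # HGST's claimed random IOPS
p3700_iops = 250_000    # "just shy of" this figure in Allyn's review

speedup = pcm_iops / p3700_iops
print(f"PCM prototype: roughly {speedup:.0f}x the P3700's random IOPS")
# -> PCM prototype: roughly 12x the P3700's random IOPS
```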
While it is based on a different technology than NAND, and thus not directly comparable, the PCM chips are apparently manufactured at 45nm. Regardless, that is significantly larger lithography than competing products. Intel is manufacturing their flash at 20nm, while Samsung managed to use a 30nm process for their recent V-NAND launch.
What does concern me is the capacity per chip. According to the press release, it is 1Gb per chip. That is about two orders of magnitude smaller than what NAND is pushing. That is, also, the only reference to capacity in the entire press release. It makes me wonder how small the total drive capacity will be, especially compared to RAM drives.
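The "two orders of magnitude" claim works out if you compare the 1Gb PCM die against typical NAND die densities of the era (the 128Gb NAND figure below is a common contemporary density, not a number from the press release):

```python
# Back-of-the-envelope die-capacity comparison, in gigabits.
import math

pcm_die_gbit = 1       # per the HGST press release
nand_die_gbit = 128    # an assumed typical NAND die density circa 2014

ratio = nand_die_gbit / pcm_die_gbit
magnitudes = math.log10(ratio)
print(f"NAND die holds {ratio:.0f}x more: ~{magnitudes:.1f} orders of magnitude")
# -> NAND die holds 128x more: ~2.1 orders of magnitude
```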
Of course, because it does not seem to be a marketed product yet, nothing was said about pricing or availability. It will almost definitely be aimed at the enterprise market, though (especially given HGST's track record).
*** Update from Allyn ***
I'm hijacking Scott's news post with photos of the actual PCM SSD, from the FMS show floor:
In case you all are wondering, yes, it does in fact work:
One of the advantages of PCM is that it is addressed at smaller sections as compared to typical flash memory. This means you can see ~700k *single sector* random IOPS at QD=1. You can only pull off that sort of figure with extremely low IO latency. They only showed this output at their display, but ramping up QD > 1 should reasonably lead to the 3 million figure claimed in their release.
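The relationship between queue depth, latency, and IOPS follows Little's Law (IOPS ≈ outstanding IOs / per-IO latency), so the ~700k QD=1 figure directly implies a per-IO latency, and lines up nicely with the 1.5 microsecond access time claimed in the release. A minimal sketch, using only the figures from this post:

```python
# Little's Law for storage: IOPS ~= queue_depth / per-IO latency.
qd1_iops = 700_000            # ~700k single-sector random IOPS at QD=1
latency_s = 1 / qd1_iops      # implied per-IO service time at QD=1
print(f"Implied QD=1 latency: {latency_s * 1e6:.2f} microseconds")

# Reaching the claimed 3 million IOPS would need at least this many
# outstanding IOs, assuming (optimistically) perfect scaling:
target_iops = 3_000_000
min_qd = target_iops / qd1_iops
print(f"Needs roughly QD >= {min_qd:.1f}")
```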
Subject: Motherboards | August 7, 2014 - 02:14 PM | Jeremy Hellstrom
Tagged: SupremeFX 2014, LANGuard, KeyBot, GameFirst III, FM2+, crossblade ranger, ASUS ROG, asus, amd
ASUS Republic of Gamers has just released its first FM2+ board, the Crossblade Ranger, which should be available for ~$160 in the next few days, a perfect base for the A10-7800 and A6-7400K which Josh just reviewed. There is a host of features on this board, from an updated SupremeFX 2014 audio system that eliminates interference and adjusts impedance to the LANGuard and GameFirst III networking enhancements. KeyBot is also new; it allows you to program macros on any USB keyboard regardless of the capabilities of the keyboard itself. Check out the full release for a breakdown of the features.
Fremont, CA (6th August, 2014) - ASUS Republic of Gamers (ROG) today announced the Crossblade Ranger, the first AMD FM2+ motherboard to carry the revered ROG brand name, packed with game-boosting features for an AMD-based gaming powerhouse that is beyond compare. The Crossblade Ranger’s core benefits include the best gaming networking with Intel Gigabit Ethernet, the best gaming audio from SupremeFX 2014, the best gaming interface with KeyBot and the best gaming performance.
Best gaming networking
The Crossblade Ranger is fitted with state-of-the-art Intel Gigabit Ethernet that delivers better throughput and lower power consumption than competing solutions from other vendors.
The new motherboard’s networking capabilities additionally benefit from ROG-exclusive GameFirst III technology for optimal online gameplay. This advanced network-optimization software assigns top priority to game-data packets, allocating them more bandwidth to ensure the best online-gaming experience and clear, stutter-free online team-chat — all controlled with ROG’s usual intuitive flair.
These features are coupled with LANGuard Ethernet socket technology. LANGuard works by employing advanced filtering components with low impedance capacitors to reduce noise and improve throughput and also includes ESD and surge-protection to prevent damage from lightning strikes and static-electricity discharges.
Best gaming audio
Immersive audio is essential for gaming, so the Crossblade Ranger is engineered with SupremeFX 2014. At its core the SupremeFX 2014 solution uses PCB isolation techniques to minimize electromagnetic interference (EMI) and premium ELNA audio capacitors to provide precise 7.1 channel audio that’s on par with the best soundcards.
SupremeFX 2014 features dedicated hardware for features such as Sonic SenseAmp and Sonic SoundStage. Sonic SenseAmp automatically detects analog-audio front-panel (AAFP) headphone impedance and adjusts the amp gain to provide the best volume control range - taking the guesswork and hassle out of setting gain manually.
Sonic Soundstage is a hardware-based solution that features preset audio profiles for a variety of gaming genres. First-person shooter (FPS), racing, combat and sports game presets are available via an onboard hardware switch or via the included (Windows) driver package. The benefit of including an onboard hardware switch is that the Sonic Soundstage presets can be applied without needing a driver, so it works with any operating system. For Windows users, the presets are fully customizable via software, allowing one to tailor sound to personal preference.
SupremeFX 2014 also includes additional software features to provide a competitive gaming edge and improve immersion.
Designed for first-person shooters (FPS), Sonic Radar II displays a stealthy overlay that shows what opponents and teammates are up to. Players see the precise direction and origin of in-game sounds such as gunshots, footsteps and call-outs, enabling them to hone enemy-pinpointing skills.
Sonic Studio can be used to create virtual surround modes for stereo headsets, provides EQ controls to tune various parts of the audio spectrum and includes noise reduction algorithms for microphones – this all adds up to make SupremeFX 2014 the complete gaming audio solution.
Best gaming interface
The Crossblade Ranger includes KeyBot, a clever tool that lets users instantly ‘upgrade’ an existing keyboard simply by attaching it to the dedicated USB socket.
Once connected, the KeyBot microprocessor is activated and the user is able to use their current keyboard to control multimedia playback, launch favorite applications or assign macros to specific keys —perfect for automating complicated in-game key sequences without the need for an expensive gaming keyboard.
Best gaming performance and experience
Being an ROG motherboard, the Crossblade Ranger is infused with core ROG DNA.
ROG’s Auto-Tuning technology enables the Crossblade Ranger to unleash the true power of AMD APUs with just a few mouse clicks. Thanks to the TPU microprocessor (Turbo Processing Unit), the Auto-Tuning routine applies a CPU overclock without the need to enter UEFI - perfect for users that are new to the platform.
For users that prefer manual control, the TPU microprocessor and bundled Turbo-V application allow real-time voltage adjustments within the Windows operating system to simplify the process of overclocking a system. Naturally, the ROG UEFI is also chock-full of overclocking functions that help squeeze every ounce of performance from the AMD FM2+ platform.
To keep things cool and quiet, we’ve included five onboard fan headers – each with PWM (4-pin) or DC (3-pin) control. Extensive fan control options are available within UEFI or the automated Fan Xpert 3 calibration utility. Using either method, anyone can customize fan profiles in order to maximize cooling efficiency and eliminate unnecessary fan noise. The level of control on offer here sets a new standard for the FM2+ platform and completely negates the need for using a dedicated and expensive fan controller.
The Crossblade Ranger is also compatible with the ROG Front Base dual-bay gaming panel. The ROG Front Base enables one-click performance boosting, fan-controls, shielded front audio input/outputs, audio profile selection, volume control and real-time system monitoring to provide everything a gamer needs within a single unit.
AVAILABILITY & PRICING
The ASUS ROG Crossblade Ranger lands with an MSRP of $159.99 and will be available at all major online retailers in August.