Subject: Graphics Cards | June 22, 2016 - 04:18 PM | Jeremy Hellstrom
Tagged: xfx, sapphire, Radeon RX 480, powercolor, gigabyte, asus, amd
An astute reader spotted several more RX 480s on Newegg, lacking clock speeds but providing physical dimensions, albeit with what looks to be a stock image. All three cards seem to be dual-slot designs, with XFX's card measuring 10" x 5", ASUS' at 11.8" x 5.4", and Sapphire's a wide-bodied 11.8" x 6.5". This could indicate a custom cooler, or merely that the cards have rough dimensions listed as opposed to exact sizes.
Unfortunately, the comparison and details page is unavailable, so we don't have a way to see the listed clock speeds, but we can be sure that they will have three DP 1.2 ports and an HDMI out. We will keep an eye out for any more leaks we can share with you.
Subject: General Tech | June 22, 2016 - 02:44 PM | Jeremy Hellstrom
Tagged: gaming, dawn of war III, warhammer 40k
Relic was showing off what DoW III will look like in the usual E3 tradition, with an enhanced 'gameplay' video. Heroes are somewhat different than in the previous game: instead of leading a squad they operate on their own, though Gabriel does have an impressive upgrade to his Thunder Hammer. Also featured is something which is totally not a warjack: an Imperial Knight, a scaled-down Titan with a single pilot. Generally found guarding agri-worlds, they are the first of the Super Units to be revealed. Heroes will be Elite Units, faster but somewhat squishier than Super Units, which will be much slower and vulnerable to anti-vehicle attacks but able to shrug off most other attacks. They can be chosen at the beginning of a mission and then deployed with Elite Points, which you gain during the mission. The quotes over at Rock, Paper, SHOTGUN don't have a lot of detail about how the game will play, but the video sure is pretty.
"For the benefit of good warboys and wargirls, here’s the not-really-gameplay-despite-what-Relic-say look at a grizzled Gabriel Angelos duffing up some Eldar with the help of his Space Marine chums and a 14-metre mech named Imperial Knight Solaria"
Here is some more Tech News from around the web:
- Map Modes, Nomads & Wargoals: Stellaris Patch @ Rock, Paper, SHOTGUN
- It’s All Relative – An In-depth Look At Paradox’s Stellaris @ Techgage
- Fallout 4’s Contraptions Workshop DLC Released @ Rock, Paper, SHOTGUN
- Homefront: The Revolution Tweaking Guide @ OCC
- Civilization VI Trailer Demonstrates Unstacked Cities @ Rock, Paper, SHOTGUN
- Ars’ favorite games of E3: From dueling VR wizards to calm underwater dives @ Ars Technica
- Best Total War: Warhammer Mods @ Rock, Paper, SHOTGUN
- Humble 25th Anniversary Sonic the Hedgehog Bundle launched @ HEXUS
- Frags For The Memories: Quake Is Twenty Today @ Rock, Paper, SHOTGUN
Subject: General Tech | June 22, 2016 - 01:33 PM | Jeremy Hellstrom
Tagged: Intel, amd, antitrust
This is a saga for the ages and a snit worthy of any 2-year-old. Eleven years ago, AMD filed suit against Intel citing questionable business tactics Intel had been using worldwide. Intel was offering discounted parts to retailers if they would use Intel chips exclusively; for instance, if a company like Dell offered an AMD alternative, Intel would raise the price of every Intel component sold to Dell across the board. This is, of course, illegal.
The court cases were settled in 2009: Intel agreed to pay AMD $1.25 billion to settle all outstanding cases in the US and several overseas. In Europe there was a separate case which also went against Intel, with the European Commission fining the company €1.06bn, at the time the largest antitrust fine it had ever levied. Since then Intel has been fighting tooth and nail to find a way not to pay that fine, and while it has not succeeded in its legal battle, it has succeeded in not paying one single cent. Its initial appeal was dismissed in 2014, but that has not stopped Intel from delaying payment, and as of today the fine remains unpaid. The Inquirer posted today about the latest challenge to the ruling, in which Intel's legal team claims it is somehow unfair to be punished for unfair business practices.
Six years on, and over a billion euros that Intel owes is still sitting under a couch cushion in its offices somewhere.
"CHIPMAKER Intel ain't giving up and continues to fight the €1.06bn (around £815m) antitrust fine levied on the firm six years ago."
Here is some more Tech News from around the web:
- Intel's Knights Landing: Fresh x86 Xeon Phi lineup for HPC and AI @ The Register
- Supercomputers in 2030: Lots of exaflops and LOTS of DRAM @ The Register
- Online Backup Firm Carbonite Tells Users To Change Their Passwords Now @ Slashdot
- 7 new-generation programming languages you should get to know @ The Inquirer
Subject: Graphics Cards | June 22, 2016 - 02:40 AM | Tim Verry
Tagged: SLI HB, nvidia, EVGA SLI HB
Earlier this month we reported that EVGA would be producing its own version of Nvidia's SLI High Bandwidth bridges (aka SLI HB). Today, the company unveiled the details we did not previously know, particularly pricing and what the connectors look like.
EVGA is calling the new SLI HB bridges the EVGA Pro SLI HB Bridge, and they will be available in several sizes to accommodate your particular card spacing. Note that the 0-slot, 1-slot, 2-slot, and 4-slot spacing bridges are all for two-card setups; you will not be able to use these bridges for Tri SLI or Quad SLI. While Nvidia did not show the underside of the HB bridges when it first announced them alongside the GTX 1080, thanks to EVGA you can finally see what the connectors look like.
As many surmised, the new high bandwidth bridges use both SLI fingers on each card to connect the two cards together. Previously (using the old-style SLI bridges), it was possible, for example, to connect card A to card B using one set of connectors and card B to card C using the second set. Now, you are limited to two-card multi-GPU setups. That is the downside; the upside is that the HB bridges promise to deliver all of the bandwidth necessary for high speed 4K and NVIDIA Surround display setups. While you will not necessarily see higher frame rates, the HB bridges should allow for improved frame times, which will mean smoother gameplay on those very high resolution monitors!
The new SLI bridges are all black with an EVGA logo in the middle that is backlit by an LED. Users can flip a switch along the bottom edge of the PCB to select from red, green, blue, and white LED colors. In my opinion these bridges look a lot better than the Nvidia SLI HB bridge renders from our Computex story (hehe).
Now, as for pricing: EVGA is pricing its SLI HB bridges at $39.99, with the 2-slot and 4-slot spacing bridges available now and the 0-slot and 1-slot versions set to be available soon (you can sign up to be notified when they go on sale). Hopefully reviews around the net will soon be updated with the new bridges so we can see what impact they really have on multi-GPU gaming performance (or whether they are just better-looking alternatives to the older LED bridges and ribbon bridges)!
- GeForce GTX 1080 and 1070 3-Way and 4-Way SLI will not be enabled for games
- EVGA Forum Discussion on Pro SLI HB Bridges
Subject: Processors | June 21, 2016 - 10:00 PM | Scott Michaud
Update (June 22nd @ 12:36 AM): Errrr. Right. Accidentally referred to the CPU in terms of TFLOPs. That's incorrect -- it's not a floating-point processor. It should be trillions of operations per second (teraops). Whoops! Also, it has a die area of 64 mm², compared to the 520 mm² of something like GF110.
So this is an interesting news post. Graduate students at UC Davis have designed and produced a thousand-core CPU at IBM's facilities. The processor is manufactured on IBM's 32nm process, which is quite old -- about half-way between NVIDIA's Fermi and Kepler if viewed from a GPU perspective. Its die area was not listed, but we've reached out to their press contact for more information. The chip can be clocked up to 1.78 GHz, yielding 1.78 teraops of theoretical performance.
These numbers tell us quite a bit.
The first thing that stands out to me is that the processor is clocked at 1.78 GHz, has 1000 cores, and is rated at 1.78 teraops. This is interesting because modern GPUs (note that this is not a GPU -- more on that later) are rated at twice the clock rate times the number of cores. The factor of two comes from fused multiply-add (FMA), a*b + c, which can easily be implemented as a single instruction and is widely used in real-world calculations. Two mathematical operations in a single instruction yield a theoretical max of 2 × clock × core count. Since this processor does not count the factor of two, it seems its instruction set is massively reduced compared to commercial processors.
If they cut out even FMA, what else did they remove from the instruction set? This would at least partially explain why the CPU has such a high theoretical throughput per transistor compared to, say, NVIDIA's GF110, which has a slightly lower TFLOP rating with about five times the transistor count -- and that's ignoring all of the complexity-saving tricks that GPUs play and this chip does not. Update (June 22nd @ 12:36 AM): Again, none of this makes sense, because it's not a floating-point processor.
"Big Fermi" uses 3 billion transistors to achieve 1.5 TFLOPs when operating on 32 pieces of data simultaneously (see below). This processor does 1.78 teraops with 0.621 billion transistors.
On the other hand, this chip differs from GPUs in that it doesn't use their complexity-saving tricks. GPUs save die space by tying multiple threads together and forcing them to behave in lockstep. On NVIDIA hardware, 32 threads are bound into a “warp”. On AMD, 64 make up a “wavefront”. On Intel's Xeon Phi, AVX-512 packs sixteen 32-bit operations together into a vector and operates on them at once. GPUs use this architecture because, chances are, a really big workload consists of closely related tasks: neighbouring pixels on a screen will be shading the same material with slightly offset geometry, multiple vertices of the same object will be deformed by the same process, and so forth.
This processor, on the other hand, has a thousand independent cores. Again, this is wasteful for tasks that map easily to single-instruction-multiple-data (SIMD) architectures, but the reverse (not wasteful on highly parallel tasks that SIMD is wasteful on) is also true. SIMD makes an assumption about your data and tries to optimize how it maps to the real world -- it's either a valid assumption, or it's not. If it isn't? A chip like this would have multi-fold performance benefits, FLOP for FLOP.
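To illustrate the lockstep trade-off, here's a deliberately simplified Python sketch. Real GPUs do this in hardware with predication masks, not lists, so treat this purely as an illustration of the concept:

```python
# A minimal sketch of why lockstep SIMD wastes lanes on divergent code.
# A "warp" of 32 lanes must execute BOTH sides of a branch, with a mask
# selecting which lanes actually keep each result.

WARP_SIZE = 32  # NVIDIA warp; AMD wavefronts use 64, AVX-512 packs 16 floats

def simd_branch(values):
    mask = [v > 0 for v in values]          # per-lane predicate
    then_side = [v * 2 for v in values]     # ALL lanes compute this...
    else_side = [v - 1 for v in values]     # ...and ALL lanes compute this
    # The mask picks the live result per lane; the dead lane's work is wasted.
    return [t if m else e for m, t, e in zip(mask, then_side, else_side)]

lanes = list(range(-16, 16))                # half the warp goes each way
print(simd_branch(lanes))
# Both branches ran across all 32 lanes: two passes for one pass of useful
# work. A thousand independent cores pay no such penalty -- each core simply
# takes its own branch.
```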
Subject: Graphics Cards | June 21, 2016 - 08:49 PM | Scott Michaud
Tagged: rx 480, Radeon RX 480, polaris 10, Polaris, amd
The AMD Radeon RX 480 is set to launch on June 29th, but a VisionTek model was published a little early (now unpublished -- thanks to our long-time reader, Arbiter, for the heads up). Basically all specifications were already shared, and Ryan wrote about them on June 1st, but the final clock rates were unknown. The VisionTek listing has them: 1120 MHz (5.16 TFLOPs) with a boost of 1266 MHz (5.83 TFLOPs).
Granted, it's possible that the VisionTek model could be overclocked, even though the box and product page don't mark it as a factory-overclocked SKU. Also, 5.16 TFLOPs and 5.83 TFLOPs align pretty closely with AMD's “>5 TFLOPs” rating, so it's unlikely that the official specifications will slide underneath this one. Keep in mind that TFLOP ratings are a theoretical maximum, so real-world benchmarks need to be considered for a true measure of performance. That said, this would put the stock RX 480 in the range of a GTX 980 (somewhere above it at its listed boost clock, and slightly below its expected TFLOP rating when overclocked).
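If you want to check the math yourself, those TFLOP figures fall straight out of the shader count. A quick sketch, assuming Polaris 10's widely reported 2304 stream processors (not something the VisionTek listing itself confirms):

```python
# Checking the leaked clocks against AMD's ">5 TFLOPs" claim.
# 2304 stream processors is Polaris 10's reported shader count (assumed).

STREAM_PROCESSORS = 2304
FLOPS_PER_CLOCK = 2  # fused multiply-add counts as two operations

def tflops(clock_mhz):
    return STREAM_PROCESSORS * FLOPS_PER_CLOCK * clock_mhz * 1e6 / 1e12

print(f"Base  1120 MHz: {tflops(1120):.2f} TFLOPs")   # -> 5.16
print(f"Boost 1266 MHz: {tflops(1266):.2f} TFLOPs")   # -> 5.83
```

Both results match the listing exactly, which suggests these are the real clocks rather than rounding artifacts.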
There is no price listed for the 8GB model, but the 4GB version will be $199 USD.
Subject: Mobile | June 21, 2016 - 06:50 PM | Jeremy Hellstrom
Tagged: asus, Chromebook, Chromebook Flip
ASUS' new Chromebook Flip convertible laptop can be yours for about $250, not too shabby for a tablet, let alone a laptop. However, for this price a few sacrifices must be made, including the use of Chrome OS, as it is a Chromebook after all. The hardware is a quad-core, 32-bit ARM chip from Rockchip, the RK3288C, which can reach up to 1.8GHz, along with 4GB of RAM, 16GB of local eMMC storage, and a two-year subscription to Google Drive for 100GB of additional storage. The Tech Report was quite enamoured of this little 10.1", 1280x800 IPS touchscreen device; it may not be the fastest machine out there, but for the price they felt it to be quite impressive.
"Asus' Chromebook Flip is an all-aluminum convertible PC that runs Google's Chrome OS. Its $240-ish price tag puts it in contention with the budget Windows PCs we usually suggest in our mobile staff picks. We put the Flip to the test to see whether it's a worthy Windows alternative."
Subject: General Tech | June 21, 2016 - 06:33 PM | Jeremy Hellstrom
Tagged: amd, asmedia, Zen, usb 3.1
DigiTimes has heard rumours of a possible defect in the ASMedia USB 3.1 controller that will appear on motherboards for AMD's upcoming Zen, which ASMedia has denied and AMD ignored. The supposed issue stems from increased degradation of transmission speeds over distance, which would require the inclusion of additional retimer and redriver chips. If the issue does exist, the worst repercussion would be an increase in manufacturing costs of $2 to $5 per board; even if that charge is passed on to the consumer, it will have a very small impact on MSRP and is not likely to raise prices to the realm of Intel motherboards. As with all rumours, take this with a grain of salt.
"Commenting on the news, AMD said it is pleased that Zen is on track and will not comment on customer specific board-level solutions, while ASMedia clarified that this is purely a market rumor and its product's signal, stability and compatibility have all passed certification."
Here is some more Tech News from around the web:
- HP warns users to check laptop battery as it may be on fire @ The Register
- All aboard the PCIe bus for Nvidia's Tesla P100 supercomputer grunt @ The Register
- Oculus Rift vs HTC Vive, which should you buy? @ Kitguru
- Inotera dismisses report about Micron seeking to lower acquisition price @ DigiTimes
- Microsoft: Nearly One In Three Azure Virtual Machines Now Are Running Linux @ Slashdot
- Brutal Water Cannon Defeats Summer Heat; Kills it on Documentation @ Hack a Day
- Intel-supported Open HPC stack to land in Q4 @ The Register
- AORUS Computex 2016 Tech Overview @ TechARP
Subject: Graphics Cards | June 21, 2016 - 05:22 PM | Scott Michaud
Tagged: nvidia, fermi, kepler, maxwell, pascal, gf100, gf110, GK104, gk110, GM204, gm200, GP104
Techspot published an article comparing eight GPUs across six high-end dies spanning NVIDIA's last four architectures: Fermi to Pascal. Average frame rates were listed across nine games, each measured at three resolutions: 1366x768 (~720p HD), 1920x1080 (1080p FHD), and 2560x1600 (~1440p QHD).
The results are interesting. Comparing GP104 to GF100, mainstream Pascal is typically on the order of four times faster than big Fermi. Over that time, we've had three full generational leaps in fabrication technology, leading to over twice the number of transistors packed into a die that is almost half the size. It also shows that prices have remained relatively constant, except that the GTX 1080 is sort-of priced in the x80 Ti category despite a die size that places it in the non-Ti class. (They list the 1080 at $600, but you can't really find anything outside the $650-700 USD range.)
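As a rough sanity check on that density claim, here's the arithmetic, using public die figures for GF100 and GP104. These numbers are not taken from the Techspot piece, so treat them as assumptions:

```python
# Rough density math behind "three generational leaps" (40 nm -> 28 nm ->
# 16 nm FinFET). Die sizes and transistor counts below are the commonly
# published figures for each chip, assumed here rather than sourced from
# the article.

gf100 = {"transistors_b": 3.0, "area_mm2": 529}   # Fermi, 40 nm
gp104 = {"transistors_b": 7.2, "area_mm2": 314}   # Pascal, 16 nm

transistor_ratio = gp104["transistors_b"] / gf100["transistors_b"]  # ~2.4x
area_ratio = gp104["area_mm2"] / gf100["area_mm2"]                  # ~0.59x
density_gain = transistor_ratio / area_ratio                        # ~4x

print(f"{transistor_ratio:.1f}x the transistors in {area_ratio:.2f}x the "
      f"area = {density_gain:.1f}x the density")
```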
It would be interesting to see this data set compared against AMD. It's informative for an NVIDIA-only article, though.
Subject: Storage | June 21, 2016 - 04:02 PM | Allyn Malventano
Tagged: V-NAND, SM961, Samsung, PM961, 960 PRO, 960 EVO, 48-layer
We've known Samsung was working on OEM-series SSDs using their new 48-layer V-NAND, and it appears they are getting closer to shipping in volume, so here's a peek at what is to come:
First up are the SM961 and PM961. The SM and PM lines appear to be converging into OEM equivalents of Samsung's 'PRO' and 'EVO' retail products, with MLC flash present in the SM and TLC (possibly with an SLC TurboWrite cache) in the PM. The SM961 has already been spotted for pre-order over at Ram City. Note that they currently list the 1TB, 512GB, and 256GB models, but at the time of this writing all three product titles (incorrectly) state 1TB. That said, pricing appears to be well below current 950 PRO retail for equivalent capacities.
These new parts certainly have impressive specs on paper, with the SM961 claiming a 25-50% gain over the 950 PRO in nearly all metrics thanks to 48-layer V-NAND and an updated 'Polaris' controller. We've looked at plenty of Samsung OEM units in the past, and sometimes specs differ between OEM and retail parts, but it is starting to make sense for Samsung to simply relabel a given OEM / retail part at this point (minus any vendor-requested firmware detuning, like reduced write speeds in favor of increased battery life, etc).
Along with those are two more upcoming parts that do not appear on the above chart: the 960 PRO and EVO, barring any last-second renaming by Samsung. Originally we were expecting Samsung to add a 1TB SKU to the 950 PRO line, but it appears they have changed gears and will shift their 48-layer parts to the 960 series. The other big bonus is that we should also be getting an EVO, which would mark Samsung's first retail M.2 PCIe 3.0 x4 part sporting TLC flash. That product should come in a lot closer to 850 EVO pricing, but offer significantly greater performance over the faster interface. While we don't have specs on these upcoming products, the safe bet is that they will come in very close (if not identical) to those of the aforementioned SM961 and PM961.
All of these upcoming products are based on Samsung's 48-layer V-NAND. Announced late last year, this flash showed measurably reduced latency (in our exclusive Latency Percentile testing) compared to the older 32-layer parts. Given the performance improvements noted above, it seems that even more can be extracted from this new flash when it is connected to a sufficiently fast controller. Previous controllers may have been channel-bandwidth limited with the newest flash, where Polaris can likely open up the interface to higher speed grades.
We await these upcoming launches with bated breath. It's nice to see these parts inching closer to the saturation point of quad-lane PCIe 3.0. Naturally there will be more to follow here, so stay tuned!
Subject: Storage | June 21, 2016 - 02:43 PM | Allyn Malventano
Tagged: usb 3.0, Thunderbolt 2, raid, hdd, drobo, DAS, BeyondRAID, 5Dt, 5D
Today Drobo updated their 5D with Thunderbolt 2, an included mSATA caching SSD, and faster internals:
The new 5Dt (t for Turbo Edition) builds on the strengths of the 5D, which launched three years ago. The distinguishing features remain the same, as this is still a 5-bay model with USB 3.0, but the processor has been upgraded, as has the USB 3.0 chipset, which was a bit finicky with some earlier implementations of the technology.
The changes present themselves at the rear, where we now have a pair of Thunderbolt 2 (20 Gb/s) ports which support display pass-through (up to 4K). Rated speeds climb to 540 MB/s read and 250 MB/s write when using HDDs; SSDs bump those figures up to 545 / 285 MB/s, respectively.
Another feature that remains is the Hot Data Cache technology, but while the mSATA part was optional on the 5D, a 128GB unit comes standard and pre-installed on the 5Dt.
The Drobo 5Dt is available today starting at $899. That is a premium over the 5D, but the increased performance, included SSD, and Thunderbolt connectivity come at a price.
The current (updated) Drobo product lineup.
Full press blast after the break.
Subject: General Tech | June 21, 2016 - 02:32 PM | Ryan Shrout
We don't usually do this, but I have been getting a lot of emails and messages on social media from gamers looking to build their first PC this summer. With the release of the high end GeForce GTX 1080 and GTX 1070 cards this month, and the pending release of the Radeon RX 480 for more budget-minded gamers, there will likely never be a better time to get into PC gaming than now!
Back in February my nephew wanted to undertake building his own gaming PC for the first time. I took that opportunity to build an article and a three-part video series for enthusiasts and DIYers who were either new to the game or needed a refresher on how to put screws to PCB, so to speak. With the numerous emails and messages I've been getting, I thought now would be a great time to bump our story back up and showcase to everyone how easy it can be to build your own PC, whether it be for gaming, VR, productivity or anything else.
You can find the original story right here, sponsored by Gigabyte, but I have also re-embedded the videos below. Yes, the component selections we used in February could use some updating on the graphics card, monitor and maybe power supply, but the rest of the build summary is spot on and the build process remains unchanged.
Good luck to all the budding enthusiasts out there!
Subject: General Tech | June 20, 2016 - 05:40 PM | Jeremy Hellstrom
Tagged: RGB, mouse, lapdog, keyboard, gaming control center, couchmaster, Couch, corsair
The Tech Report would like to back Al up in saying that gaming on a TV from the comfort of your couch is not as weird as some would think. In their case it was Star Wars Battlefront and Civilization V that were tested: Battlefront as it is a console game often played on a TV, and Civ V as it is not a twitch game and the extra screen real estate is useful. They also liked the device, although they might prefer a smaller version so that keyboards without a numpad don't leave as much empty room ... perhaps a PocketDog? Check out their quick review if Al's review almost sold you on the idea.
"Corsair's Lapdog keyboard tray is built to bridge the gap between the desk and the den by giving gamers a way to put a keyboard and mouse right on their laps. We invited the Lapdog into our living room to see whether it's a good boy."
Subject: Graphics Cards | June 20, 2016 - 04:11 PM | Jeremy Hellstrom
Tagged: windows 10, ubuntu, R9 Fury, nvidia, linux, GTX1070, amd
Phoronix wanted to test how the new GTX 1070 and the R9 Fury compare on Ubuntu with new drivers and patches, as well as how they perform on Windows 10. There are two separate articles, as the focus is not old silicon versus new but the performance comparison between the two operating systems. AMD was tested with the Crimson Edition 16.6.1 driver, the AMDGPU-PRO Beta 2 (16.20.3) driver, as well as Mesa 12.1-dev. There were interesting differences between the tested games, as some would only support one of the two Linux drivers. Performance also varied with the game engine: some results came out in ties, others saw Windows 10 pull ahead, and in some cases performance on Linux was significantly better.
NVIDIA's GTX 1080 and 1070 were tested using the 368.39 driver release for Windows and the 367.27 driver for Ubuntu. Again we see mixed results; depending on the game, Linux performance might actually beat Windows, especially if OpenGL is an option.
Check out both reviews to see what performance you can expect from your GPU when gaming under Linux.
"Yesterday I published some Windows 10 vs. Ubuntu 16.04 Linux gaming benchmarks using the GeForce GTX 1070 and GTX 1080 graphics cards. Those numbers were interesting with the NVIDIA proprietary driver but for benchmarking this weekend are Windows 10 results with Radeon Software compared to Ubuntu 16.04 running the new AMDGPU-PRO hybrid driver as well as the latest Git code for a pure open-source driver stack."
Here are some more Graphics Card articles from around the web:
- GeForce GTX 1070 and GTX 1080 FE Overclocking @ [H]ard|OCP
- DX11 vs DX12 Intel 4770K vs 5960X Framerate Scaling @ [H]ard|OCP
- MSI GTX 1080 & GTX 1070 Gaming X 8G Overclocking Review @ OCC
- EVGA GeForce GTX 1080 FTW Gaming ACX 3.0 Review @ HiTech Legion
- Gigabyte GTX 1080 G1 Gaming RGB @ Kitguru
- ASUS GTX 1080 Strix Gaming 8 GB @ techPowerUp
- HIS Radeon R7 360 GREEN iCooler OC 2GB Graphics Card Review @ NikKTech
Subject: Graphics Cards | June 20, 2016 - 01:57 PM | Scott Michaud
Tagged: tesla, pascal, nvidia, GP100
GP100, the “Big Pascal” chip that was announced at GTC, will be coming to PCIe for enterprise and supercomputer customers in Q4 2016. Previously, it had only been announced with NVIDIA's proprietary NVLink connection. In fact, they also gave themselves some lead time with their first-party DGX-1 system, which retails for $129,000 USD, although we expect that was more for yield reasons. Josh calculated that each GPU in that system is worth more than the full wafer its die was manufactured on.
This brings us to the PCIe versions. Interestingly, they have been down-binned from the NVLink version: the boost clock drops to 1300 MHz from 1480 MHz, although that is matched with a slightly lower TDP (250W versus the NVLink version's 300W). This lowers FP16 performance to 18.7 TFLOPs (down from 21.2), FP32 to 9.3 TFLOPs (down from 10.6), and FP64 to 4.7 TFLOPs (down from 5.3). This is where we get to the question: did NVIDIA reduce the clocks to hit a 250W TDP and remain compatible with the passive cooling used by previous Tesla cards, or were the clocks dropped to increase yield?
They are also providing a 12GB version of the PCIe Tesla P100. I didn't realize that GPU vendors could selectively disable HBM2 stacks, but NVIDIA disabled 4GB of memory, which also drops the bus width to 3072-bit. You would think that the simplicity of the circuit would favor dividing work in a power-of-two fashion, but, knowing that they can, it makes me wonder why they did. Again, my first reaction is to question GP100 yield, but you wouldn't think that the HBM interface, being such a small part of the die, would let them reclaim many chips by disabling a chunk, right? That is, unless the HBM2 stacks themselves have yield issues -- which would be interesting.
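Those ratings follow directly from the clocks. A quick sketch, assuming GP100's published 3584 FP32 CUDA cores, with FP16 running at twice and FP64 at half the FP32 rate on this chip:

```python
# How the PCIe P100's ratings scale with the lower boost clock.
# 3584 FP32 cores is GP100's published shader count (assumed here).

FP32_CORES = 3584

def tflops(clock_mhz, rate=1.0):
    # 2 FLOPs per core per clock from FMA, scaled by the precision rate.
    return FP32_CORES * 2 * rate * clock_mhz * 1e6 / 1e12

for name, clock in (("NVLink", 1480), ("PCIe", 1300)):
    print(f"{name}: FP16 {tflops(clock, rate=2):.1f}, "
          f"FP32 {tflops(clock):.1f}, FP64 {tflops(clock, rate=0.5):.1f} TFLOPs")
# NVLink -> 21.2 / 10.6 / 5.3; PCIe -> 18.6 / 9.3 / 4.7. (The article's
# 18.7 FP16 figure for PCIe is a rounding difference.)
```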
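The capacity and bus width move together because each HBM2 stack contributes a fixed slice of both. A minimal sketch, assuming the standard 4GB, 1024-bit-per-stack configuration:

```python
# Why 12GB implies a 3072-bit bus: GP100 carries four HBM2 stacks, each
# 4GB wide at 1024 bits (standard HBM2 stack geometry, assumed here).
# Disable one stack and both capacity and bus width drop by a quarter.

STACK_CAPACITY_GB = 4
STACK_BUS_BITS = 1024

for stacks in (4, 3):
    print(f"{stacks} stacks: {stacks * STACK_CAPACITY_GB} GB, "
          f"{stacks * STACK_BUS_BITS}-bit bus")
# -> 4 stacks: 16 GB, 4096-bit ; 3 stacks: 12 GB, 3072-bit
```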
There is also still no word on a 32GB version. Samsung claimed the memory technology, 8GB stacks of HBM2, would be ready for products in Q4 2016 or early 2017. We'll need to wait and see where, when, and why it will appear.
Subject: General Tech | June 20, 2016 - 01:21 PM | Jeremy Hellstrom
Tagged: acer, security
North American customers of Acer who bought directly from the company between May 12, 2015 and April 28, 2016 may have had their credit card numbers compromised. Its less-than-secure customer database contained customer names, addresses, card numbers, and three-digit security verification codes, all of which have been siphoned off at least once. If this breach affected your account, Acer will be sending you a notification; you can see an example at The Register if you want to be sure you are receiving a valid notice. For those who have already seen fraudulent charges this will be too late to mitigate the pain, but anyone who used Acer's online shop during that period would do well to get their cards changed.
"Acer's insecure customer database spilled people's personal information – including full payment card numbers – into hackers' hands for more than a year."
Here is some more Tech News from around the web:
- Microsoft to evict apps from Windows Store if devs don't generate a PEGI rating @ The Inquirer
- TP-LINK Smart Plug @ Hardware Secrets
- Win a NZXT S340 case @ Kitguru
Subject: Graphics Cards | June 18, 2016 - 10:37 PM | Scott Michaud
Tagged: nvidia, graphics drivers
GeForce Hotfix 368.51 drivers have been released by NVIDIA through their support website. This version only officially addresses flickering at high refresh rates, although its number has been incremented quite a bit since the last official release (368.39), so it's possible that it rolls in other changes, too. That said, I haven't heard of too many specific issues with 368.39, so I'm not quite sure what those would be.
As always with a hotfix driver, NVIDIA pushed it out with minimal testing. It should pretty much only be installed if you have one of the specific issues listed and you don't want to wait for a release that both NVIDIA and Microsoft have looked over (although Microsoft's WHQL certification has been pretty lax since Windows 10).
Oddly enough, they only seem to list 64-bit links for Windows 8.1 and Windows 10. I'm not sure whether this issue doesn't affect Windows 7 and 32-bit versions of 8.1 and 10, or if they just didn't want to push the hotfix out to them for some reason.
Subject: Cases and Cooling | June 17, 2016 - 12:52 PM | Ryan Shrout
Tagged: asus, GTX 1080, strix, vbios
Yesterday, several news stories posted on TechPowerUp and elsewhere claimed that ASUS and MSI were sending out review samples of GTX 1080 and GTX 1070 graphics cards with higher clock speeds than retail parts. The insinuation, of course, is that ASUS was cheating, overclocking the cards going to media in order to artificially inflate reviewed performance.
Image source: TechPowerUp
"MSI and ASUS have been sending us review samples for their graphics cards with higher clock speeds out of the box than what consumers get. The cards TechPowerUp has been receiving run at a higher software-defined clock speed profile than what consumers get out of the box. Consumers have access to the higher clock speed profile, too, but only if they install a custom app by the companies and enable that profile. This, we feel, is not 100% representative of retail cards, and is a questionable tactic by the two companies. This BIOS tweaking could also open the door to more elaborate changes like a quieter fan profile or different power management."
There was, and should be, legitimate concern about these types of moves. Vendor one-upmanship could lead to an arms race of stupidity similar to what we saw with motherboards and base frequencies years ago, where boards would run CPUs at a 101.5 MHz base clock rather than 100 MHz (resulting in a 40-50 MHz total clock speed gain), giving that board a slight performance advantage. However, the differences we are talking about with the GTX 1080 scandal are very small.
- Retail VBIOS base clock: 1683 MHz
- Media VBIOS base clock: 1709 MHz
- Delta: 1.5%
And in reality, that 1.5% clock speed difference (along with the 1% memory clock rate difference) MIGHT result in ~1% of real-world performance change. Those higher clock speeds are easily accessible to consumers by enabling "OC Mode" in the ASUS GPU Tweak II software shipped with the graphics card, and the review sample cards can also be adjusted down to the shipping clock speeds through the same channel.
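For perspective, here's the arithmetic behind those percentages, with a hypothetical ~3 GHz CPU standing in for the old motherboard base-clock comparison:

```python
# The size of the "scandal," in numbers. The VBIOS clocks are from the
# table above; the 3.0 GHz CPU in the comparison is a hypothetical stand-in
# for the boards of that era.

retail, media = 1683, 1709  # MHz, from the VBIOS list above
delta = media / retail - 1
print(f"VBIOS clock delta: {delta:.2%}")          # -> 1.54%

# The old base-clock trick scaled the whole CPU by a similar margin:
cpu_ghz = 3.0
print(f"101.5 MHz BCLK on a {cpu_ghz} GHz CPU: "
      f"+{cpu_ghz * 1000 * 0.015:.0f} MHz")       # -> ~45 MHz
```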
ASUS sent along its official statement on the issue.
ASUS ROG Strix GeForce GTX 1080 and GTX 1070 graphics cards come with exclusive GPU Tweak II software, which provides silent, gaming, and OC modes allowing users to select a performance profile that suits their requirements. Users can apply these modes easily from within GPU Tweak II.
The press samples for the ASUS ROG Strix GeForce GTX 1080 OC and ASUS ROG Strix GeForce GTX 1070 OC cards are set to "OC Mode" by default. To save media time and effort, OC mode is enabled by default as we are well aware our graphics cards will be reviewed primarily on maximum performance. And when in OC mode, we can showcase both the maximum performance and the effectiveness of our cooling solution.
Retail products are in "Gaming Mode" by default, which allows gamers to experience the optimal balance between performance and silent operation. We encourage end-users to try GPU Tweak II and adjust between the available modes, to find the best mode according to personal needs or preferences.
For both the press samples and retail cards, all these modes can be selected through the GPU Tweak II software. There are no differences between the samples we sent out to media and the retail channels in terms of hardware and performance.
Sincerely,
ASUSTeK COMPUTER INC.
While I don't believe that ASUS' intentions were entirely to save me time in my review, and I think that the majority of gamers paying $600+ for a graphics card would be willing to enable OC mode through software, it's clearly a bad move on ASUS' part to have done this. Having a process in place at all to create a deviation from retail cards on press hardware is questionable, other than checking for functionality to avoid shipping DOA hardware to someone on a deadline.
As of today I have been sent updated VBIOSes for the GTX 1080 and GTX 1070 that put them into the exact same mode as the retail cards consumers can purchase.
We are still waiting for a direct response from MSI on the issue as well.
Hopefully this debacle will keep other vendors from attempting to do anything like this in the future. We don't need any kind of "quake/quack" in our lives today.
Subject: Storage | June 16, 2016 - 02:54 PM | Jeremy Hellstrom
Tagged: SK Hynix, enterprise ssd, SE3010
SK Hynix's SE3010 uses their own controller, the eight-channel SH87910AA Pearl, and, in the case of the 960GB model, eight 16nm 128Gb MLC NAND chips with a mysterious H27Q18YEB9a label, plus four capacitors to prevent data loss in the case of unexpected power loss. The drive is optimized for read speeds, and Kitguru's testing certainly shows that the implementation was effective. Check out the write speeds and overall conclusions in the full review.
"When we last looked at an SSD from SK hynix it was from their consumer portfolio. This time around we are looking at a drive from the other part of their storage business in the shape of the SE3010, a read intensive drive for the Enterprise market space."
Here are some more Storage reviews from around the web:
- Crucial's MX300 SSD @ The Tech Report
- Crucial MX300 750GB Limited Edition @ Kitguru
- Crucial MX300 @ The SSD Review
- Samsung 750 EVO 500GB SSD @ Guru of 3D
- Samsung Portable SSD T3 (1TB) @ Bjorn3d
Subject: General Tech | June 16, 2016 - 12:30 PM | Jeremy Hellstrom
Tagged: computex 2016, gx700, avalon, asus
The Tech Report must have been a little worn down by Computex, which is not uncommon, as that week-long show will take out even the hardiest of individuals. Nevertheless, they have managed to compose both themselves and a roundup article of everything they officially witnessed during the show. They picked up some unique photos, such as the innards of the ASUS Avalon modular desktop PC, as you can see below. They also snapped a photo of the twin 330W power supplies required for the water-cooled ASUS GX700 gaming laptop. There are seven pages in total, so grab a beverage and peruse at your leisure.
"A couple weeks ago, we trekked all over Taipei to take in everything that Computex 2016 had to offer. Come with us and see the state of the PC in 2016, as interpreted by dozens of companies both small and large."
Here is some more Tech News from around the web:
- Google kills off Swiffy Flash conversion tool as it moves all ads to HTML5 @ The Register
- Spam King sent down for 30 months @ The Register
- Admins in outcry as Microsoft fix borks Group Policy @ The Register
- Jide's Remix Pro is a Surface killer based on Android 6.0 Marshmallow @ The Register
- Samsung Reveals 2016 Galaxy J Series Smartphones @ TechARP
- Yes 4G LTE Network With VoLTE Capability Revealed @ TechARP