NVIDIA's Ansel Technology
“In-game photography” is an interesting concept. Not too long ago, it was difficult to capture even the user's direct experience with a title. Print Screen could only hold a single screenshot at a time, which left room for Steam and FRAPS to provide a better user experience. FRAPS also made video capture accessible to the end user, but it output huge files and, while it wasn't too expensive, it had to be purchased online, which was a significant hurdle ten-or-so years ago.
Seeing that their audience would enjoy video capture, NVIDIA introduced ShadowPlay a couple of years ago. The feature allowed users not only to record video, but also to capture the last few minutes retroactively. It did this with hardware acceleration, and it did it for free (on compatible GPUs). While I don't use ShadowPlay myself, preferring the control of OBS, it's a good example of how NVIDIA wants to support its users. They see these features as a value-add that draws people to their hardware.
Introduction, Specifications, and Packaging
ICY DOCK has made themselves into a sort of Swiss Army knife of dockable and hot-swappable storage solutions. From multi-bay desktop external devices to internal hot-swap enclosures, these guys have just about every conceivable way to convert storage form factors covered. We’ve looked at some of their other offerings in the past, but this week we will focus on a pair of their ToughArmor series products.
As you can no doubt see here, these two enclosures aim to cram as many 2.5” x 7mm form factor devices into the smallest space possible. They also offer hot swap capability and feature front panel power + activity LEDs. As the name would imply, these are built to be extremely durable, with ICY DOCK proudly running them over with a truck in some of their product photos.
Read on for our full review of the ICY DOCK ToughArmor MB998SP-B and MB993SK-B!
Lower Power, Same Performance
AMD is in a strange position in that there is a lot of excitement about their upcoming Zen architecture, but we are still many months away from that introduction. AMD obviously needs to keep the dollars flowing in, and part of that means that we get refreshes now and then of current products. The “Kaveri” products that have been powering the latest APUs from AMD have received one of those refreshes. AMD has done some redesigning of the chip and tweaked the process technology used to manufacture them. The resulting product is the “Godavari” refresh that offers slightly higher clockspeeds as well as better overall power efficiency as compared to the previous “Kaveri” products.
One of the first refreshes was the A8-7670K, which hit the market in November of 2015. This is a slightly cut-down part that features 6 GPU compute units vs. the 8 of a fully enabled Godavari chip. It continues to be an FM2+ based chip with a 95 watt TDP. The clockspeed of this part ranges from 3.6 GHz to 3.9 GHz. The GPU portion runs at the same 757 MHz as the original A10-7850K. It is interesting to note that it is still a 95 watt TDP part with essentially the same clockspeeds as the 7850K, but with two fewer GPU compute units.
The other product being covered here is a bit more interesting. The A10-7860K looks to be a larger improvement from the previous 7850K in terms of power and performance. It shares the same CPU clockspeed range as the 7850K (3.6 GHz to 3.9 GHz), but improves upon the GPU clockspeed by hitting around 800 MHz. At first this seems underwhelming until we realize that AMD has lowered the TDP from 95 watts down to 65 watts. Less power consumed and less heat produced for the same performance from the CPU side and improved performance from the GPU seems like a nice advance.
AMD continues to utilize GLOBALFOUNDRIES 28 nm Bulk/HKMG process for their latest APUs and will continue to do so until Zen is released late this year. This is not the same 28 nm process that we were introduced to over four years ago. Over that time improvements have been made to improve yields and bins, as well as optimize power and clockspeed. GF also can adjust the process on a per batch basis to improve certain aspects of a design (higher speed, more leakage, lower power, etc.). They cannot produce miracles though. Do not expect 22 nm FinFET performance or density with these latest AMD products. Those kinds of improvements will show up with Samsung/GF’s 14nm LPP and TSMC’s 16nm FF+ lines. While AMD will be introducing GPUs on 14nm LPP this summer, the Zen launch in late 2016 will be the first AMD CPU to utilize that advanced process.
Introduction and Technical Specifications
Courtesy of ECS
The ECS Z170-Claymore motherboard is the newest offering in ECS' L337 product line with support for the Intel Z170 Express chipset. The Z170-Claymore is a more enthusiast-friendly design than some of their previous offerings, with a slew of features sure to entice gamers and power users alike. ECS priced this board competitively with an MSRP of $159.99, a price point sure to appeal to a wide swath of users given the board's integrated feature set.
ECS pulled out all the stops with the Z170-Claymore, integrating a host of features together with high-quality components for a compelling product. The board was designed with a 12-phase digital power delivery system, using high-efficiency chokes and MOSFETs as well as solid core capacitors for optimal board performance under any operating conditions. ECS integrated the following features into the Z170-Claymore board: four SATA 3 ports; one SATA Express port; a PCIe x2 M.2 port; a Realtek GigE NIC; five PCI-Express x16 slots; a 2-digit diagnostic LED display; on-board power and reset buttons; a Realtek audio solution; integrated DisplayPort and HDMI video outputs; and USB 2.0, 3.0, and 3.1 Gen2 port support.
History and Specifications
The Radeon Pro Duo has had an interesting history. AMD originally showed it as an unbranded, dual-GPU PCB during E3 2015 last June, touting it as the ultimate graphics card for both gamers and professionals. At that time, the company thought an October launch was feasible, but that clearly didn't work out. When pressed for information in the Oct/Nov timeframe, AMD said it had delayed the product into Q2 2016 to better align with the launch of the VR systems from Oculus and HTC/Valve.
During a GDC press event in March, AMD finally unveiled the Radeon Pro Duo brand, but it was also walking back the idea of the dual-Fiji beast being aimed at the gaming crowd, even partially. Instead, the company talked up the benefits for game developers and content creators, such as its 8192 stream processors for offline rendering, or its use in helping game devs implement and improve multi-GPU support in upcoming games.
Anyone who pays attention to the graphics card market can see why AMD would make this positional shift with the Radeon Pro Duo. The Fiji architecture is on the way out, with Polaris due in June by AMD's own proclamation. At $1500, the Radeon Pro Duo will stand in stark contrast to the prices of the Polaris GPUs this summer, and it sits well above the price of any part in NVIDIA's GeForce line. And, though CrossFire has made drastic improvements over the last several years thanks to new testing techniques, the multi-GPU ecosystem is going through a major shift with both DX12 and VR bearing down on it.
So yes, the Radeon Pro Duo has both RADEON and PRO right there in the name. What's a respectable PC Perspective graphics reviewer supposed to do with a card like that when it finds its way into the office? Test it, of course! I'll take a look at a handful of recent games as well as a new feature that AMD has integrated with 3DS Max, called FireRender, to showcase some of the professional chops of the new card.
Introduction and First Impressions
The NZXT Manta is a mini-ITX enclosure that boasts better than average room for components and cooling, and is packaged in a rather unusual, rounded design.
There is a reason for the Manta's somewhat bulbous appearance, and it's part of a recent trend in mini-ITX enclosure design: bigger is better. While you might think that mITX is all about fitting components into the smallest enclosure possible, there have been some recent examples of cases which expand the chassis to micro-ATX sizes (or above).
The Manta from NZXT is actually large enough to be a micro-ATX case, and its total volume exceeds that of their S340 enclosure, a full ATX design (!). So why on earth would you want a mini-ITX enclosure with that much volume? Three words: cooling, cooling, and cooling.
As you can see from NZXT's graphic above, the Manta's protruding top and front panels provide the additional space needed to allow for thicker cooling setups.
Before we dive in for a closer look at the new Manta enclosure, let's take a look at the full specs from NZXT:
- Motherboard Support: mini-ITX
- Expansion Slots: 2
- Drive Bays
- Internal 3.5”: 2
- Internal 2.5”: 3
- Cooling System
- Front: 2 x 140/120mm (2 x 120mm included)
- Top: 2 x 140/120mm
- Rear: 1 x 120mm (Included)
- Radiator Support
- Front: Up to 280mm
- Top: Up to 280mm
- Rear: 120mm
- CPU Clearance: 160mm
- GPU Clearance: 363mm
- PSU Length: 363mm
- Power Supply Support: ATX
- External Electronics:
- I/O Panel LED On/Off
- 1x Audio/Mic
- USB 3.0
- Dimensions (WxHxD): 245 x 426 x 450mm (9.65 x 16.77 x 17.72 inches)
- Weight: 7.2 kg (15.87 lbs)
Our thanks to NZXT for providing the Manta enclosure for our review.
At first glance the Manta is a departure from the typical enclosure design. The rounded panels are built around a standard rectangular frame, so it's really quite conventional underneath.
The look from the front of the enclosure really shows off the rounded sides, and this will certainly not be everyone's favorite look, but anything beyond the norm tends to be divisive in this market.
Introduction, Features and Specifications
The German manufacturer be quiet! is best known for their attention to silence. They are currently introducing four new models in their entry-level Pure Power series: the Pure Power 9 400W, 500W, 600W, and 700W power supplies. Be quiet! is targeting these power supplies at budget-minded users building silent PCs, office machines, multimedia and home theater systems. The new Pure Power 9 series will replace the Pure Power L8 series.
All of the Pure Power 9 series power supplies are 80 Plus Silver certified for high efficiency (the L8 series is Bronze certified) and feature modular cables and a 120mm cooling fan. The power supplies are designed with dual +12V rails and incorporate a new active clamp and synchronous rectifier technology with zero voltage and zero current switching for increased efficiency. We will be taking a detailed look at the Pure Power 9 600W power supply in this review.
be quiet! Pure Power 9 600W PSU Key Features:
• Exceptionally quiet operation (be quiet! silence-optimized 120mm fan)
• 80 Plus Silver certified with up to 91% power conversion efficiency
• Two +12V rails and four PCI-E connectors for multi-GPU systems
• Active Clamp + Synchronous Rectification circuit design
• Modular cable management with flat ribbon-style peripheral cables
• Meets latest Intel C6/C7, ErP and Energy Star guidelines
• Active Power Factor correction (0.99) with Universal AC input
• Product conception, design and QC in Germany, manufactured in China
• 3-Year warranty
The Dual-Fiji Card Finally Arrives
This weekend, leaks of information on both WCCFTech and VideoCardz.com have revealed all the information about the pending release of AMD’s dual-GPU giant, the Radeon Pro Duo. While no one at PC Perspective has been briefed on the product officially, all of the interesting data surrounding the product is clearly outlined in the slides on those websites, minus some independent benchmark testing that we are hoping to get to next week. Based on the report from both sites, the Radeon Pro Duo will be released on April 26th.
AMD actually revealed the product and branding for the Radeon Pro Duo back in March, during its live streamed Capsaicin event surrounding GDC. At that point we were given the following information:
- Dual Fiji XT GPUs
- 8GB of total HBM memory
- 4x DisplayPort (this has since been modified)
- 16 TFLOPS of compute
- $1499 price tag
The design of the card follows the same industrial design as the reference designs of the Radeon Fury X, and integrates a dual-pump cooler and external fan/radiator to keep both GPUs running cool.
Based on the slides leaked out today, AMD has revised the Radeon Pro Duo design to include a set of three DisplayPort connections and one HDMI port. This was a necessary change as the Oculus Rift requires an HDMI port to work; only the HTC Vive has built in support for a DisplayPort connection and even in that case you would need a full-size to mini-DisplayPort cable.
The 8GB of HBM (High Bandwidth Memory) on the card is split between the two Fiji XT GPUs, just like other multi-GPU options on the market. The 350 watt power draw is exceptionally high, exceeded only by AMD's previous dual-GPU beast, the Radeon R9 295X2, which used 500+ watts, and the NVIDIA GeForce GTX Titan Z, which draws 375 watts!
Here is the specification breakdown of the Radeon Pro Duo. The card has 8192 total stream processors and 128 Compute Units, split evenly between the two GPUs. You are getting two full Fiji XT GPUs in this card, an impressive feat made possible in part by the use of High Bandwidth Memory and its smaller physical footprint.
| | Radeon Pro Duo | R9 Nano | R9 Fury | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | R9 290X |
|---|---|---|---|---|---|---|---|---|
| GPU | Fiji XT x 2 | Fiji XT | Fiji Pro | Fiji XT | GM200 | GM200 | GM204 | Hawaii XT |
| Rated Clock | up to 1000 MHz | up to 1000 MHz | 1000 MHz | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1000 MHz |
| Memory | 8GB (4GB x 2) | 4GB | 4GB | 4GB | 6GB | 12GB | 4GB | 4GB |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) x 2 | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 512-bit |
| Memory Bandwidth | 1024 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 320 GB/s |
| TDP | 350 watts | 175 watts | 275 watts | 275 watts | 250 watts | 250 watts | 165 watts | 290 watts |
| Peak Compute | 16.38 TFLOPS | 8.19 TFLOPS | 7.20 TFLOPS | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B x 2 | 8.9B | 8.9B | 8.9B | 8.0B | 8.0B | 5.2B | 6.2B |
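The bandwidth figures in the table follow directly from the memory interface specs. As a quick sketch of the arithmetic (the function name is mine; it assumes HBM's double-data-rate signaling, i.e. two transfers per memory clock):

```python
# Rough sanity check of the table's HBM bandwidth figures.
# HBM transfers data on both clock edges, so the effective
# transfer rate is the memory clock x 2.

def hbm_bandwidth_gbs(bus_width_bits, mem_clock_mhz):
    """Peak bandwidth in GB/s for one HBM-equipped GPU."""
    return bus_width_bits * mem_clock_mhz * 2 / 8 / 1000

fiji = hbm_bandwidth_gbs(4096, 500)
print(fiji)        # 512.0 GB/s per Fiji XT GPU
print(fiji * 2)    # 1024.0 GB/s total for the dual-GPU Pro Duo
```

Note that the GDDR5 columns in the table quote effective data rates (e.g. 7000 MHz), so the doubling here applies only to the HBM entries.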
The Radeon Pro Duo has a rated clock speed of up to 1000 MHz. That’s the same clock speed as the R9 Fury and the rated “up to” frequency on the R9 Nano. It’s worth noting that we did see a handful of instances where the R9 Nano’s power limiting capability resulted in some extremely variable clock speeds in practice. AMD recently added a feature to its Crimson driver to disable power metering on the Nano, at the expense of more power draw, and I would assume the same option would work for the Pro Duo.
Intro and Xbox One
Introduction to Remote Streaming
The ability to play console games on the PC is certainly nothing new. A wide range of emulators have long offered PC owners access to thousands of classic games. But the recent advent of personal game streaming gives users the ability to legally enjoy current generation console games on their PCs.
Both Microsoft and Sony now offer streaming from their respective current generation consoles to the PC, but via quite different approaches. For PC owners contemplating console streaming, we set out to discover how each platform works and compares, what level of quality discerning PC gamers can expect, and what limitations and caveats console streaming brings. Read on for our comparison of Xbox One Streaming in Windows 10 and PS4 Remote Play for the PC and Mac.
Xbox One Streaming in Windows 10
Xbox One Streaming was introduced alongside the launch of Windows 10 last summer, and the feature is limited to Microsoft's latest (and last?) operating system via its built-in Xbox app. To get started, you first need to enable the Game Streaming option in your Xbox One console's settings (Settings > Preferences > Game DVR & Streaming > Allow Game Streaming to Other Devices).
Once that's done, head to your Windows 10 PC, launch the Xbox app, and sign in with the same Microsoft account you use on your Xbox One. By default, the app will offer to sign you in with the same Microsoft account you're currently using for Windows 10. If your Xbox gamertag profile is associated with a different Microsoft account, just click Microsoft account instead of your current Windows 10 account name to sign in with the correct credentials.
Note, however, that as part of Microsoft's relentless efforts to get everyone in the Virgo Supercluster to join the online Microsoft family, the Xbox app will ask those using a local Windows 10 account if they want to "sign in to this device" using the account associated with their Xbox gamertag, thereby creating a new "online" account on your Windows 10 PC tied to your Xbox account.
If that's what you want, just type your current local account's password and click Next. If, like most users, you intentionally created your local Windows 10 account and have no plans to change it, click "Sign in to just this app instead," which will allow you to continue using your local account while still having access to the Xbox app via your gamertag-associated online Microsoft account.
Once you're logged in to the Xbox app, find and click on the "Connect" button in the sidebar on the left side of the window, which will let you add your Xbox One console as a device in your Windows 10 Xbox app.
A Very Familiar Look and Feel
Released alongside the launch of Windows 8 in October 2012, the original Lenovo IdeaPad Yoga 13 was a revolutionary device. While Microsoft's initial vision for a touch-enabled Windows may not have panned out exactly as they wanted, people still found utility in 2-in-1 devices like the Yoga. In the years since, similar devices from companies like HP and Dell have arisen, but consumers ultimately migrated towards Lenovo's offerings.
The Yoga line has seen several drastic changes since its inception in 2012. Industrial design changes like the Watchband Hinge introduced on the Yoga 3 Pro, and the spinning off of Yoga from the IdeaPad brand into its own family this generation with the Yoga 900, point towards the longevity of this 2-in-1 design.
Today we are taking a look at the most affordable option in the Yoga family, the Lenovo Yoga 700.
27 notebooks can't be wrong
A month or so back, I had a friend come to me asking for advice on which gaming notebook he should purchase. He had specific needs that were tailored to a portable gaming machine: he wanted to have a single machine for home and mobile use, he wanted to be able to game while traveling and he had a pretty reasonable budget. As the "guy that runs the gaming hardware website" I was expected to have an answer...immediately. But I didn't. As it turns out, dissecting and digesting the gaming notebook field is pretty complex.
I sent a note to MSI, offering to build a video and a short story around its products if they sent me one of each line of gaming notebooks they sold. Honestly, I didn't expect them to be able to pull it together, but just a couple of weeks later a handful of large boxes arrived, and we were staring at a set of six powerful gaming notebooks to analyze.
| | GE62 Apache Pro-014 | GS40 Phantom-001 | GS60 Ghost Pro-002 | GS72 Stealth Pro 4K-202 | GT72S Dominator Pro G-220 | GT80S Titan SLI-002 |
|---|---|---|---|---|---|---|
| Screen | 15.6-in 1080p | 14-in 1080p | 15.6-in 1080p | 17.3-in 4K | 17.3-in 1080p G-Sync | 18.4-in 1080p |
| CPU | Core i7-6700HQ | Core i7-6700HQ | Core i7-6700HQ | Core i7-6700HQ | Core i7-6820HK | Core i7-6820HK |
| GPU | GTX 960M 2GB | GTX 970M 3GB | GTX 970M 6GB | GTX 970M 3GB | GTX 980M 8GB | GTX 980M 8GB SLI |
| Storage | 128GB M.2 SATA | 128GB PCIE SSD | 128GB PCIE SSD | 256GB PCIE SSD | 256GB PCIE RAID SSD | 256GB PCIE RAID SSD |
| Optical | DVD Super-multi | None | None | None | Blu-ray Burner | Blu-ray Burner |
| Display Output | HDMI 1.4 | | | | | |
| Connectivity | USB 3.1 Type-C, USB 3.0 x 2, USB 2.0 x 1 | USB 3.0 x 2 | USB 3.0 x 2 | USB 3.1 x 2, USB 3.0 x 2 | USB 3.0 x 6 | USB 3.0 x 5 |
| Dimensions | 15.07-in x 10.23-in x 1.06-in | 13.58-in x 9.65-in x 0.87-in | 15.35-in x 10.47-in x 0.78-in | 16.47-in x 11.39-in x 0.78-in | 16.85-in x 11.57-in x 1.89-in | 17.95-in x 13.02-in x 1.93-in |
| Weight | 5.29 pounds | 3.75 pounds | 4.2 pounds | 5.7 pounds | 8.4 pounds | 9.9 pounds |
MSI sent this collection along as it closely matches the entire range of available options in its gaming notebook line, without actually sending us ALL 27 OF THE AVAILABLE SKUs! Yes, twenty-seven.
MSI GS40 Phantom
In the video below, I'll walk through the discussion of each of the series of notebooks that MSI offers for gamers, what the prevailing characteristics are for each and what kind of consumer should be most interested in it. I also discuss the specifics of each of the models we received for the project as well as getting into the performance deltas between them.
MSI GS72 Stealth Pro 4K
- MSI GE Series
- The entry level of gaming notebooks, available with both 15.6 and 17.3-in 1080p screens and limited to GTX 970M or GTX 960M GPUs. You still get 16GB of memory, SSDs in MOST systems, Killer Networking hardware, and SteelSeries keyboards; weights range from 5.29 to 5.95 pounds.
- MSI GS Series
- Varies in screen size from 14-in to 17.3-in but the focus here is on slimmer designs. Both 1080p and 4K screens are available, though you are still maxing out at a GTX 970M graphics solution. 16GB of RAM, NVMe PCIe SSDs are standard, with available models as thin as 0.78-inches and as light as 3.75 pounds.
- MSI GT72 Series
- These focus on performance per dollar, packing maximum single-GPU performance into the chassis. They all have 17-in screens with available G-Sync integration, and GPUs from the GTX 970M up to the full GTX 980. You get 16-32GB of memory, SSDs across the board, optical drives, Thunderbolt, and six USB 3.0 ports, but GT72 systems are bigger and heavier to accommodate all this.
- MSI GT80 Series
- These are for the crazy enthusiasts only, all of which include SLI configurations of GTX 970M, 980M, or 980 GPUs. An 18.4-in 1080p screen is the only option for your display, but you get 16-64GB of memory, RAID-enabled SSD configurations, Blu-ray burners, Thunderbolt, five USB 3.0 ports, and a friggin Cherry MX Brown mechanical keyboard!
After going through this project, here are a few recommendations I would have for users looking to pick up an MSI gaming notebook.
- Best Gaming Value
- GT72 Dominator G-831 - This combines the larger form factor with a GTX 970M GPU, a 17.3-in 1080p screen, 16GB of memory, and a 128GB SSD, priced at $1599. I think this is a good balance of cost and GPU horsepower.
- Looking for a Slimmer Design
- GS70 Stealth Pro-006 - For $1699 you lose the optical drive from the above GT72, but get a lighter and thinner design. You have the same technical horsepower, GTX 970M, Core i7 processor, etc., but the integrated fans will likely be noticeably louder as they expel heat from the narrower chassis.
- If you need more performance
- GT72 Dominator Pro G-034 - With a jump from the $1599 GT72 above to $2099, this model gets you a GTX 980M and a 256GB SSD. Based on the performance metrics I ran, that should net you another 40-50% of GPU horsepower.
Let me know if you have any questions or comments about these machines and I'll do my best to answer them!
Introduction, Features and Specifications
Phanteks currently sells cases, CPU coolers, cooling fans, and PC accessories. They recently added the Eclipse Series to their case lineup, which includes two models, the Eclipse P400 and the Eclipse P400S (silent edition). Both the P400 and P400S are available with black, white, or gray finishes and can be purchased with or without a side window. We will be taking a detailed look at the Phanteks Eclipse P400S ATX mid-tower, closed-panel case (no side window) in this review.
Phanteks Eclipse P400S Silent Edition ATX Mid-Tower Case
Satin Black, Glacier White, or Anthracite Gray
with, or without a side window
The Eclipse P400S silent edition case comes with an integrated 3-speed fan controller and two quiet 120mm fans (one intake and one exhaust). The P400S incorporates sound dampening panels on the front, top and both sides. And in addition to quiet cooling, the Eclipse P400 and P400S cases feature selectable, 10-color RGB LED lights at the bottom of the front panel for some interesting lighting effects. Two internal 2.5” SSD bays and two internal 3.5” HDD bays are included but there are no 5.25” external drive bays.
Phanteks Eclipse P400S ATX Mid-Tower Case Key Features:
• Mid-Tower ATX enclosure (WxHxD, 210x465x470mm, 8.3x18.3x18.5”)
• Supports E-ATX, ATX, Micro-ATX and Mini-ITX motherboards
• Very quiet case for noise sensitive applications
• Sound dampening panels on front, sides, and top
• Easily removed dust filters on front, top and bottom panels
• Two included case fans (120mm intake and 120mm exhaust)
• Three-speed fan controller included
• (2) USB 3.0, mic and headphone jacks on the top I/O panel
• Two internal 3.5” HDD / 2.5” SSD trays
• Two internal 2.5” SSD mounting brackets behind mobo tray
• Tool-free mounting for 3.5” HDDs
• Up to 395mm (15.2”) clearance for long graphics cards
• Up to 280mm (11.0”) clearance (with optional 3.5” HDD cages installed)
• Up to 160mm (6.3”) of space for tall CPU coolers
• Price: $79.99 USD
Introduction and First Impressions
Edifier might not be a household name, but the maker of speakers and headphones has been around for 20 years now, having been formed in 1996 in Beijing, China. More recently (2011), Edifier made news by purchasing Stax, the famous Japanese electrostatic headphone maker. The move was made to 'improve Edifier's position' in the headphone market, and with the Stax name attached it could only raise awareness for the brand in the high-end audio community.
But Edifier does not play in the same market as Stax, whose least expensive current offering (the SR-003MK2) is still $350. Edifier's products range from earbuds starting at $19 (the H210) to their larger over-ear headphones (H850) at $79. In between rests the smaller over-ear H840, a closed-back monitor headphone 'tuned by Phil Jones of Pure Sound' that Edifier claims offers a 'natural' audio experience. The price? MSRP is $59.99 but Edifier sells the H840 for only $39.99 on Amazon.
"Developed with an electro-acoustic unit on the basis of the coil, these Hi-Fi headphones provide life like sound. The carefully calibrated balance between treble and bass makes Edifier H840 the perfect entry level monitor earphones."
At the price, these could be a compelling option for music, movies, and gaming - depending on how they sound. In this review I'll attempt to describe my experience with these headphones, as well as one can using text. (I will also attempt not to write a book in the process!)
Introduction and First Impressions
Today we’re looking at an enclosure from VIVO, a new company on the scene that has created the Titan mid-tower to enter the enthusiast case market. We’ll see how it stacks up in an already crowded field.
A search on Amazon for enclosures will turn up the usual suspects, from Antec to Thermaltake (with BitFenix, Corsair, In Win, NZXT, Lian Li, Phanteks, SilverStone, and others in between). And right there in those search results is VIVO. Their Athena mid-tower is a nice-looking budget enclosure that sells for only $54.99, and with the Titan, VIVO offers a more understated design and some modern conveniences.
The Titan is spacious, with an open internal layout that places drive storage behind and below the motherboard tray, a common trend (Corsair’s Carbide 400C and the NZXT H440 have similar layouts). The cost of such a design (as with the aforementioned competitors) is a reduction in drive support, as only two 3.5-inch and a single 2.5-inch drive bay are included (with support for an additional pair of SSDs inside the case). This trend has its detractors, to be sure, but if your needs are limited to an SSD and a pair of hard drives, you’ll be just fine - and the Titan offers a pair of 5.25-inch bays, if desired.
93% of a GP100 at least...
NVIDIA has announced the Tesla P100, the company's newest (and most powerful) accelerator for HPC. Based on the Pascal GP100 GPU, the Tesla P100 is built on 16nm FinFET and uses HBM2.
NVIDIA provided a comparison table, to which we have added what we know about a full GP100:
| | Tesla K40 | Tesla M40 | Tesla P100 | Full GP100 |
|---|---|---|---|---|
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) | GP100 (Pascal) |
| FP32 CUDA Cores / SM | 192 | 128 | 64 | 64 |
| FP32 CUDA Cores / GPU | 2880 | 3072 | 3584 | 3840 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 | 1920 |
| Base Clock | 745 MHz | 948 MHz | 1328 MHz | TBD |
| GPU Boost Clock | 810/875 MHz | 1114 MHz | 1480 MHz | TBD |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 | 4096-bit HBM2 |
| Memory Size | Up to 12 GB | Up to 24 GB | 16 GB | TBD |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB | TBD |
| Register File Size / SM | 256 KB | 256 KB | 256 KB | 256 KB |
| Register File Size / GPU | 3840 KB | 6144 KB | 14336 KB | 15360 KB |
| TDP | 235 W | 250 W | 300 W | TBD |
| Transistors | 7.1 billion | 8 billion | 15.3 billion | 15.3 billion |
| GPU Die Size | 551 mm2 | 601 mm2 | 610 mm2 | 610 mm2 |
| Manufacturing Process | 28 nm | 28 nm | 16 nm | 16 nm |
This table is designed for developers that are interested in GPU compute, so a few variables (like ROPs) are still unknown, but it still gives us a huge insight into the “big Pascal” architecture. The jump to 16nm allows for about twice the number of transistors, 15.3 billion, up from 8 billion with GM200, with roughly the same die area, 610 mm2, up from 601 mm2.
A full GP100 processor will have 60 shader modules (SMs), compared to GM200's 24, although Pascal puts half as many FP32 cores in each SM. The GP100 part listed in the table above is actually partially disabled, cutting four of the sixty total. This leads to 3584 single-precision (32-bit) CUDA cores, up from 3072 in GM200. (The full GP100 will have 3840 of these FP32 CUDA cores -- but we don't know when or where we'll see that.) The base clock is also significantly higher than Maxwell's, 1328 MHz versus ~1000 MHz for the Titan X and 980 Ti, although Ryan has overclocked those GPUs to ~1390 MHz with relative ease. This is interesting, because even though 10.6 TeraFLOPS is amazing, it's only about 20% more than what GM200 could pull off with an overclock.
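The core counts and the 10.6 TFLOPS figure can be reproduced from the SM arithmetic above. A quick sketch (the helper function is mine; peak rates assume one fused multiply-add, counted as two FLOPS, per core per clock):

```python
# GP100 has 60 SMs in the full chip; the Tesla P100 ships with 56 enabled.
FP32_CORES_PER_SM = 64
FP64_CORES_PER_SM = 32   # GP100's 1:2 FP64 ratio

def peak_tflops(sms, cores_per_sm, boost_mhz):
    # cores x clock x 2 FLOPS (one FMA) per cycle, converted to TFLOPS
    return sms * cores_per_sm * boost_mhz * 2 / 1e6

print(56 * FP32_CORES_PER_SM)                               # 3584 FP32 cores
print(round(peak_tflops(56, FP32_CORES_PER_SM, 1480), 1))   # 10.6 TFLOPS FP32
print(round(peak_tflops(56, FP64_CORES_PER_SM, 1480), 1))   # 5.3 TFLOPS FP64
```

The same formula with all 60 SMs enabled gives roughly 11.4 TFLOPS at the same boost clock, hinting at what a fully enabled GP100 could deliver.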
Introduction and Background
VR is rapidly gaining steam lately with the recent launch of several capable platforms. I’ve briefly sampled the various iterations of development kits and pre-release units coming through our office, and understanding how they tracked the headset position was relatively easy. Then we got to play with an HTC Vive, and things got a bit more interesting. The Vive is a ‘whole room’ VR experience. You’re not sitting at a desk with a game controller. Instead, you are holding a pair of controllers that behave more like extensions of yourself (once you get used to them, that is). Making all of this work took some extra pieces included with the kit, and the electronics technician in me was dying to know just what made this thing tick. I’d imagine other readers of this site might feel the same, so I thought it appropriate to do some digging and report my findings here.
Before diving straight into the HTC Vive, a brief history lesson of game system positional tracking is in order.
I'll start with the Wii Remote controllers, which had a front-mounted IR camera that ‘saw’ a pair of IR LED banks mounted in the ‘Sensor Bar’ – ironic naming, as the ‘sensor’ was actually in the Remote. This setup let you point a Wii Remote at the television and use it as a mouse. Due to the limited number of points in use, the system could not determine the Wii Remote's location within the room; it could only derive a vector relative to the Sensor Bar itself. Wii Remotes also contained accelerometers, but those were typically not used to improve pointing accuracy (though they were used to determine if the remote was inverted, since the Sensor Bar had only two light sources).
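To make the vector-not-position distinction concrete, here is a minimal sketch of Wii-style pointer math. The function and sample coordinates are hypothetical, and a 1024x768 camera resolution is assumed; the idea is simply that two tracked points yield an aim direction and a roll angle, nothing more.

```python
import math

# The IR camera reports pixel coordinates for the two Sensor Bar lights.
# From two points we can recover where the remote is aimed and how it is
# rolled -- but not where it sits in the room.

CAM_W, CAM_H = 1024, 768   # assumed IR camera resolution

def pointer_state(left_blob, right_blob):
    (x1, y1), (x2, y2) = left_blob, right_blob
    # Midpoint of the two lights -> aim point, normalized to [-1, 1]
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    aim = (mx / CAM_W * 2 - 1, my / CAM_H * 2 - 1)
    # Angle between the lights -> roll of the remote
    roll = math.atan2(y2 - y1, x2 - x1)
    return aim, roll

aim, roll = pointer_state((462, 384), (562, 384))
print(aim)   # (0.0, 0.0) -> pointed dead center
print(roll)  # 0.0        -> remote held level
```

Nothing here recovers depth: a remote near the TV and one across the room can produce the same midpoint, which is why the result is a vector relative to the Sensor Bar rather than a position.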
The Oculus Rift essentially reversed the technology used in the old Nintendo Wii Remotes. The headset position and orientation are determined by a desk-mounted IR camera which ‘looks’ at IR LEDs mounted on the headset. The system, dubbed ‘Constellation’, can decode the pattern (seen faintly in the above photo) and determine the headset's position and orientation in space.
Even the sides and rear of the headset have a specific LED pattern to help the camera lock on to someone looking away from it. If the IR camera sees the triangular pattern on the headset strap, it can conclude that the viewer is looking behind them.
The HTC Vive takes a different approach here, since it launched with a headset and two controllers that all need to be tracked in space simultaneously. The Wii Remote style idea would only work with a much larger grid of sensor bars (or QR codes) peppered all over the room, so that idea was out. The Rift’s Constellation system might have a hard time identifying unique light patterns on multiple devices that could be far away and possibly occluding each other. So if having cameras on the headset and controllers is out, and having a camera on the desk is out, what’s left?
Why things are different in VR performance testing
It has been an interesting past several weeks and I find myself in an interesting spot. Clearly, and without a shred of doubt, virtual reality, more than any other gaming platform that has come before it, needs an accurate measure of performance and experience. With traditional PC gaming, if you dropped a couple of frames, or saw a slightly out of sync animation, you might notice and get annoyed. But in VR, with a head-mounted display just inches from your face taking up your entire field of view, a hitch in frame or a stutter in motion can completely ruin the immersive experience that the game developer is aiming to provide. Even worse, it could cause dizziness, nausea and define your VR experience negatively, likely killing the excitement of the platform.
My conundrum, and the one that I think most of our industry rests in, is that we don’t yet have the tools and ability to properly quantify the performance of VR. In a market and a platform that so desperately needs to get this RIGHT, we are at a point where we are just trying to get it AT ALL. I have read and seen some other glances at the performance of VR headsets like the Oculus Rift and the HTC Vive released today, but honestly, all are missing the mark at some level. Using tools built for traditional PC gaming environments just doesn’t work, and experiential reviews talk about what the gamer can expect to “feel” but lack the data and analysis to back it up and to help point the industry in the right direction to improve in the long run.
With final hardware from both Oculus and HTC / Valve in my hands for the last three weeks, I have, with the help of Ken and Allyn, been diving into the important question of HOW do we properly test VR? I will be upfront: we don’t have a final answer yet. But we have a direction. And we have some interesting results to show you that should prove we are on the right track. But we’ll need help from the likes of Valve, Oculus, AMD, NVIDIA, Intel and Microsoft to get it right. Based on a lot of discussion I’ve had in just the last 2-3 days, I think we are moving in the correct direction.
So why don’t our existing tools work for testing performance in VR? Things like Fraps, Frame Rating and FCAT have revolutionized performance evaluation for PCs – so why not VR? The short answer is that the gaming pipeline changes in VR with the introduction of two new SDKs: Oculus and OpenVR.
Though the two differ, the key is that they intercept the rendered frames between the GPU and the screen. When you attach an Oculus Rift or an HTC Vive to your PC, it does not show up as a display in your system; this is a change from the first developer kits from Oculus years ago. Now the headsets are driven by what’s known as “direct mode.” This mode offers an improved user experience and lets the Oculus and OpenVR runtimes handle quite a bit of functionality for game developers. It also means there are actions being taken on the rendered frames after the last point at which we can monitor them. At least for today.
Introduction and Technical Specifications
In this follow-up discussion on Thermaltake's Core X9 E-ATX Cube Chassis, we look at advanced setup and configuration features, and just how much stuff you can cram into this massive case. For an in-depth overview of the case and a walk through of its features, please see our original review of the case here.
Courtesy of Thermaltake
The Thermaltake Core X9 E-ATX Cube Chassis is one of the largest and most configurable cases Thermaltake has developed. The case is roughly cube shaped, with a steel and plastic construction. The height and depth of the unit allow the Core X9 to support up to quad-fan radiators mounted to its top or sides and up to a tri-fan radiator in front. At an MSRP of $169.99, the Core X9 E-ATX Cube Chassis carries a competitive price in light of its size and configurability.
Courtesy of Thermaltake
Courtesy of Thermaltake
The Core X9 case was designed to be fully modular, supporting a variety of build configurations to adapt to whatever build style the end user can dream up. The case comes with a variety of mounts for attaching fans or liquid cooling radiators to the top, side, or bottom of the case. Until you can accurately visualize just how many radiators and fans this case supports, you really don't have a feel for the immense size of the Core X9. From front to back, the case supports 4 x 120mm fans or a 480mm radiator along either of its lower sides or in the dual top mounts. On top, you can actually mount a total of eight 120mm fans or dual 480mm radiators if you so choose. And that doesn't take into account the additional two 140mm fans that can be mounted in the upper and lower sections of the case's rear panel, nor the three 120mm fans, dual 200mm fans, or 360mm radiator that can be mounted to the case's front panel.
Seeing Ryan transition from being a long-time Android user over to iOS late last year has had me thinking. While I've had hands on with flagship phones from many manufacturers since then, I haven't actually carried an Android device with me since the Nexus S (eventually, with the 4.0 Ice Cream Sandwich upgrade). Maybe it was time to go back in order to gain a more informed perspective of the mobile device market as it stands today.
So that's exactly what I did. When we received our Samsung Galaxy S7 review unit (full review coming soon, I promise!), I decided to go ahead and put a real effort forth into using Android for an extended period of time.
Full disclosure: I am still carrying my iPhone with me, since we received a T-Mobile locked unit and my personal number is on Verizon. However, I have been using the S7 for everything but phone calls, and the occasional text message to people who only have my iPhone number.
Now, one of the questions you might be asking yourself right now is why I chose the Galaxy S7, of all devices, to make this transition with. Most Android aficionados would probably insist that I choose a Nexus device to get the best experience and the one that Google intends to provide when developing Android. While these people aren't wrong, I decided that I wanted to go with a more popular device as opposed to the more niche Nexus line.
Whether you like Samsung's approach or not, the fact is that they sell more Android devices than anyone else, and the Galaxy S7 will be their flagship offering for the next year or so.
Introduction and Features
Earlier this year we reviewed the EVGA 750W GQ power supply and found it to be a worthy addition to EVGA’s already plentiful power supply lineup. Today we are taking a detailed look at another member of the GQ series, the 650W GQ. It’s always nice to be able to compare different models of the same series for consistency. The GQ series is aimed at price conscious consumers who want good value while still maintaining many of the performance features found in EVGA’s premium models. The GQ Series contains four models ranging from 650W up to 1000W: the EVGA 650 GQ, 750 GQ, 850 GQ and 1000 GQ.
All of the GQ series power supplies are 80 Plus Gold certified for high efficiency and feature modular cables, high-quality Japanese brand capacitors, and a quiet 135mm cooling fan with a fluid dynamic bearing. The GQ series power supplies are NVIDIA SLI and AMD Crossfire Ready and are backed by a 5-year warranty.
EVGA 650W GQ PSU Key Features:
• Fully modular cables to reduce clutter and improve airflow
• 80 PLUS Gold certified, with up to 90%/92% efficiency (115VAC/240VAC)
• 100% Japanese brand capacitors ensure long-term reliability
• Quiet 135mm Fluid Dynamic bearing fan for reliability and quiet operation
• ECO Intelligent Thermal Control allows silent, fan-less operation at low power
• NVIDIA SLI & AMD Crossfire Ready
• Ready for 4th Generation Intel Core Processors (C6/C7 Idle Mode)
• Compliant with ErP Lot 6 2013 Requirement
• Active Power Factor correction (0.99) with Universal AC input
• 5-Year warranty and EVGA Customer Support
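The 80 Plus Gold efficiency numbers above translate directly into wall draw and waste heat: AC power from the outlet equals DC load divided by efficiency, and the difference is dissipated inside the unit as heat. A rough sketch using the quoted 90% figure at 115 VAC (efficiency actually varies with load, so this is a simplification):

```python
def wall_draw(dc_load_w, efficiency):
    """AC power drawn from the wall for a given DC load."""
    return dc_load_w / efficiency

def waste_heat(dc_load_w, efficiency):
    """Power lost as heat inside the PSU: wall draw minus DC output."""
    return wall_draw(dc_load_w, efficiency) - dc_load_w

# At the full 650 W rated load and the quoted 90% efficiency on 115 VAC:
print(wall_draw(650, 0.90))   # ~722 W pulled from the wall
print(waste_heat(650, 0.90))  # ~72 W dissipated as heat by the PSU
```

That ~72 W of waste heat is what the 135mm fan has to deal with at full load, and why higher efficiency also tends to mean quieter operation.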
EVGA was founded in 1999 with headquarters in Brea, California. They continue to specialize in producing NVIDIA based graphics adapters and Intel based motherboards and keep expanding their PC power supply product line, which currently includes thirty-eight models ranging from the high-end 1,600W SuperNOVA T2 to the budget minded EVGA 400W power supply.
(Courtesy of EVGA)
As you can see in the table above, EVGA currently offers six different variations of 650W power supplies. Let’s get started with the review and see how the 650 GQ compares to the 750 GQ.