
It's been a busy year for phones so far ... who is coming out on top?

Subject: Mobile | August 11, 2015 - 01:39 PM |
Tagged: smartphones, Moto G, galaxy s6, LG G4, iphone 6, HTC One M9, blackphone

The Inquirer has taken a look back at the past year's smartphone releases with an eye towards providing a resource to help you compare them.  So far there are 11 phones in the round-up, including the somewhat maligned Blackphone, which was intended to be completely secure but turned out to be a little less invulnerable than advertised.  An overview of each phone covers basic specifications such as screen size, resolution, and often the processor inside.  As you would expect, each entry also links to the corresponding review, and The Inquirer plans to update the article as new phones are released.

blackphone-new-imagery-540x334.png

"THE SMARTPHONE MARKET is becoming increasingly competitive, make it harder and harder for buyers to choose which handset is right for them."

Here are some more Mobile articles from around the web:

Mobile

Source: The Inquirer

Windows 10 for everything arrives

Subject: General Tech | August 11, 2015 - 12:52 PM |
Tagged: windows 10, iot, raspberry pi 2

The slimmed-down version of Windows 10 for devices such as the Raspberry Pi 2 has arrived, and it is royalty free for makers, available right here.  The Register describes some problems with the current release, mostly incompatibility with certain peripherals but also occasional video crashes and networking issues.  Seeing as this particular incarnation of the OS is designed for creative minds tinkering on custom hardware, the issues are not unexpected, nor should you consider them proof that the OS is unusable if you plan on tinkering with it.  You will still need a full PC running Windows 10 and Visual Studio 2015 to develop for it, nothing new but certainly worth noting.  Check out more on the Universal Windows Platform and Windows 10 for the IoT at The Register.

RPi2_0.png

"Microsoft has shipped the public release of Windows 10 IoT Core, the pared-down version of Windows 10 for embedded devices, including the Intel MinnowBoard Max and the Raspberry Pi 2."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Overclock any NVIDIA GPU on Desktop and Mobile with a New Utility

Subject: Graphics Cards | August 10, 2015 - 06:14 PM |
Tagged: overclocking, overclock, open source, nvidia, MSI Afterburner, API

An author called "2PKAQWTUQM2Q7DJG" (likely not a real name) has published a fascinating little article today on his/her Wordpress blog entitled, "Overclocking Tools for NVIDIA GPUs Suck. I Made My Own". What it contains is a full account of the process of creating an overclocking tool beyond the constraints of common utilities such as MSI Afterburner.

By probing MSI's OC utility with OllyDbg (an x86 "assembler level analysing debugger"), the author was able to track down how Afterburner works.

nvapiload.png

“nvapi.dll” definitely gets loaded here using LoadLibrary/GetModuleHandle. We’re on the right track. Now where exactly is that lib used? ... That’s simple: with the program running and the realtime graph disabled (it polls NvAPI constantly adding noise to the mass of API calls), we place a memory breakpoint on the .Text memory segment of the NVapi.dll inside MSI Afterburner’s process... Then we set the sliders in the MSI tool to get some negligible GPU underclock and hit the “apply” button. It breaks inside NvAPI… magic!

After further explaining the process and sharing the source code for an overclocking utility, the author goes on to show the finished product in the form of a command-line utility.
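For anyone wondering how a third-party tool even gets a foothold in the driver, here is a minimal, hypothetical C++ sketch of the entry point the article keeps coming back to. nvapi.dll (nvapi64.dll in 64-bit processes) exports essentially one named function, nvapi_QueryInterface, and every other call is fetched from it by numeric ID; the ID below is a placeholder, since the real clock-control IDs are undocumented and are exactly what the blog post had to dig out of Afterburner.

    // Hypothetical sketch, not the article's code. The function ID is a placeholder.
    #include <windows.h>
    #include <cstdio>

    typedef void* (__cdecl *NvQueryInterface_t)(unsigned int id);

    int main() {
        HMODULE nvapi = LoadLibraryA("nvapi64.dll");   // "nvapi.dll" in a 32-bit process
        if (!nvapi) { std::printf("NvAPI library not found\n"); return 1; }

        auto NvQueryInterface = reinterpret_cast<NvQueryInterface_t>(
            GetProcAddress(nvapi, "nvapi_QueryInterface"));
        if (!NvQueryInterface) { std::printf("nvapi_QueryInterface export missing\n"); return 1; }

        const unsigned int PLACEHOLDER_FUNCTION_ID = 0x0;   // not a real NvAPI ID
        void* fn = NvQueryInterface(PLACEHOLDER_FUNCTION_ID);
        std::printf("queried function pointer: %p\n", fn);

        FreeLibrary(nvapi);
        return 0;
    }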

overclock.png

There is a link to the finished version of this utility at the end of the article, along with the entire process and all of the source code. It makes for an interesting read (even for the painfully inept at programming, such as myself), and the provided download for this mysterious overclocking utility (disguised as a JPG image file, no less) makes it both tempting and a little dubious. Does it really allow overclocking any NVIDIA GPU, including mobile parts? What could be the harm in trying? In all seriousness, though, since some of what was uncovered in the article is no doubt proprietary, how long will this information remain available?

It would probably be wise to follow the link to the Wordpress page ASAP!

Source: Wordpress

Another look at Shuttle's DS57U barebones SFF system

Subject: Systems | August 10, 2015 - 03:52 PM |
Tagged: Celeron 3205U, DS57U, shuttle, SFF

Madshrimps have just wrapped up testing the Intel Celeron 3205U powered Shuttle DS57U, an SFF system which can be mounted to the back of a monitor via VESA or placed beside your monitor in the included stand.  The presence of two serial ports, WOL, and resume after power outage means this little system could also handle industrial or POS duties.  It is worth noting that the system only supports 1.35V SODIMMs, so make sure to choose the proper RAM to avoid disappointment.  Check out the full review here; if you like the case but not the CPU, there are i3, i5, and even i7 models for you to consider.

intro.jpg

"Shuttle has built the DS57U inside a proven chassis, which takes quite little space and succeeds to cool the internal components without the need of extra fans; one of the case laterals is acting like a huge heatsink and in this case it only remains warm even when the system is stressed to the max."

Here are some more Systems articles from around the web:

Systems

 

Source: Mad Shrimps

Meet the ADATA XPG SX930 family of SSDs

Subject: Storage | August 10, 2015 - 03:07 PM |
Tagged: adata, XPG SX930, JMF670H

ADATA's new XPG SX930 series is aimed at enthusiasts on a budget: the 120GB model is about $65, the 240GB about $110, and the 480GB about $200.  The SSDs use the JMicron JMF670H controller, not one we have seen before, and they also have a pseudo-SLC cache which grows with the size of the drive, from 4GB to 8GB to 16GB on the 480GB model.  The SSD Review tested all three drives and found that the advertised speeds of 550MB/s read and 460MB/s write were more or less accurate, and the drives did fairly well in the rest of their tests too.  If you need more speedy storage and are on a budget, check out the full review.

Adata-XPG-SX930-SSD-Family-1024x683.jpg

"ADATA has memory products for all sections of the market, from consumer to industrial. As of late they have released a new consumer SSD, the XPG SX930. It is marketed towards the gamer and overclocker crowd at a pretty competitive price point."

Here are some more Storage reviews from around the web:

Storage

Running a small Win7 Domain and having bandwidth issues today?

Subject: General Tech | August 10, 2015 - 12:58 PM |
Tagged: windows 10, oops, microsoft

Microsoft promised that Windows 10 would not be pushed out to computers on a domain, or would at least allow you to block the update; a claim which has turned out to be slightly less than accurate.  If you are running a Windows 7 domain which still relies on Microsoft Update as opposed to WSUS, you may have noticed some serious traffic spikes this morning.  That is because some, perhaps all, of your computers are slurping down the 3GB Windows 10 upgrade.  Check The Register for links to Microsoft and consider blocking Microsoft Update at your firewall until this has been sorted, unless you like a slow network and living dangerously.

images.jpg

"The problem is affecting domain-attached Windows 7 PCs not signed up to Windows Server Update Services (WSUS) for patches and updates, but looking for a Microsoft update instead."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register
Manufacturer: PC Perspective

It's Basically a Function Call for GPUs

Mantle, Vulkan, and DirectX 12 all claim to reduce overhead and provide a staggering increase in “draw calls”. As mentioned in the previous editorial, the way the graphics card is loaded with tasks changes drastically in these new APIs. With DirectX 10 and earlier, applications assign attributes to (what they are told is) the global state of the graphics card. After everything is configured and bound, one of a few “draw” functions is called, which queues the task in the graphics driver as a “draw call”.

Besides suggesting that just a single graphics device is to be defined, which we also mentioned in the previous article, this model implies that one thread needs to be the authority. The limitation has been known for a while, and it contributed to the meme that consoles can squeeze out all the performance they have while PCs are “too high level” for that. Microsoft tried to combat this with “Deferred Contexts” in DirectX 11. This feature allows virtual, shadow states to be built up on secondary threads and then appended to the global state, whole. It was a compromise between each thread being able to create its own commands and the legacy decision to have a single, global state for the GPU.
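For the curious, here is a rough sketch of what that compromise looks like in code (Windows-only, assuming the D3D11 SDK headers, with error handling trimmed): a deferred context records state and commands into an ID3D11CommandList, which the single "authority" thread later splices into the global state through the immediate context.

    #include <d3d11.h>
    #include <wrl/client.h>
    #pragma comment(lib, "d3d11.lib")
    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D11Device> device;
        ComPtr<ID3D11DeviceContext> immediate;   // the single "authority" context
        D3D_FEATURE_LEVEL level;
        if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                     nullptr, 0, D3D11_SDK_VERSION,
                                     &device, &level, &immediate)))
            return 1;

        // A worker thread would normally do this part: record into a deferred context.
        ComPtr<ID3D11DeviceContext> deferred;
        if (FAILED(device->CreateDeferredContext(0, &deferred)))
            return 1;

        D3D11_VIEWPORT vp = { 0.0f, 0.0f, 1280.0f, 720.0f, 0.0f, 1.0f };
        deferred->RSSetViewports(1, &vp);        // shadow state; nothing reaches the GPU yet
        // ... bind resources and issue draws here ...

        ComPtr<ID3D11CommandList> commands;
        deferred->FinishCommandList(FALSE, &commands);

        // Back on the main thread: append the recorded commands to the global state, whole.
        immediate->ExecuteCommandList(commands.Get(), FALSE);
        return 0;
    }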

Some developers experienced gains, while others lost a bit. It didn't live up to expectations.

pcper-2015-dx12-290x.png

The paradigm used to load graphics cards is the problem. It doesn't make sense anymore. A developer might not want to draw a primitive with every poke of the GPU. At times, they might want to shove a workload of simple linear algebra through it, while other requests could simply be pushing memory around to set up a later task (or to read the result of a previous one). More importantly, any thread could want to do this to any graphics device.
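In contrast, a rough mental model of the new approach (plain C++ here, not a real graphics API) is that any thread can record its own command buffer, whether it holds draw, compute, or copy work, and submit it to whichever queue on whichever device it likes; the driver's job shrinks to scheduling whatever lands in those queues.

    // Conceptual model only -- not Mantle, Vulkan, or DirectX 12 code.
    #include <iostream>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    struct Command       { std::string type, payload; };
    struct CommandBuffer { std::vector<Command> commands; };

    class Queue {
        std::mutex m;
        std::vector<CommandBuffer> submitted;
    public:
        void submit(CommandBuffer cb) {
            std::lock_guard<std::mutex> lock(m);
            submitted.push_back(std::move(cb));   // driver-side scheduling would happen here
        }
        size_t pending() {
            std::lock_guard<std::mutex> lock(m);
            return submitted.size();
        }
    };

    int main() {
        Queue graphicsQueue, copyQueue;           // one device can expose several queues

        std::thread render([&] {                  // one thread records draw work...
            CommandBuffer cb;
            cb.commands.push_back({"draw", "opaque geometry"});
            graphicsQueue.submit(std::move(cb));
        });
        std::thread streaming([&] {               // ...while another streams textures
            CommandBuffer cb;
            cb.commands.push_back({"copy", "texture upload"});
            copyQueue.submit(std::move(cb));
        });

        render.join();
        streaming.join();
        std::cout << graphicsQueue.pending() + copyQueue.pending()
                  << " command buffers submitted\n";
        return 0;
    }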

pcper-2015-dx12-980.png

The new graphics APIs allow developers to submit their tasks quicker and smarter, and it allows the drivers to schedule compatible tasks better, even simultaneously. In fact, the driver's job has been massively simplified altogether. When we tested 3DMark back in March, two interesting things were revealed:

  • AMD and NVIDIA are only a two-digit percentage apart in draw call performance
  • Both AMD and NVIDIA saw an order of magnitude increase in draw calls

Read on to see what this means for games and game development.

Khronos Group at SIGGRAPH 2015

Subject: Graphics Cards, Processors, Mobile, Shows and Expos | August 10, 2015 - 09:01 AM |
Tagged: vulkan, spir, siggraph 2015, Siggraph, opengl sc, OpenGL ES, opengl, opencl, Khronos

When the Khronos Group announced Vulkan at GDC, they mentioned that the API is coming this year, and that the date was intended to under-promise and over-deliver. Recently, fans were hoping that it would be published at SIGGRAPH, which officially began yesterday. Unfortunately, Vulkan has not been released, though it does hold a significant chunk of the news. It is also not as if DirectX 12 is holding a commanding lead at the moment: its headers have been public for only a few months, and the code samples are less than two weeks old.

khronos-2015-siggraph-sixapis.png

The organization made announcements for six products today: OpenGL, OpenGL ES, OpenGL SC, OpenCL, SPIR, and, as mentioned, Vulkan. They wanted to make their commitment to all of their standards clear. Vulkan is urgent, but some developers will still want the framework of OpenGL: bind what you need to the context, then issue a draw, and, if you do it wrong, the driver will often clean up the mess for you anyway. The briefing was structured to make it evident that OpenGL is still on their mind, which is likely why they made sure three OpenGL logos greeted me in the slide deck as early as possible. They are also gathering and closely examining feedback about who wants to use Vulkan or OpenGL, and why.

As for Vulkan, confirmed platforms have been announced. Vendors have committed to drivers on Windows 7, 8, and 10, Linux (including SteamOS), and Tizen (OS X and iOS are absent, though). Beyond all of that, Google will accept Vulkan on Android. This is a big deal, as Google, despite its open nature, has been avoiding several Khronos Group standards. For instance, Nexus phones and tablets do not have OpenCL drivers, although Google isn't stopping third parties, like Samsung and NVIDIA, from rolling it into their devices. Direct support for Vulkan should help cross-platform development and, more importantly, make better use of the many relatively slow cores in those devices' processors. This could even be of significant use for web browsers, especially on sites with a lot of simple 2D effects. Google is also contributing support from its drawElements Quality Program (dEQP), a conformance test suite that it acquired back in 2014. It is going to expand dEQP to Vulkan, so that developers will have more consistency between devices -- a big win for Android.

google-android-opengl-es-extensions.jpg

While we're not done with Vulkan, one of the biggest announcements is OpenGL ES 3.2, and it fits here nicely. Around the time that OpenGL ES 3.1 brought compute shaders to the embedded platform, Google launched the Android Extension Pack (AEP). This absorbed OpenGL ES 3.1 and added tessellation, geometry shaders, and ASTC texture compression to it. It also created more tension between Google and cross-platform developers, who felt that Google was trying to pull developers away from the Khronos Group. Today, OpenGL ES 3.2 was announced, and it includes each of the AEP features plus a few more (like “enhanced” blending). Better yet, Google will support it directly.

Next up are the desktop standards, before we finish with a resurrected embedded standard.

OpenGL gained a few new extensions. One interesting one is the ability to assign the locations of multi-samples within a pixel. There is a whole list of sub-pixel layouts, such as rotated grid and Poisson disc, and this extension apparently lets developers choose between them, as certain algorithms work better or worse with certain geometries and structures. There were probably vendor-specific extensions for this for a while, but now it is a ratified one. Another extension allows “streamlined sparse textures”, which helps manage data where the number of unpopulated entries outweighs the number of populated ones.

OpenCL 2.0 was given a refresh, too. It contains a few bug fixes and clarifications that should help adoption. C++ headers were also released, although I cannot comment much on them, as I do not know what state OpenCL 2.0 was in before now.

And this is when we make our way back to Vulkan.

khronos-2015-siggraph-spirv.png

SPIR-V, the code that runs on the GPU (or other offloading device, including the other cores of a CPU) in OpenCL and Vulkan, is seeing a lot of community support. Projects are under way to allow developers to write GPU code in several interesting languages: Python, .NET (C#), Rust, Haskell, and many more. The slide lists nine that the Khronos Group knows about, but those four are pretty interesting. Again, this means you can write code in the aforementioned languages and have it run directly on a GPU. Curiously missing is HLSL, and the President of the Khronos Group agreed that it would be a useful language. The ability to cross-compile HLSL into SPIR-V would mean that shader code written for DirectX 9, 10, 11, and 12 could be compiled for Vulkan. He expects that it won't take long for such a project to start, and one might already be under way somewhere beyond the reach of his Google searches. Regardless, those who are afraid to program in the C-like GLSL and HLSL shading languages might find C# and Python a bit more their speed, and those options seem to be arriving through SPIR-V.

As mentioned, we'll end on something completely different.

khronos-2015-siggraph-sc.png

For several years, OpenGL SC has been on hiatus. This group defines standards for graphics (and soon GPU compute) in “safety critical” applications, which for the longest time meant aircraft. The dozens of planes (which I assume means dozens of models of planes) that adopted the technology were fine with a fixed-function pipeline. It has been about ten years since OpenGL SC 1.0 launched, which was based on OpenGL ES 1.0. SC 2.0 is planned for 2016 and will be based on the much more modern OpenGL ES 2 and ES 3 APIs, which allow pixel and vertex shaders. The Khronos Group is asking for participation to direct SC 2.0, as well as a future graphics and compute API that is potentially based on Vulkan.

The devices that this platform intends to target are aircraft (again), automobiles, drones, and robots. There are a lot of ways that GPUs can help these devices, but they need a good API to certify against. It needs to withstand more than an Ouya would, because crashes could be much more literal.

Photos and Tests of Skylake (Intel Core i7-6700K) Delidded

Subject: Processors | August 8, 2015 - 05:55 PM |
Tagged: Skylake, Intel, delid, CPU die, cpu, Core i7-6700K

PC Watch, a Japanese computer hardware website, acquired at least one Skylake i7-6700K and removed the heatspreader. With access to the bare die, they took some photos and tested a few thermal compound replacements, which quantifies how good (or bad) Intel's default thermal grease is. As evidenced by the launch of Ivy Bridge and, later, Devil's Canyon, the choice of thermal interface between the die and the lid can make a fairly large difference in temperatures and overclocking.

intel-2015-skylake-delid-01.jpg

Image Credit: PC Watch

They chose the vise method for the same reason that Morry chose it in his i7-4770K delid article last year. This basically uses a slight amount of torque and external pressure or shock to pop the lid off the processor. Despite how it looks, this is considered less traumatic than using a razor blade to cut the seal, because human hands are not the most precise instruments and a slight miss could damage the PCB. PC Watch apparently needed to use a wrench to get enough torque on the vise, which is transferred to the processor as pressure.

intel-2015-skylake-delid-02.jpg

Image Credit: PC Watch

Of course, Intel could always offer enthusiasts a choice of thermal compound before the lid goes on, which would be the safest option. How about that, Intel?

intel-2015-skylake-delid-03.jpg

Image Credit: PC Watch

With the lid off, PC Watch mentioned that the thermal compound seems to be roughly the same as Devil's Canyon's, which is quite good. They also noticed that the PCB is significantly thinner than Haswell's, dropping from about 1.1mm to about 0.8mm. For the benchmarks, they tested the chip with the stock interface, an aftermarket compound called Prolimatech PK-3, and a liquid metal alloy called Coollaboratory Liquid Pro.

intel-2015-skylake-delid-04.jpg

Image Credit: PC Watch

At 4.0 GHz, PK-3 dropped the temperature by about 4 degrees Celsius, while Liquid Metal knocked it down 16 degrees. At 4.6 GHz, PK-3 continued to give a delta of about 4 degrees, while Liquid Metal widened its gap to 20 degrees. It reduced an 88 C temperature to 68 C!

intel-2015-skylake-delid-05.jpg

Image Credit: PC Watch

There are obviously limitations to how practical this is. If you were concerned about the thermal longevity of your die, you probably wouldn't forcibly pry its heatspreader off the PCB in the first place. That would be like performing surgery on yourself to remove a perfectly healthy appendix, just in case. Also, from an overclocking standpoint, heat doesn't scale linearly with frequency: twenty degrees is a huge gap, but even an extra hundred MHz could eat it up, depending on your die.

It's still interesting for those who try, though.

Source: PC Watch
Subject: General Tech
Manufacturer: Gigabyte

Killing those end of summer blues

As we approach the end of summer and the beginning of the life of Windows 10, PC Perspective and Gigabyte (along with Thermaltake and Kingston) have teamed up to bring our readers a system build guide and giveaway that is sure to get your gears turning. If you think that an X99-based system with an 8-core Intel Extreme processor, SLI graphics, a 480GB SSD, and 32GB of memory sounds up your alley...pay attention.

01.jpg

Deep in thought...

Even with the dawn of Skylake nearly upon us, there is no debate that the Haswell-E platform will continue to be the basis of the enthusiast's dream system for a long time. Lower power consumption is great, but nothing is going to top 8 cores, 16 threads, and all the PCI Express lanes you could need for expansion into faster storage and accessories. With that in mind, Gigabyte has partnered with PC Perspective to showcase the power of X99 and what a builder today can expect when putting together a system with a fairly high budget, but with lofty goals in mind as well.

Let's take a look at the components we are using today.

  Gigabyte X99 System Build
Processor: Intel Core i7-5960X - $1048
Motherboard: Gigabyte X99 Gaming 5P - $309
Memory: Kingston HyperX Fury DDR4-2666 32GB - $325
Graphics Cards: 2 x Gigabyte G1 Gaming GTX 960 2GB - $199 each
Storage: Kingston HyperX Savage 480GB SSD - $194
Case: Thermaltake Core V51 - $82
Power Supply: Thermaltake Toughpower Grand 850 watt - $189
CPU Cooler: Thermaltake Water 3.0 Extreme S - $94
Total (everything except the CPU, Amazon full cart): $1591
CPU (Intel Core i7-5960X, Amazon): $1048
Grand Total: $2639

Continue reading our system build and find out how you can WIN this PC!!

The Intel SMM bug is bad, but not that bad

Subject: General Tech | August 7, 2015 - 01:31 PM |
Tagged: fud, security, Intel, amd, x86, SMM

The SMM security hole that Christopher Domas has demonstrated (pdf) is worrying, but don't panic: it requires your system to already be compromised before you are vulnerable.  That said, once an attacker has access to SMM they can do anything they like to the computer, up to and including ensuring the machine can be reinfected even after a complete format or UEFI update.  The flaw was proven on Intel x86 machines but likely applies to AMD processors as well, since they used the same architecture around the turn of the millennium; thankfully the issue has been mitigated in recent processors.  Intel will be releasing patches for affected CPUs, although not all processors can be patched, and we have yet to hear from AMD.  You can get an overview of the issue by following the link at Slashdot, and speculate on whether this flaw was a mistake or inserted on purpose in our comment section.

logo.png

"Security researcher Christopher Domas has demonstrated a method of installing a rootkit in a PC's firmware that exploits a feature built into every x86 chip manufactured since 1997. The rootkit infects the processor's System Management Mode, and could be used to wipe the UEFI or even to re-infect the OS after a clean install. Protection features like Secure Boot wouldnt help, because they too rely on the SMM to be secure."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot

Going Beyond the Reference GTX 970

Zotac has been an interesting company to watch for the past few years.  It has made a name for itself in the small form factor community with some really interesting designs and products.  It continues down that path, but it has increasingly focused on high quality graphics cards that address a pretty wide market, providing unique products from the $40 level up through the latest GTX 980 Ti with hybrid water and air cooling for $770.  The company used to focus on reference designs, but some years back it widened its appeal by applying its own design decisions to the latest NVIDIA products.

zot_01.jpg

Catchy looking boxes for people who mostly order online! Still, nice design.

The beginning of this year saw Zotac introduce their latest “Core” brand products that aim to provide high end features to more modestly priced parts.  The Core series makes some compromises to hit price points that are more desirable for a larger swath of consumers.  The cards often rely on more reference style PCBs with good quality components and advanced cooling solutions.  This equation has been used before, but Zotac is treading some new ground by offering very highly clocked cards right out of the box.

Overall Zotac has a very positive reputation in the industry for quality and support.

zot_02.jpg

Plenty of padding in the box to protect your latest investment.

 

Zotac GTX 970 AMP! Extreme Core Edition

The product we are looking at today is the somewhat long-named AMP! Extreme Core Edition.  It is based on the NVIDIA GTX 970 chip, which features 56 ROPs, 1.75 MB of L2 cache, and 1664 CUDA cores.  The GTX 970 has of course been scrutinized heavily due to the unique nature of its memory subsystem.  While it does physically have a 256-bit bus, the last 512 MB (out of 4GB) is addressed by a significantly slower path due to shared memory controller capacity.  In theory the reference design supports up to 224 GB/sec of memory bandwidth (a 256-bit bus at 7 Gbps effective works out to 32 bytes x 7 GT/s = 224 GB/sec).  There are obviously some very unhappy people out there about this situation, but much of it could have been avoided if NVIDIA had disclosed the exact nature of the GTX 970 configuration.

Click here to read the entire Zotac GTX 970 AMP! Extreme Core Edition Review!

Upcoming Oculus SDK 0.7 Integrates Direct Driver Mode from AMD and NVIDIA

Subject: Graphics Cards | August 7, 2015 - 10:46 AM |
Tagged: sdk, Oculus, nvidia, direct driver mode, amd

In an email sent out this morning, Oculus revealed some details about the upcoming release of the Oculus SDK 0.7 on August 20th. The most interesting change is the introduction of Direct Driver Mode, developed in tandem with both AMD and NVIDIA.

sdk0.7.jpg

This new version of the SDK will remove the simplistic "Extended Mode" that many users and developers relied on as a quick and dirty way of getting the Rift development kits up and running. However, that implementation had the downside of additional latency, something that Oculus is trying to eliminate completely.

Here is what Oculus wrote about the "Direct Driver Mode" in its email to developers:

Direct Driver Mode is the most robust and reliable solution for interfacing with the Rift to date. Rather than inserting VR functionality between the OS and the graphics driver, headset awareness is added directly to the driver. As a result, Direct Driver Mode avoids many of the latency challenges of Extended Mode and also significantly reduces the number of conflicts between the Oculus SDK and third party applications. Note that Direct Driver Mode requires new drivers from NVIDIA and AMD, particularly for Kepler (GTX 645 or better) and GCN (HD 7730 or better) architectures, respectively.

We have heard NVIDIA and AMD talk about the benefits of direct driver implementations for VR headsets for a long time. NVIDIA calls its software implementation GameWorks VR and AMD calls its LiquidVR. Both aim to do the same thing - give developers more direct access to the headset hardware while offering new, faster, and lower-latency ways of rendering for games.

rift.jpg

Both companies have unique features to offer as well, including NVIDIA's multi-res shading technology. Check out our interview with NVIDIA on the topic below:

NVIDIA's Tom Petersen came to our offices to talk about GameWorks VR

Other notes in the email include a tentative November release for version 1.0 of the Oculus SDK. Until that version ships, Oculus is only guaranteeing that each new runtime will support the current and previous versions of the SDK. So, when SDK 0.8 is released, only 0.8 and 0.7 are guaranteed to work; when 0.9 comes out, game developers will need to make sure they are at least on SDK 0.8 or they risk incompatibility. Things will be tough for developers in this short window of time, but Oculus claims it is necessary to "allow them to more rapidly evolve the software architecture and API." After SDK 1.0 hits, future runtime releases will continue to support 1.0.

Source: Oculus

DOTA 2 to Increase Custom Game Mode Player Limit to 24

Subject: General Tech | August 7, 2015 - 07:01 AM |
Tagged: valve, DOTA 2

MOBAs tend to focus on gameplay mechanics built around three to five players per team. The concept is that a handful of players needs to balance coverage of the various attack paths, and only a limited amount of cooperation is possible before you start leaving zones uncovered. It also means that one problematic player can tank an entire team.

valve-dota-international.png

This will not change in the official DOTA 2 game, but Valve is expanding the limit for custom games. At The International 5, Valve announced that those games can support up to 24 players. The first public game was a 10 vs 10 match at the end of the fourth day of the tournament. While I don't play DOTA 2, it sounds like Custom Games in DOTA 2 Reborn are a lot like StarCraft Arcade, where users can create mods like dungeon crawlers and even objective-based games. In that context, an increased player limit would be very useful. I am not sure how well it would work for the base game mode, though -- maybe it plays better than it sounds?

This patch launches next week.

Intel SSD 750 Series 800GB SKU Appears!

Subject: Storage | August 6, 2015 - 06:37 PM |
Tagged: SSD 750, ssd, pcie, NVMe, Intel

Back when we reviewed the Intel SSD 750, it was only available in 400GB and 1.2TB capacities, leaving a wide expanse of capacity between those two figures.

DSC03622-.JPG

A new 800GB SKU of the Intel SSD 750 Series of PCIe SSDs was hinted at in the Skylake launch press materials, and it now appears to be a reality:

ARK.png

It may not be on shelves yet, but appearing on ARK is a pretty good indicator that it is coming soon. We don't have pricing either, but I would suspect a cost per GB closer to the 1.2TB model than to the 400GB model, which would put the 800GB drive at around $700. Performance takes a slight hit for the 800GB model, likely because this is an 'uneven' number of dies for the design of the SSD DC P3500 line it is based on.

Which would you prefer - a single 800GB or a pair of 400GB SSD 750's in a RAID (now that it is possible)?

Source: Intel ARK

World of Warcraft Legion Announced

Subject: General Tech | August 6, 2015 - 06:18 PM |
Tagged: pc gaming, wow, blizzard

Shortly after Blizzard released their financial results, they announced “Legion”, a new expansion pack for World of Warcraft. Expansions are arriving more rapidly than they have in the past: the gap between Mists of Pandaria's release and Warlords of Draenor's announcement was a little more than a year and a month, Warlords of Draenor shipped a year after that, and now, nine months later, Legion has been announced. I expect that the quickened stream of content is intended either to stimulate subscriptions or, less likely, to finish the narrative before the game fades out.

blizzard-2015-legion-concept.jpg

Image via PC Gamer

Before we get to the expansion, we'll briefly mention those financial results. In May, Blizzard reported that, while Warlords of Draenor pushed the subscription count to over 10 million, it fell back down to about 7.1 million by the end of the quarter, a loss of about 29%. This quarter saw another drop of about 1.5 million subscribers, from 7.1 million to 5.6 million, a loss of roughly 21% (1.5 / 7.1). That works out to a fairly steady loss of roughly a quarter of the subscriber base every three months, which is fairly quick. It also means that Draenor only offset about six months of decline. Not much more to say about that -- I just find it interesting.

As for Legion, it will be a fairly sizable boost in content. The level cap is being raised to 110, which will hopefully bring new skills and armor along the way. A new class, the Demon Hunter, has also been added; you will not need to level one up from 1, and it can be played as either DPS or tank. Of course, new raids will be included, but Blizzard seems to want to highlight dungeons. The way it was described to PC Gamer makes it sound like they want dungeons to be more interesting as set pieces, with story and an interesting environment.

No pricing or availability information, but we'll probably hear a lot at Blizzcon.

Source: PC Gamer

Wireless productivity, Logitech's MX Master and MX Anywhere 2

Subject: General Tech | August 6, 2015 - 05:26 PM |
Tagged: logitech, mx master, mx anywhere 2, input

Logitech is found on many desktops; gamers and spreadsheet slaves alike often choose this familiar name in peripherals.  The Tech Report looks at two wireless mice aimed at those who use their mice to make money as opposed to war: the larger MX Master and the smaller, more portable MX Anywhere 2.  Both mice can store up to three profiles to let you move between different PCs, saving base station or Bluetooth 4 connections and swapping between them at the press of a button.  Check out how they perform in their duties in the full review.

mxpair.jpg

"Logitech's MX Master and MX Anywhere 2 mice represent the pinnacle of the company's productivity-oriented pointing devices. We spent some hand time with each one to see whether they're truly the overlords of the office."

Here is some more Tech News from around the web:

Tech Talk

Subject: Storage
Manufacturer: Intel
Tagged: Z170, Skylake, rst, raid, Intel

A quick look at storage

** This piece has been updated to reflect changes since first posting. See page two for PCIe RAID results! **

Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that are interesting for you!

When I saw the small amount of press information provided with the launch of Intel Skylake, I was both surprised and impressed. The new Z170 chipset was going to have an upgraded DMI link, nearly doubling throughput (DMI 3.0 is essentially a PCIe 3.0 x4 connection, roughly 3.9 GB/sec, versus the ~2 GB/sec of the older DMI 2.0). DMI has long been suspected as the reason Intel SATA controllers peg at ~1.8 GB/sec, which limits the effectiveness of a RAID with more than three SSDs. The improved DMI throughput opens up the possibility of a 6-SSD RAID-0 that exceeds 3 GB/sec, which would compete with PCIe SSDs.

ASUS PCIe RAID.png

Speaking of PCIe SSDs, that’s the other big addition to Z170. Intel’s Rapid Storage Technology is being expanded to include PCIe (even NVMe) SSDs, with the caveat that they must be physically connected to PCIe lanes hanging off the DMI-connected chipset. This is not as big of an issue as you might think, as Skylake does not have the 28 or 40 PCIe lanes seen with X99 solutions. Z170 motherboards only have 16 PCIe lanes to route from the CPU, to either two (8x8) or three (8x4x4) PCIe slots, and the remaining slots must all hang off of the chipset. This includes the PCIe portion of M.2 and SATA Express devices.

PCH storage config.png

Continue reading our preview of the new storage options on the Z170 chipset!!

Podcast #361 - Intel Skylake Core i7-6700K, Logitech G29 Racing Wheel, Lenovo LaVie-Z and more!

Subject: General Tech | August 6, 2015 - 03:04 PM |
Tagged: Z170-A, z170 deluxe, Z170, video, Skylake, podcast, nvidia, maxwell, logitech g29, Lenovo, lavie-z, Intel, gigabyte, asus, 950ti, 6700k

PC Perspective Podcast #361 - 08/06/2015

Join us this week as we discuss the Intel Skylake Core i7-6700K, Logitech G29 Racing Wheel, Lenovo LaVie-Z and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Can it Run VR? Crytek and Basemark Join Forces to Create Virtual Reality Benchmark

Subject: General Tech | August 6, 2015 - 02:14 PM |
Tagged: Basemark, crytek, oculus rift

With the release of the Oculus Rift and various other head-mounted displays, you may be wondering whether your current machine is powerful enough to use one of these devices or whether you need to upgrade before you will enjoy the experience.

Basemark logo blue.png

Basemark and Crytek have joined forces to create a new benchmark to test how your system will fare.  The benchmark will give you information on latency, verify whether your hardware is able to run at 60, 75, 90, or 120fps with varying levels of graphics detail, and even check whether your audio source can properly provide spatial audio cues.
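Those frame rate targets boil down to frame-time budgets: 60, 75, 90, and 120fps work out to roughly 16.7, 13.3, 11.1, and 8.3 milliseconds per frame. As a toy illustration of that kind of check (simulated frame times only, not Basemark or CRYENGINE code), a pass/fail tally might look like this:

    // Toy sketch: simulate frame times and report how many fit each refresh-rate budget.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        using clock = std::chrono::steady_clock;
        std::mt19937 rng(42);
        std::normal_distribution<double> simulatedCost(9.0, 2.0);   // pretend ~9 ms per frame

        std::vector<double> frameTimesMs;
        for (int i = 0; i < 200; ++i) {
            auto start = clock::now();
            // A real benchmark renders a VR scene here; we just burn a simulated amount of time.
            double targetMs = std::max(0.5, simulatedCost(rng));
            while (std::chrono::duration<double, std::milli>(clock::now() - start).count() < targetMs) {}
            frameTimesMs.push_back(
                std::chrono::duration<double, std::milli>(clock::now() - start).count());
        }

        for (double fps : {60.0, 75.0, 90.0, 120.0}) {
            double budgetMs = 1000.0 / fps;
            int within = 0;
            for (double t : frameTimesMs)
                if (t <= budgetMs) ++within;
            std::printf("%6.0f fps target (%5.2f ms budget): %3d%% of frames in budget\n",
                        fps, budgetMs, 100 * within / (int)frameTimesMs.size());
        }
        return 0;
    }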

Image_Based_Lighting.jpg

Helsinki (Finland) and Frankfurt am Main (Germany) August 6th, 2015 – Basemark and Crytek today announced a new partnership to help create a definitive PC system test for virtual reality gaming.

The new VR benchmark will enable gamers and PC hardware companies to easily assess the level of experience they can expect when running virtual reality content, and will be the first service available that gives users recognizable, real-world metrics to describe their system’s VR readiness with various HMDs out there.

Developed using Crytek’s CRYENGINE technology, the benchmark will provide detailed feedback in areas such as the best graphical settings to use with a variety of VR headsets. Basemark’s expertise in measuring performance standards will be key as they formulate an objective test that evaluates everything from frame rate capabilities to memory consumption, latency issues, 3D audio performance and much more.

Pixel_Accurate_Displacement_Mapping.jpg

Crytek’s Creative Director for CRYENGINE, Frank Vitz, said: “Basemark is already helping to measure technology standards in other areas of gaming, and we’re thrilled to be partnering with them as we work to establish a user-friendly yardstick for VR performance. We believe CRYENGINE can become a go-to tool for developers looking to create compelling VR experiences, and this partnership means players can also count on CRYENGINE as they evaluate whether their PC is ready for the most advanced, cutting-edge VR content available.”

“We wanted to make a real-world VR gaming benchmark as opposed to a theoretical one and hence we’re very excited to announce this partnership with Crytek, the leading game engine company”, said Tero Sarkkinen, founder and CEO of Basemark, “By using CRYENGINE as the base and vetting the test workloads under our rigorous development process involving all the key technology players, we will forge the definitive benchmark for all PC VR gamers.”

Source: Basemark