Subject: General Tech, Graphics Cards, Shows and Expos | August 16, 2014 - 12:33 AM | Scott Michaud
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd
Let's be clear: there are two stories here. The first is the release of OpenGL 4.5; the second is the announcement of the "Next Generation OpenGL Initiative". Both appear in the same press release, but they are two different statements.
OpenGL 4.5 Released
OpenGL 4.5 expands the core specification with a few extensions that compatible hardware, running OpenGL 4.5 drivers, is guaranteed to support. These include direct_state_access, which allows modifying objects without first binding them to the context, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which makes it easier to port OpenGL ES 3.1 applications to full OpenGL.
It also adds a few new extensions as an option:
ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (list completed work to identify optimization points).
ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without committing the entire buffer to memory. This is similar to ARB_sparse_texture... except that extension applies to textures. Buffers are useful for things like vertex data (and so forth).
ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether a transform feedback buffer has overflowed. I might be wrong, but it seems like this would be useful for deciding whether to draw objects generated by geometry shaders.
KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
Image from NVIDIA GTC Presentation
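For readers unfamiliar with these blend modes, here is a minimal Python sketch of the math behind a few of them, computed per color channel on values normalized to [0, 1] (the function names are mine, not part of the extension):

```python
# A few of the "advanced" blend equations exposed by
# KHR_blend_equation_advanced, expressed per color channel with the
# source and destination values normalized to the [0, 1] range.

def multiply(src, dst):
    return src * dst                         # darkens the result

def screen(src, dst):
    return 1.0 - (1.0 - src) * (1.0 - dst)   # inverse of multiply; brightens

def darken(src, dst):
    return min(src, dst)

def lighten(src, dst):
    return max(src, dst)

def difference(src, dst):
    return abs(src - dst)

# Example: blending a mid-gray source channel over a bright destination.
print(multiply(0.5, 0.8))   # 0.4
print(screen(0.5, 0.8))     # 0.9
```

On Maxwell and Tegra K1, equations like these run directly in the blending hardware; on Fermi and Kepler, the driver would express the same math as shader code.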
If you are a developer, NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers with OpenGL 4.5 support. If you are not looking to create OpenGL 4.5 applications, do not get this driver; you really should not have any use for it at all.
Next Generation OpenGL Initiative Announced
The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to allow developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, have a common shader language, and undergo "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).
And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn't matter, as long as they continue to run.
On top of that, next-generation drivers should be significantly easier to develop, considering their reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether the performance increase will be enough to make most optimizations unnecessary, such as those which increase workload or developer effort in exchange for queuing fewer GPU commands. We will need to wait for game developers to use it for a while before we know.
Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 14, 2014 - 01:55 AM | Scott Michaud
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX
Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC. There, Intel had a DirectX 12 demo at their booth. The scene, containing 50,000 asteroids, each in its own draw call, was developed with both Direct3D 11 and Direct3D 12 code paths, which can apparently be switched while the demo is running. Intel claims to have measured both power and frame rate.
Variable power to hit a desired frame rate, DX11 and DX12.
The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, this would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3MB of cache, and (of course) the Haswell architecture.
While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.
Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. When power is not throttled, Intel's demo goes from 19 FPS all the way up to a playable 33 FPS.
Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.
Maximum power in DirectX 11 mode.
For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time developing art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?
That, of course, depends on how much performance improvement we will see from DirectX 12, compared to theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then it allows developers to perform whatever solution is most direct.
Maximum power when switching to DirectX 12 mode.
If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it could be less relevant because developers will still need to do their tricks in those situations. The closer it gets, the fewer occasions that strict optimization is necessary.
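The trade-off described above can be sketched with some back-of-the-envelope math. All of the numbers below are invented for illustration; only the structure of the argument matters:

```python
# Toy model of frame time: every draw call pays a fixed API overhead,
# and the GPU work itself is assumed constant regardless of batching.
# All numbers here are invented for illustration.

def frame_time_ms(draw_calls, overhead_us_per_call, gpu_work_ms):
    return draw_calls * overhead_us_per_call / 1000.0 + gpu_work_ms

GPU_WORK_MS = 10.0      # the "real" rendering work per frame
NAIVE_CALLS = 50_000    # one draw call per asteroid, as in Intel's demo
BATCHED_CALLS = 500     # after heavy instancing/batching effort

# A "DX11-like" per-call cost vs. one reduced by ~90%:
print(frame_time_ms(NAIVE_CALLS, 1.0, GPU_WORK_MS))    # 60.0 ms (~17 FPS)
print(frame_time_ms(BATCHED_CALLS, 1.0, GPU_WORK_MS))  # 10.5 ms (~95 FPS)
print(frame_time_ms(NAIVE_CALLS, 0.1, GPU_WORK_MS))    # 15.0 ms (~67 FPS)
```

In this toy model, the expensive API makes batching the difference between unplayable and smooth, while the cheaper API leaves the naive approach close enough to the batched result that the optimization effort may no longer pay for itself.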
If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?
Subject: Storage, Shows and Expos | August 7, 2014 - 09:37 PM | Allyn Malventano
Tagged: ssd, SM2256, silicon motion, sata, FMS 2014, FMS
Silicon Motion has announced their SM2256 controller. We caught a glimpse of this new controller on the Flash Memory Summit show floor:
The big deal here is the fact that this controller is a complete drop-in solution that can drive multiple different types of flash, as seen below:
The SM2256 can drive all variants of TLC flash.
The controller itself looks to have decent specs, considering it is meant to drive 1x nm TLC flash: just under 100k random 4K IOPS, with writes understandably below the SATA 6Gb/sec saturation point at 400MB/sec (writing to TLC is tricky!). There is also mention of Silicon Motion's NANDXtend Technology, which claims to add extra ECC and DSP tech toward increasing the ability to correct bit errors in the flash (errors grow more likely as you venture into the eight-voltage-levels-per-cell territory of TLC).
Subject: Storage, Shows and Expos | August 7, 2014 - 09:25 PM | Allyn Malventano
Tagged: ssd, sata, PS5007, PS3110, phison, pcie, FMS 2014, FMS
At the Flash Memory Summit, Phison has updated their SSD controller lineup with a new quad-core SSD controller.
The PS3110 is capable of handling TLC as well as MLC flash, and the added horsepower lets it push as high as 100k IOPS.
Also seen was an upcoming PS5007 controller, capable of pushing PCIe 3.0 x4 SSDs at 300k IOPS and close to 3GB/sec sequential throughput. While there were no actual devices built on this new controller on display, we did spot the full specs:
Full press blast on the PS3110 appears after the break:
Subject: General Tech, Storage, Shows and Expos | August 7, 2014 - 06:17 PM | Scott Michaud
Tagged: ssd, phase change memory, PCM, hgst, FMS 2014, FMS
According to an HGST press release, the company will bring an SSD based on phase change memory to the 2014 Flash Memory Summit in Santa Clara, California. They claim that it will actually be at their booth, on the show floor, for two days (August 6th and 7th).
The device, which is not branded, connects via PCIe 2.0 x4. It is designed for speed. It is allegedly capable of 3 million IOPS, with just 1.5 microseconds required for a single access. For comparison, the 800GB Intel SSD DC P3700, recently reviewed by Allyn, had a dominating lead over the competitors that he tested. It was just shy of 250 thousand IOPS. This is, supposedly, about twelve times faster.
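The quoted figures check out as rough arithmetic (the 250 thousand number is approximate, so treat the ratio as ballpark):

```python
# Sanity-checking the "about twelve times faster" comparison.
pcm_iops = 3_000_000    # HGST's claimed IOPS for the PCM SSD
p3700_iops = 250_000    # roughly what the Intel SSD DC P3700 managed

ratio = pcm_iops / p3700_iops
print(ratio)   # 12.0
```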
While it is based on a different technology than NAND, and thus not directly comparable, the PCM chips are apparently manufactured at 45nm. Regardless, that is significantly larger lithography than competing products. Intel is manufacturing their flash at 20nm, while Samsung managed to use a 30nm process for their recent V-NAND launch.
What does concern me is the capacity per chip. According to the press release, it is 1Gb per chip. That is about two orders of magnitude smaller than what NAND is pushing. That is, also, the only reference to capacity in the entire press release. It makes me wonder how small the total drive capacity will be, especially compared to RAM drives.
Of course, because it does not seem to be a marketed product yet, nothing about pricing or availability. It will almost definitely be aimed at the enterprise market, though (especially given HGST's track record).
*** Update from Allyn ***
I'm hijacking Scott's news post with photos of the actual PCM SSD, from the FMS show floor:
In case you all are wondering, yes, it does in fact work:
One of the advantages of PCM is that it is addressed at smaller sections as compared to typical flash memory. This means you can see ~700k *single sector* random IOPS at QD=1. You can only pull off that sort of figure with extremely low IO latency. They only showed this output at their display, but ramping up QD > 1 should reasonably lead to the 3 million figure claimed in their release.
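The relationship between per-IO latency and queue-depth-1 IOPS is simple arithmetic, and it ties HGST's two claims together nicely (a quick sketch using the figures above):

```python
# At queue depth 1, each IO must complete before the next begins, so
# IOPS is just the reciprocal of per-IO latency.
def qd1_iops(latency_seconds):
    return 1.0 / latency_seconds

# HGST's claimed 1.5 microsecond access time caps QD=1 throughput at:
print(qd1_iops(1.5e-6))   # ~666,667 IOPS, in line with the ~700k shown

# Conversely, the observed ~700k QD=1 IOPS implies per-IO latency of:
print(1.0 / 700_000)      # ~1.43 microseconds
```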
Subject: Storage, Shows and Expos | August 6, 2014 - 07:03 PM | Allyn Malventano
Tagged: ssd, pcie, NVMe, Marvell, FMS 2014, FMS, controller, 88SS1093
Marvell is notable for being the first to bring a 6Gb/sec SATA controller to market, and they continue to do very well in that area. Their very capable 88SS9189 controller powers the Crucial MX100 and M550, as well as the ADATA SP920.
Today they have announced a newer controller, the 88SS1093. Despite the confusing numbering, the 88SS1093 has a PCIe 3.0 x4 host interface and will support the full NVMe protocol. The provided specs are on the light side, as performance of this controller will ultimately depend on the speed and parallelism of the attached flash, but it's sure to be a decent performer. I suspect it will behave like their SATA part, only no longer bottlenecked by SATA 6Gb/sec speeds.
More to follow as I hope to see this controller in person on the exhibition hall (which opens to press in a few hours). Full press blast after the break.
*** Update ***
Apologies, as there was no photo to be taken: Marvell had no booth in the exhibition space at FMS.
Subject: Storage, Shows and Expos | August 5, 2014 - 08:19 PM | Allyn Malventano
Tagged: FMS, vnand, tlc, ssd, Samsung, FMS 2014, Flash Memory Summit
Just minutes ago at the Flash Memory Summit, Samsung announced the production of 32-layer TLC VNAND:
This is the key to production of a soon-to-be-released 850 EVO, which should bring the excellent performance of the 850 Pro, with the reduced cost benefit we saw with the previous generation 840 EVO. Here's what the progression to 3D VNAND looks like:
3D TLC VNAND will look identical to the rightmost image in the above slide, but the difference is that each cell's stored charge must represent more distinct levels. Given that Samsung's VNAND tech has more volume to store electrons when compared to competing 2D planar flash technology, it's a safe bet that this new TLC will come with higher endurance ratings than those other technologies. There is much more information on Samsung's VNAND technology on page 1 of our 850 Pro review. Be sure to check that out if you haven't already!
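To put the jump to three bits per cell in concrete numbers (a generic illustration, not Samsung's figures):

```python
# Each additional bit per cell doubles the number of distinct charge
# levels a flash cell must hold, shrinking the margin between levels.
def charge_levels(bits_per_cell):
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(name, charge_levels(bits), "levels per cell")
```

TLC must resolve 8 levels in the same charge window where SLC needs only 2, which is why VNAND's larger charge-storage volume is such an advantage for endurance.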
Another announcement made was more of an initiative, but a very interesting one at that. SSDs are generally dumb when it comes to coordinating with the host - in that there is virtually no coordination. An SSD has no idea which pieces of files were meant to be grouped together, etc (top half of this slide):
Stuff comes into the SSD and it puts it where it can based on its best guess as to how it should optimize those writes. What you'd want to have, ideally, is a more intelligent method of coordination between the host system and the SSD (more like the bottom half of the above slide). Samsung has been dabbling in the possibilities here and has seen some demonstrable gains to be made. In a system where they made the host software aware of the SSD flash space, and vice versa, they were able to significantly reduce write latency during high IOPS activity.
The key is that if the host / host software has more control over where and how data is stored on the SSD, the end result is a much more optimized write pattern, which ultimately boosts overall throughput and IOPS. We are still in the experimentation stage on Storage Intelligence, with more to follow as standards are developed and the industry pushes forward.
It might be a while before we see Storage Intelligence go mainstream, but I'm definitely eager to see 3D TLC VNAND hit the market, and now we know it's coming! More to follow in the coming days as we continue our live coverage of the Flash Memory Summit!
Subject: Editorial, General Tech, Shows and Expos | July 23, 2014 - 08:43 PM | Ryan Shrout
Tagged: workshop, video, streaming, quakecon, prizes, live, giveaways
UPDATE: The event is over, but the video is embedded below if you want to see the presentations! Thanks again to everyone that attended and all of our sponsors!
It is that time of year again: another installment of the PC Perspective Hardware Workshop! Once again we will be presenting on the main stage at Quakecon 2014 being held in Dallas, TX July 17-20th.
Main Stage - Quakecon 2014
Saturday, July 19th, 12:00pm CT
Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year. We love giving back to the community of enthusiasts and gamers that drive us to do what we do! Get ready for 2 hours of prizes, games, and raffles; the chances are pretty good that you'll take something out with you - really, they are pretty good!
Our primary partners at the event are those that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out! Our thanks to NVIDIA, Seasonic and Logitech!!
If you can't make it to the workshop - don't worry! You can still watch the workshop live on our live page as we stream it over one of several online services. Just remember this URL: http://pcper.com/live and you will find your way!
PC Perspective LIVE Podcast and Meetup
We are planning on hosting any fans that want to watch us record our weekly PC Perspective Podcast (http://pcper.com/podcast) on Wednesday or Thursday evening in our meeting room at the Hilton Anatole. I don't yet know exactly WHEN or WHERE the location will be, but I will update this page accordingly on Wednesday July 16th when we get the data. You might also consider following me on Twitter for updates on that status as well.
After the recording, we'll hop over to the hotel bar for a couple of drinks and hang out. We have room for at least 50-60 people to join us in the room, but we'll still be recording if just ONE of you shows up. :)
Prize List (will continue to grow!)
Subject: General Tech, Shows and Expos | July 19, 2014 - 09:13 PM | Scott Michaud
Tagged: quakecon 2014, quakecon, id, crytek
Tiago Sousa was "Lead R&D Graphics Engineer" at Crytek, according to his now defunct Twitter account, "@CRYTEK_TIAGO". According to his new Twitter account, "@idSoftwareTiago", he will be joining id Software to help with DOOM and idTech 6.
A little less DOOM and gloom.
I find this more interesting because idTech 5 has not exactly seen much usage outside of RAGE and Wolfenstein: The New Order, which was released on the technology two months ago. There is one other game planned -- and that is it. Sure, RAGE is almost three years old and the engine was first revealed in 2007, making it basically seven-year-old technology. Still, that is a significant investment to see so little return on, especially considering that RAGE's sales figures were not too impressive (Steam and other digital delivery services excluded).
Happy to announce i'll be helping the amazingly talented id Software team with Doom and idTech 6. Very excited :)
— Tiago Sousa (@idSoftwareTiago) July 18, 2014
I also cannot tell if this looks positive for id, after mixed comments from current and former employees (or people who claim to be), or bad for Crytek. The latter company was rumored to be hurting for cash since 2011 and saw the departure of many employees. I expect that there will be more to this story in the coming months and years.
Subject: Shows and Expos | July 9, 2014 - 09:27 PM | Ryan Shrout
Tagged: workshop, quakecon, contest, byoc
Are you interested in attending Quakecon 2014 next weekend in Dallas, TX but just can't swing the BYOC spot? Well, thanks to our friends at Quakecon and at PC Part Picker, we have two BYOC spots up for grabs for fans of PC Perspective!
While we are excited to be hosting our PC Perspective Hardware Workshop with thousands of dollars in giveaways to pass out on Saturday the 19th, I know that the big draw is the chance to spend Thursday, Friday and Saturday at North America's largest LAN Party.
The giveaway is simple.
- Fill out the form below with your name and email address.
- Make sure you are able and willing to attend Quakecon from July 17th - July 20th. There is no point in winning a free BYOC spot that you cannot use!
- We'll pick a winner on Friday, July 11th so you'll have enough time to make plans.
There you have it. Get to it, guys, and we'll see you in Dallas!
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | July 7, 2014 - 08:06 AM | Scott Michaud
Tagged: tegra k1, OpenGL ES, opengl, Khronos, google io, google, android extension pack, Android
Sure, this is a little late. Honestly, when I first heard the announcement, I did not see much news in it. The slide from the keynote (below) showed four points: Tesselation, Geometry Shaders, Computer [sic] Shaders, and ASTC Texture Compression. I thought tessellation and geometry shaders were part of the OpenGL ES 3.1 spec, like compute shaders. This led to my immediate reaction: "Oh cool. They implemented OpenGL ES 3.1. Nice. Not worth a news post."
Image Credit: Blogogist
Apparently, they were not part of the ES 3.1 spec (although compute shaders are). My mistake. It turns out that Google is cooking up their own vendor-specific extensions. This is quite interesting, as it adds functionality to the API without the developer needing to target a specific GPU vendor (INTEL, NV, ATI, AMD), wait for approval from the Architecture Review Board (ARB), or use multi-vendor extensions (EXT). In other words, it sounds like developers can target Google as the vendor without knowing the actual hardware.
Hiding the GPU vendor from the developer is not the only reason for Google to host their own vendor extension. The added features are mostly from full OpenGL. This makes sense, because it was announced with NVIDIA and their Tegra K1, Kepler-based SoC. Full OpenGL compatibility was NVIDIA's selling point for the K1, due to its heritage as a desktop GPU. But, instead of requiring apps to be programmed with full OpenGL in mind, Google's extension pushes it to OpenGL ES 3.1. If the developer wants to dip their toe into OpenGL, then they could add a few Android Extension Pack features to their existing ES engine.
Epic Games' Unreal Engine 4 "Rivalry" Demo from Google I/O 2014.
The last feature, ASTC Texture Compression, was an interesting one. Apparently the Khronos Group, owners of OpenGL, were looking for a new generation of texture compression technologies. NVIDIA suggested their ZIL technology, while ARM and AMD proposed "Adaptive Scalable Texture Compression". ARM and AMD won, although the Khronos Group stated that the competition between the proposals made each better than either would have been in isolation.
Android Extension Pack is set to launch with "Android L". The next release of Android is not currently associated with a snack food. If I were their marketer, I would block out the next three versions as 5.x, and name them (L)emon, then (M)eringue, and finally (P)ie.
Would I do anything with the two skipped letters before pie? (N)(O).
Subject: General Tech, Mobile, Shows and Expos | June 15, 2014 - 05:51 AM | Scott Michaud
Tagged: x86, SteamOS, Steam Machine, Steam Controller, steam, mobile, handheld, E3 14, E3
To be doubly clear, if the title was not explicit enough, this announcement is not made by Valve. This company is called, "SteamBoy Machine team". If not a hoax, this is one of the many Steam Machines which are expected to come out of the SteamOS initiative. Rather than taking the platform to a desktop or home theater PC (HTPC) form-factor, this company wants to target the handheld PC gaming market.
If it comes out, that is a clever use of SteamOS. I can see Big Picture Mode being just as useful on a small screen as it is on a TV, especially with its large font and controller navigation. The teasers suggest that it will use the haptic feedback-based touchpads that Valve is expected to base the Steam Controller on. It will also include a 5-inch touchscreen.
The Escapist got into contact with the team and received a few more specs:
- Quad-Core CPU (x86)
- 4GB RAM
- 32GB built-in storage
Even if this company does not make good on its promises, companies will now be considering portable SteamOS devices. This is the sort of outside-the-box thinking that Valve was pushing for when they wanted to create an open platform. Each party will push to win on its own goals, yet it can also rely on the crowd (other companies or individuals) to keep the platform moving in areas where it does not seek an edge.
Philosophy aside, the company is targeting 2015 with a "Standard Edition" supporting WiFi and 3G. It would make sense to have a WiFi-only model, but who knows.
Subject: General Tech, Systems, Shows and Expos | June 11, 2014 - 06:44 AM | Scott Michaud
Tagged: Steam Machine, E3 14, E3, dell, alienware alpha, alienware
While "Steam Machines" are delayed, Alienware will still launch their console form-factor PC. The $550 price tag includes a black Xbox 360 wireless controller (with receiver) and Windows 8.1 64-bit. Alienware has also designed their own "Console-mode UI" for Windows 8.1, which can be navigated directly with a controller. It will ship Holiday 2014.
Apparently PC-based consoles equate to dubstep and parkour.
About the "Console-mode UI", it will apparently be what the user sees when the Alpha boots. The user can then select between Steam Big Picture, media, and programs. They also allow users to boot into the standard Windows 8.1 interface.
As for its specifications:
|Component|Base Model ($550)|Upgrade Options|
|Processor|Haswell-based Intel Core i3|Core i5, Core i7 (user accessible)|
|GPU|"Custom" Maxwell-based, 2GB GDDR5 (see next paragraph)|(none; soldered on, not user accessible)|
|System Memory|4GB at 1600 MHz|8GB (user accessible)|
|HDD|500GB SATA3|1TB or 2TB (user accessible)|
|Wireless|Dual-band 802.11ac|(user accessible)|
The GPU is not specified, or even given a similar part to refer to. PC World claims that it will be comparable to the performance found in the two next-gen consoles. Since the 750 Ti has around 1.3 TeraFLOPs of performance, this GPU is probably near that, or slightly above it. PC Gamer says that it will be based on mobile Maxwell, so it might be similar to a current or upcoming laptop GPU.
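For reference, the 750 Ti's ~1.3 TeraFLOPs figure falls out of its shader count and clock speed (the specs below are the card's commonly cited ones; treat them as approximate):

```python
# Peak single-precision throughput = cores x clock x 2, since each
# CUDA core can retire a fused multiply-add (2 FLOPs) per clock.
def peak_tflops(cores, clock_ghz):
    return cores * clock_ghz * 2 / 1000.0

# GeForce GTX 750 Ti: 640 CUDA cores at a roughly 1.02 GHz base clock.
print(peak_tflops(640, 1.02))   # ~1.31 TFLOPs
```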
One thing that has not been addressed is the HDMI-in port. We know that it supports passthrough for low latency, but we do not know what it will do with the input video. Alienware has several of these set up at their booth on the show floor, so we might hear more soon. While its specifications are a bit on the light side, particularly on the default amount of RAM (although that is easily and cheaply upgraded), its $550 price, which includes a wireless controller and its adapter, is also pretty good.
Subject: General Tech, Shows and Expos | June 10, 2014 - 03:31 AM | Scott Michaud
Tagged: E3, E3 14, GTA5, GTA Online
So my best guess is that Rockstar was waiting on the "next-gen" assets before they bothered releasing Grand Theft Auto V on the PC. The game will be released this fall, alongside Xbox One and PlayStation 4 ports. They do not mention distribution platforms, but Steam is a fairly safe assumption, at least now that Games for Windows has been given its final rest.
Hopefully, this delay in releasing a PC version will be a temporary hiccup due to the overlapping console generations. With Grand Theft Auto IV, the same could not be said. The problem is that, with how secretive Rockstar is, we cannot really tell whether the above assumption is true, or whether they were just non-committal to the PC platform until now. At any rate, until the PC version is launched, Rockstar has not and will not get my money. Of course, there is always the danger that, by the time the game does launch, I will not be able to afford its time or expense.
That's why you should always release the PC version as early as possible.
Subject: General Tech, Shows and Expos | June 10, 2014 - 01:04 AM | Scott Michaud
Tagged: E3, E3 14, GOG, gog galaxy
Good Old Games (GOG), a sister company of CD Projekt RED under the CD Projekt group, is releasing an online gaming manager similar to Steam and Origin. The difference is that everything about it is DRM-free and completely optional. Galaxy will manage game updates, provide achievements, and host communication between friends... if you want. If you don't? That's okay. Have fun.
Obviously, their most popular competitor is Valve. Steam has a history of being nice to their customers and erring on their side. GOG, historically, takes it to the consumer-friendly extreme and, if Galaxy lives up to their statements, this is no exception. The hope seems to be simply that people will remember GOG more often, and that GOG will end up with more happy customers.
Basically, most platforms are give-and-take. This is take what you want.
When will it launch? What will it look like? Who knows. We will get more news this year, which suggests that we will not get the software until at least next year. Hopefully they will take their time and get it right. I mean, it is not like they need to rush. It is not a mandatory DRM platform - it is not a DRM platform at all. I do expect they will try to target The Witcher 3's launch window (February 2015) for marketing purposes, though.
Subject: General Tech, Mobile, Shows and Expos | June 9, 2014 - 06:10 PM | Scott Michaud
Tagged: shield tablet, shield, nvidia, E3 14, E3
The Tech Report had their screenshot-fu tested today with the brief lifespan of NVIDIA's SHIELD Tablet product page. As you can see, it is fairly empty. We know that it will have at least one bullet point of "Features" and that its name will be "SHIELD Tablet".
Image Credit: The Tech Report
Of course, being the first day of E3, it is easy to expect that such a device will be announced in the next couple of days. This is expected to be based on the Tegra K1 with 2GB of RAM and have a 2048x1536 touch display.
It does raise the question of what exactly a "SHIELD" is, however. Apart from being first-party devices, how would they be any different from other TegraZone devices? We know that Half-Life 2 and Portal have been ported to the SHIELD product line, exclusively, and will not be available on other Tegra-powered devices. Now that the SHIELD line is extending to tablets, I wonder how NVIDIA will handle this seemingly two-tier class of products (SHIELD vs Tegra OEM devices). It might even depend on how many design wins they achieve, along with their overall mobile market share.
Subject: General Tech, Shows and Expos | June 6, 2014 - 05:49 PM | Jeremy Hellstrom
Tagged: computex 2014
HP is courting mobile users with their Pro x2 series, stylus-enabled tablets with a keyboard dock containing extra outputs and a second battery, in 12.5" 1080p and 11.6" 1366x768 flavours; the smaller model is already available, starting at $850. EVGA stepped up their game with the 1600W SuperNOVA PSU, which comes with a 10-year warranty, while Fractal Design was showing off refillable and expandable self-contained watercoolers. You can also catch ADATA's new SSDs, Kingston's M.2 SSDs, and even more over at The Tech Report.
"PC makers appear to be embracing the convertible tablet form factor with gusto at Computex 2014, and HP is no exception. Today, the company announced a pair of business-focused two-in-ones: the Pro x2 612 and Pro x2 410."
Here is some more Tech News from around the web:
- COMPUTEX 2014: Intel unleashed the Beast OC Event @ Madshrimps
- HyperX again claims new DDR3 frequency WR at 2282.8MHz @ Madshrimps
- Devs get first look at next Visual Studio @ The Register
- 'NSA-proof' Protonet server crowdfunds $1m in under 90 minutes @ The Inquirer
- Chrome market share overtakes Internet Explorer for the first time @ The Inquirer
- FIGHT! Intel disputes ARM's claims of Android superiority @ The Register
- The Hovering, Holographic, Star Wars Display @ Hack a Day
- Win MSI Z97 Gaming 9 AC, R9 290X Gaming GPU and LE Siberia V2 @ Kitguru
- Enter to win one of three Biostar Hi-Fi Z97WE motherboards @ The Tech Report
Subject: General Tech, Shows and Expos | June 5, 2014 - 11:37 PM | Scott Michaud
Tagged: computex, computex 2014, roccat, tyon
So this mouse has many buttons. It even has not-buttons. The ROCCAT Tyon has 31 user-customizable functions mapped over 16 buttons. The "Tyon Xcelerator", near its thumb buttons, is an analog switch designed for functions such as throttle or vertical movement. The "Dorsal Fin" is a switch that tilts left and right, like a tilt wheel, on top of the actual tilting mouse wheel the Tyon also has.
I guess you can never have too many tilt functions.
Yo Cat, Heard You Like Buttons...
In short, ROCCAT has basically put as many functions on that mouse as they believe comfortable. Personally, I think the "Xcelerator" could be quite useful for games, like Battlefield: Bad Company 2 with its UAV, where you need to move in three dimensions and rotate in two dimensions, at the same time. That just leaves about 30 other functions to think about.
The ROCCAT Tyon is "coming soon" for 99.99 Euros (~$136 USD).
Subject: General Tech, Shows and Expos | June 4, 2014 - 05:23 PM | Jeremy Hellstrom
Tagged: thermaltake, roccat, nzxt, gigabyte, computex 2014, asus
The Tech Report has been busy at Computex, visiting as many booths as they can amongst the numerous vendors showing off their upcoming products. From ASUS we get another look at the ROG systems and a G-Sync monitor, as well as several new motherboards. Both Thermaltake and Roccat have new peripherals to show off, while NZXT is more focused on cooling products. Gigabyte has taken advantage of the event to show how fast their limited edition Z97X-SOC Force LN2 can push DDR3, hitting 4.5GHz in a live demo! There is more coverage than that, as well as our own, so you can expect to be busy over the next few days.
"Earlier today at Computex, Asus let loose a veritable cornucopia of items under its Republic of Gamers brand. Among them: two stylish mini gaming desktops plus a 27" display outfitted with Nvidia's G-Sync technology."
Here is some more Tech News from around the web:
- Computex 2014 Gigabyte Suite Visit @ Hardware Asylum
- Computex 2014 In Win S-Frame @ Hardware Asylum
- Intel gives biz typoslabs their very own 14nm Core-M silicon @ The Register
- Kaveri Mobile APUs; AMD's FX Reincarnated @ Hardware Canucks
- A first look at AMD's Kaveri APU for notebooks @ The Tech Report
- Linux hit by GnuTLS exploit, follows Heartbleed model @ The Inquirer
- TSMC reportedly to tie up with Micron to develop 3D ICs @ DigiTimes
- PCIe hard drives? You read that right, says WD @ The Register
- Pittasoft BlackVue Sport SC500 Action Camera @ NikKTech
Subject: General Tech, Storage, Shows and Expos | June 3, 2014 - 07:37 AM | Scott Michaud
Tagged: computex, computex 2014, WD, ssd, pcie, SATA Express, hdd
SATA Express is an interface to either connect a hard drive to PCIe lanes, or up to two drives via SATA. Obviously, PCIe bandwidth over a cable connection is the real draw. To use the full speed, however, the drive needs to be able to communicate over PCIe. Currently, the standard uses two PCI Express 2.0 lanes (1 GB/s).
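That 1 GB/s figure falls out of PCIe 2.0's per-lane rate; a quick derivation (the signaling rate and 8b/10b encoding overhead are standard PCIe 2.0 parameters):

```python
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so only
# 8 of every 10 transferred bits carry payload data.
TRANSFERS_PER_S = 5e9
ENCODING_EFFICIENCY = 8 / 10
BITS_PER_BYTE = 8

bytes_per_lane = TRANSFERS_PER_S * ENCODING_EFFICIENCY / BITS_PER_BYTE
print(bytes_per_lane / 1e6)        # 500 MB/s per lane

# SATA Express carries two lanes:
print(2 * bytes_per_lane / 1e9)    # 1 GB/s
```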
Now that Z97 and H97 have launched, WD is set to show off the technology at Computex. The above image is apparently of a dual-drive product, containing 4TB of rotating media and 128GB of SSD memory. I am immediately reminded of the Western Digital Black2 dual drive which Allyn reviewed last November. That product crammed a 120GB SSD into a 2.5" 1TB HDD, which appeared to the system as two separate drives. The drive has "Technology Demonstration" written in red font right on it, but it could be a good representation of what the company is thinking about.
WD also asserts that their prototype uses standard AHCI drivers, for OS compatibility.
If you want to see this product in action, then -- well -- you kind-of need to be at Computex. At some point, you might be able to see it in your own PC. When? How much? No pricing and availability, again, because it is a tech demo.