Subject: General Tech, Cases and Cooling, Shows and Expos | June 2, 2014 - 07:01 AM | Scott Michaud
Tagged: computex, computex 2014
Cherry MX RGB key switches have been teased since December but, until now, have not made it into a product. They generated interest by integrating red, green, and blue LEDs that, together, can glow any one of 16 million colors. Each key can even glow with its own color and brightness independently, allowing users to color specific zones, with animation if desired. Corsair secured a year of exclusivity on the switch for its keyboard line but, so far, has done nothing with it.
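For what it is worth, the "16 million colors" figure falls straight out of the usual assumption of 8 bits of intensity per LED channel, as a quick back-of-envelope shows:

```python
levels_per_channel = 2 ** 8            # 8 bits of intensity per LED channel
channels = 3                           # red, green, and blue
colors = levels_per_channel ** channels
print(f"{colors:,} colors")            # 16,777,216 -- the "16 million" figure
```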
Today, Corsair announced that the MX RGB switches will be available in four keyboard models:
- K70 RGB Red, available in late July ($169.99 MSRP)
- K70 RGB Blue, available in late August ($169.99 MSRP)
- K70 RGB Brown, available in late August ($169.99 MSRP)
- K95 RGB Red, available in late August ($189.99 MSRP)
If you happened to want Cherry MX Blue or Brown, you will be looking at the K70, because the K95 RGB will only be available with Cherry MX RGB Red switches. Of course, that could change in future announcements but, even then, the main difference is the 18 macro keys. Honestly, though I have owned several keyboards that offer macro keys, I have never used mine. Then again, I also do not play MMOs or MOBAs, so judge for yourself whether the extra keys are deal-breakers.
Corsair Vengeance K95 RGB
As usual, Corsair puts a lot of thought into their keyboards. Each one is based on an NKRO matrix that provides "100% anti-ghosting" (rant: more precisely, the matrix is built well enough that it physically cannot ghost, so no "anti-ghosting" is required). Even their first-generation design aced my grueling test, where I spam the equivalent of several hundred words per minute of input and compare what I pressed against what the keyboard believes it saw.
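For the curious, the comparison step of that test is conceptually simple. Here is a minimal Python sketch of the idea; the find_key_errors helper is hypothetical, and a real rollover test would also check ordering and chording, not just counts:

```python
from collections import Counter

def find_key_errors(sent: str, received: str) -> list:
    """Compare what was physically typed against what the keyboard
    registered. A drop is a keystroke that vanished; a ghost is a
    keystroke the matrix invented."""
    dropped = Counter(sent) - Counter(received)   # keeps positive counts only
    ghosted = Counter(received) - Counter(sent)
    report = [f"dropped {n}x '{k}'" for k, n in dropped.items()]
    report += [f"ghosted {n}x '{k}'" for k, n in ghosted.items()]
    return report

# Example: one 'f' went missing and a spurious 'g' appeared.
print(find_key_errors(sent="asdfasdfasdf", received="asdasdfasdfg"))
```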
Corsair Vengeance K70 RGB
Also announced: the M65 RGB Gaming Mouse. RGB LED lighting on a mouse is not as novel, but it will match your keyboard. It will be available in late August for $69.99 MSRP.
All devices will come with a two (2) year warranty, which should give some confidence to anyone considering peripherals at this price point.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Storage, Shows and Expos | June 2, 2014 - 07:01 AM | Scott Michaud
Tagged: usb 3.0, thumb drive, ssd, flash drive, corsair, computex 2014, computex
The Flash Voyager GTX is Corsair's attempt at an SSD over USB 3.0. Differentiating itself from a standard USB flash drive, the Voyager GTX includes TRIM support and S.M.A.R.T. monitoring, and it communicates over USB Attached SCSI (UASP). It also comes in two SSD-sized capacities: 128GB ($119.99) and 256GB ($199.99). The drives are rated at 450MB/s read and 350MB/s write.
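Some quick arithmetic on those rated speeds puts the capacities in perspective:

```python
capacity_mb = 256 * 1000               # 256GB model, in MB
write_mb_s = 350                       # rated sequential write
read_mb_s = 450                        # rated sequential read

print(f"Fill the drive:   ~{capacity_mb / write_mb_s / 60:.0f} minutes")  # ~12
print(f"Read it all back: ~{capacity_mb / read_mb_s / 60:.1f} minutes")   # ~9.5
```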
This pricing structure puts the Voyager GTX against the Samsung 840 Pro, which is an interesting comparison to make. Both drives are backed by a five (5) year warranty and, while the 840 Pro has higher read bandwidth, the write speeds are fairly comparable. IOPS and write endurance are not listed for the Corsair Flash Voyager GTX but, even if they are marginally behind, this drive has the advantage of USB portability.
Benchmarking should be interesting for this. I am curious whether it could lead to portable OS installations and quick boosts to Steam library sizes, both at SSD-like speeds.
The Corsair Flash Voyager GTX USB 3.0 drives will be available in July. The 128GB version has an MSRP of $119.99, while the 256GB is listed at $199.99.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Shows and Expos | April 8, 2014 - 07:07 PM | Scott Michaud
Tagged: thunderbolt, NAB 14, NAB, Elgato
Hmm, this is more Thunderbolt news than I think we heard all year. Is there, like, a video production event going on right now? No matter, because news is news (and so are product announcements). The Elgato Thunderbolt Dock connects to Thunderbolt, go figure, and provides three USB 3.0 ports, one Gigabit Ethernet port, one HDMI 1.4 output, one 3.5mm headphone jack, and one 3.5mm microphone jack. It also has a second Thunderbolt port for daisy-chaining other devices, a common trait of Thunderbolt peripherals. It will retail for $229.95.
Yup, it is a Thunderbolt accessory.
Why does it seem like every Mac user in commercials has a studio apartment???
It makes sense to see devices like this. Thunderbolt is essentially an extension of PCIe, which allows anything that was once an add-in board to be connected externally, albeit with significantly reduced bandwidth compared to PCIe 3.0 x16. This looks very clean and tidy, and much more desirable than crawling under the desk to swap wires and thumb drives in the darkness behind your PC.
I would like to see some benchmarks on this device, however. Clearly, the combined bandwidth of these outputs can exceed what Thunderbolt allows (especially if daisy-chaining another Thunderbolt device). I wonder how well it will maintain high-quality signals when several devices are connected and running simultaneously.
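A rough tally (using peak signaling rates, which real-world throughput never reaches) shows the mismatch against first-generation Thunderbolt's 10 Gbit/s channel:

```python
ports_gbit = {
    "3x USB 3.0":       3 * 5.0,   # 5 Gbit/s SuperSpeed signaling each
    "Gigabit Ethernet": 1.0,
    "HDMI 1.4":         10.2,      # maximum TMDS rate
}
total = sum(ports_gbit.values())
thunderbolt = 10.0                  # Gbit/s per first-generation channel
print(f"Outputs total ~{total:.1f} Gbit/s vs. {thunderbolt:.0f} Gbit/s upstream")
# ~26.2 Gbit/s of downstream ports behind a 10 Gbit/s link
```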
The Elgato Thunderbolt Dock is available now for computers with a Thunderbolt port and either Mac OS X 10.9 or Windows 8.1. I guess we Windows 7 fans need to get used to the dust bunnies behind our PCs for a little longer...
Subject: General Tech, Graphics Cards, Processors, Shows and Expos | April 8, 2014 - 03:43 PM | Scott Michaud
Tagged: Intel, NAB, NAB 14, iris pro, Adobe, premiere pro, Adobe CC
When Adobe began GPU-accelerating its applications beyond OpenGL, it started with NVIDIA and the CUDA platform. Some time later, it began integrating OpenCL support, bringing AMD into the fold. At first, that support was limited to a couple of Apple laptops, but it has since expanded to several GPUs on both OS X and Windows. Adobe has also switched to a subscription-based release system and publishes updates on a more rapid schedule. The next update to Adobe Premiere Pro CC will bring OpenCL support to Intel Iris Pro iGPUs.
Of course, they specifically mentioned Adobe Premiere Pro CC, which suggests that support in Photoshop CC might come later. The press release does suggest that the update will affect both Mac and Windows versions of Adobe Premiere Pro CC, however, so at least the platforms will not be divided. Well, that is, if you can find a Windows machine with Iris Pro graphics. They do exist...
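For developers who want to check whether an OpenCL runtime exposes the Iris Pro at all, a quick enumeration does the trick. This sketch uses the third-party pyopencl package purely for illustration; Adobe, of course, works against the native OpenCL API:

```python
import pyopencl as cl  # third-party: pip install pyopencl

# Walk every installed OpenCL platform (Intel, AMD, NVIDIA, Apple, ...)
# and list the devices each one exposes. An Iris Pro would show up as a
# GPU-type device under the Intel (or, on OS X, the Apple) platform.
for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"  {kind}: {device.name}")
```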
A release date has not been announced for this software upgrade.
Subject: General Tech, Networking, Systems, Shows and Expos | April 8, 2014 - 03:26 PM | Scott Michaud
Tagged: NAB, NAB 14, Thunderbolt 2, thunderbolt
Video professionals are still interested in Thunderbolt, probably in much the same way that FireWire needed to be pried from their cold, dead hands. It is a very high-bandwidth connector, useful for sending and receiving 4K video. It was also originally exclusive to Apple, so you can guess which industries were early adopters. Intel has focused its Thunderbolt announcements on the National Association of Broadcasters (NAB) show. This year, Thunderbolt Networking will be available for Windows via a driver, allowing any combination of Macs and Windows PCs to be paired over a 10 Gigabit network.
Of course, this is not going to be something that you can plug into a router. This is a point-to-point network for sharing files between two devices... really fast. Perhaps one use case would be a workstation with a Mac and a Windows PC on a KVM switch. If both are connected with Thunderbolt 2, they could share the same storage pool.
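Since Thunderbolt Networking presents itself as an ordinary network interface, an ordinary socket throughput test should apply. Here is a minimal Python sketch; the peer address and port are hypothetical placeholders for whatever the link assigns:

```python
import socket
import time

PEER = ("169.254.10.2", 5201)   # hypothetical address of the other machine
CHUNK = 1 << 20                 # 1 MiB per send
TOTAL = 1 << 30                 # push 1 GiB in total

def send_test():
    """Stream TOTAL bytes to the peer and report observed throughput."""
    payload = bytes(CHUNK)
    with socket.create_connection(PEER) as sock:
        start = time.perf_counter()
        for _ in range(TOTAL // CHUNK):
            sock.sendall(payload)
        elapsed = time.perf_counter() - start
    print(f"{TOTAL * 8 / elapsed / 1e9:.2f} Gbit/s over {elapsed:.1f} s")

def receive_test(port=5201):
    """Run this on the other machine first: accept and drain the stream."""
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

if __name__ == "__main__":
    send_test()
```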
While this feature already exists on Apple devices, the PC driver will be available... "soon".
Subject: General Tech, Systems, Shows and Expos | April 8, 2014 - 01:11 AM | Scott Michaud
Tagged: BUILD 2014, microsoft, windows, winRT
A few days ago, I reported on the news from BUILD 2014 that Windows would see the return of the Start Menu and windowed apps. These features, which are not included with today's Windows 8.1 Update 1, will come in a later version. While I found these interface changes interesting, I reiterated that the user interface was not my concern: Windows Store certification was. I did leave room for a little hope, however, because Microsoft scheduled an announcement of changes. It was focused on enterprise customers, so I did not hold my breath.
And some things did change... but not enough for the non-enterprise user.
Microsoft is still hanging on to its curation of apps, except for "domain-joined" x86 Enterprise and x86 Pro PCs; RT devices and non-domain-joined computers will only allow sideloaded apps with a key, and this certificate (key) is not free for everyone. Of course, none of this affects native x86 applications. Thankfully, the prospect of WinRT APIs completely replacing Win32 seems less likely now. It could still happen if the Windows Store surges in popularity but, as it stands, Microsoft seems to be spending less effort containing x86 for an eventual lobotomy.
If it does happen, it would be a concern for a variety of reasons:
- Governments, foreign or domestic, could pressure Microsoft to ban encryption software.
- Internet Explorer's Trident engine would have no competition pushing it to adopt new web standards.
- You could not create an app for just a friend or family member (unless it is a web app in IE).
- When you build censorship infrastructure, the crazies will come with demands to abuse it.
So I am still concerned about the future of Windows. I am still not willing to believe that Microsoft will support x86-exclusive applications until the end of time. If that support ends, and sideloading is not publicly available, and web standards are forced into stagnation by a lack of alternative web browsers, then I can see bad times ahead. I will not really feel comfortable until Microsoft makes a definitive pledge to let users control what goes on their devices, even if Microsoft (or people with some form of authority over it) dislikes it.
But I know that many disagree with me. What are your thoughts? Comment away!
Subject: General Tech, Shows and Expos | April 4, 2014 - 03:42 AM | Scott Michaud
Tagged: BUILD 2014, microsoft, .net
.NET has been very popular since its initial release. I saw it used frequently in applications, particularly when a simple form-like interface was required. It was easy to develop for and accessible from several languages, such as C++, C#, and VB.NET. Enterprise application developers were particularly interested in it, especially for its managed-code security.
The framework drove an open source effort to write its own implementation, Mono, spearheaded by Novell. Some time later, the company Xamarin was formed from the original Mono development team and maintains the project to this day. In fact, Miguel de Icaza was at Build 2014 discussing the initiative. He seems content with Microsoft's new Roslyn compiler and with the working relationship between the two companies as a whole.
WinJS is released under the very permissive Apache 2.0 license. Other code, such as the Windows Phone Toolkit, is released under other licenses, such as the Microsoft Public License (Ms-PL). Pay attention to any given project's license; it would not be wise to assume. Still, this sounds like a good step.
Subject: General Tech, Shows and Expos | April 2, 2014 - 09:53 PM | Scott Michaud
Tagged: BUILD 2014, microsoft, windows, start menu
Microsoft had numerous announcements during their Build 2014 opening keynote, which makes sense, as they needed to fill the three hours assigned to it. In this post, I will focus on the upcoming changes to the Windows desktop experience. Two related features were highlighted: the ability to run Modern apps in a desktop window, and the corresponding return of the Start Menu.
I must say, the way that they grafted Start Screen tiles onto the Start Menu is pretty slick. Since Windows Vista, the Start Menu has felt awkward, split between recently used applications on the left, common shortcuts broken out on the right, and an expanded "All Programs" submenu handle on the bottom. It is functional, and it works perfectly fine, but something always felt weird about it. This looks a lot cleaner, in my opinion, especially since its width varies according to how many applications are pinned.
Of course, my major complaint with Windows 8.x has nothing to do with the interface. There has not been any discussion about sideloading applications to get around Windows Store certification requirements. This is a major concern for browser vendors, and it should be one for many others: hobbyists who might want to share their creations with one or two friends or family members rather than an entire Windows Store region, or citizens of countries whose governments might pressure Microsoft to ban encryption or security applications.
That said, there is a session tomorrow called "Deploying and Managing Enterprise Apps", discussing changes to app sideloading in Windows 8.1. Enterprise users are already allowed sideloading certificates from Microsoft. Maybe it will be expanded? I am not holding my breath.
Keep an eye out, because there should be a lot of news over the next couple of days.
Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM | Scott Michaud
Tagged: gdc 14, GDC, GCN, amd
While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by, sometimes, thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is converting your shader programs into machine code for their hardware. If that machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.
Emil Persson of Avalanche Studios, probably best known for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its presence in both consoles and PCs, while bringing up older GPUs for comparison. Yes, he has many snippets of GPU assembly code.
AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of that memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars. Tricks that attempt to combine instructions into vectors, such as using dot products, can simply place irrelevant restrictions on the compiler and optimizer, because the hardware breaks those vector operations back down into the very component-by-component operations you thought you were avoiding.
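To make that concrete, here is a toy Python stand-in (not GPU code, obviously) showing that a vec3 dot product is, underneath, just the per-component sequence you might have been trying to avoid writing; the GCN instruction names in the comments are the commonly cited ones:

```python
def dot3(a, b):
    """A vec3 dot product, written the way shader code often 'vectorizes' it."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

# On GCN this lowers to roughly one multiply plus two multiply-accumulates,
# exactly the scalar sequence below -- the vector form saved nothing:
def dot3_scalarized(a, b):
    acc = a[0] * b[0]        # v_mul_f32
    acc += a[1] * b[1]       # v_mac_f32 (multiply-accumulate)
    acc += a[2] * b[2]       # v_mac_f32
    return acc

assert dot3((1, 2, 3), (4, 5, 6)) == dot3_scalarized((1, 2, 3), (4, 5, 6)) == 32
```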
Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over the execution speed of one individual operation compared to another, at various precisions, and which ones to avoid (protip: integer divide). Also, fused multiply-add is awesome.
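Two of those per-op tips, sketched in Python just to show the semantics (the hardware versions are single instructions, which Python can only emulate):

```python
# Fused multiply-add: a*b + c in a single hardware instruction with one
# rounding step (v_mad_f32 / v_fma_f32 on GCN). Emulated here for shape only:
def mad(a, b, c):
    return a * b + c

# Integer division is notoriously slow on GPUs; divides by powers of two
# should become shifts, whether by the compiler or by hand:
x = 1000
assert x // 4 == x >> 2    # same result, far cheaper in hardware
print(mad(2.0, 3.0, 1.0))  # 7.0
```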
I know I learned.
As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially alongside DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and, in terms of Intel vs. AMD, it might help). Honestly, though, I believe that trends like this presentation will prove more significant... even if behind the scenes. Of course, developers were always having these discussions, but now console developers will probably be talking about only one architecture - and that is a lot of people talking about very few things.
This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.
Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM | Scott Michaud
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd
So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions (because OpenGL is all about its extensions), a handful of known tricks can reduce driver overhead by up to ten-fold and increase performance by up to fifteen-fold. The talk is very graphics-developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 promise.
The 130-slide presentation is broken into a few sections, with each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a given function call. The main point that they wanted to drive home (they clearly repeated the slide three times, in three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
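To give a flavor of why tricks like batched submission help, here is a toy cost model in Python (not OpenGL code, and the constants are invented) showing how a single multi-draw-style submission amortizes a fixed per-call cost:

```python
PER_CALL_OVERHEAD = 5e-6   # invented: fixed driver validation cost per call
PER_DRAW_WORK = 1e-7       # invented: cost of processing one draw's data

def naive(n_draws):
    """One API call per draw: the fixed overhead is paid n times."""
    return n_draws * (PER_CALL_OVERHEAD + PER_DRAW_WORK)

def batched(n_draws):
    """One call submits every draw (the multi-draw-indirect idea):
    the fixed overhead is paid once and amortized across the batch."""
    return PER_CALL_OVERHEAD + n_draws * PER_DRAW_WORK

n = 10_000
print(f"naive:   {naive(n) * 1e3:6.2f} ms of modeled driver time")    #  51.00
print(f"batched: {batched(n) * 1e3:6.2f} ms of modeled driver time")  #   1.01
```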
The page also hosts a keynote from the recent Steam Dev Days.
That said, one advantage I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to isolating the legacy code and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they are.
Either way, the performance figures do not lie.