Subject: General Tech | June 7, 2017 - 02:35 AM | Tim Verry
Tagged: msi, SFF, barebones, nuc, kaby lake, Intel, Optane, computex
MSI recently introduced a new member of its Cubi small form factor barebones PC lineup. The Cubi 3 is a fanless PC that is built around Intel’s Kaby Lake-U processors and will arrive sometime this fall.
The Cubi 3 is a bit larger than its predecessors, but the larger enclosure allowed MSI to achieve a fanless design for processors up to a (U-series) Core i7. The SFF PC sports a brushed aluminum case that shows off the top of the CPU heatsink through vents that run around the top edge of the case. Two flat antennas for Wi-Fi and Bluetooth are integrated into the left and right sides of the case.
FanlessTech reports that the MSI Cubi 3 will sport 15W Kaby Lake-U processors ranging from low-end Celerons up to Core i7 models. These are dual-core parts with Hyper-Threading (2c/4t), 3 MB or 4 MB of L3 cache, and either HD (615 or 620) or Iris Plus (640 or 650) integrated graphics. The processor is paired with two DDR4 SO-DIMM slots for up to 32 GB of 2133 MHz memory, an M.2 2280 SSD (there is even Intel Optane support), and a single 2.5” drive.
The Cubi 3 has an audio jack and two USB 3.0 ports up front, and what appears to be two USB 2.0 ports on the left side. Rear I/O includes one HDMI, one DisplayPort, two more USB 3.0, two Gigabit Ethernet, two COM ports, and one power jack for the 65W AC power adapter.
There is no word on pricing yet, but it is slated to begin production in August with availability this fall.
It is always nice to see more competition in this niche fanless SFF space, and the little box would not look out of place on a desk or even in the living room. What are your thoughts?
Subject: General Tech | June 6, 2017 - 08:48 PM | Scott Michaud
Tagged: valve, steam, pc gaming
As of today, June 6th, Valve has closed their Greenlight program. New submissions will not be accepted and voting has been disabled. Next week, starting on June 13th, Valve will open Steam Direct, which allows anyone to put their game on the platform for a deposit of $100 per title, which will be refunded once the title makes $1,000 in sales. Valve performs a light amount of testing on each game it receives, so it makes sense to have something that keeps them from drowning when the floodgates open, and it’s nice that they refund the deposit once sales are high enough that their typical fees cover their expenses, rather than double-dipping.
There is still some doubt floating around the net, though... especially regarding developers from impoverished nations. As a Canadian, it’s by no means unreasonable to spend around a hundred dollars, plus or minus the exchange rate of the year, to put a game representing years of work onto a gigantic distribution platform. That doesn’t hold true everywhere. At the same time, Valve does have a measurable cost per submission, so, if they lower the barrier below that, it would be at their expense. It would also be the right thing to do in some cases. Either way, that’s just my unsolicited two cents.
Steam Direct opens on June 13th.
Logitech G413 Mechanical Gaming Keyboard
The rise in popularity of mechanical gaming keyboards has been accompanied by the spread of RGB backlighting. But RGBs, which often include intricate control systems and software, can significantly raise the price of an already expensive peripheral. There are many cheaper non-backlit mechanical keyboards out there, but they are often focused on typing, and lack the design and features that are unique to the gaming keyboard market.
Gamers on a budget, or those who simply dislike fancy RGB lights, are therefore faced with a relative dearth of options, and it's exactly this market segment that Logitech is targeting with its G413 Mechanical Gaming Keyboard.
Subject: General Tech | June 6, 2017 - 02:27 AM | Scott Michaud
Tagged: skype, microsoft
Microsoft has just announced that they will be retiring several Skype apps in about a month’s time (July 1st). The affected platforms are Windows Phone 8, Windows Phone 8.1, Messaging for Windows 10 Mobile, Windows RT, and Skype apps for TV. It’s important to note that Skype for Windows Phone still works, although it requires the Windows 10 Mobile Anniversary Update or later. This was originally announced last year, but no date was given at the time (just "in the coming months").
Some sites are noting a workaround for affected users: Skype for Web. Unfortunately, this is probably not a viable option in most circumstances. Specifically, Skype for Web does not officially support mobile browsers, which means that Windows RT users might be in luck, but every other affected device is without options come July 1st.
Subject: General Tech | June 6, 2017 - 02:07 AM | Scott Michaud
Tagged: microsoft, windows, windows 10
The Verge is reporting on an allegedly leaked slide from Microsoft that announces a new edition of Windows 10 Pro. It is given the placeholder name “Windows 10 Pro for Workstation PCs” and it has four advertised features: Workstation mode, ReFS, SMBDirect, and expanded hardware support (up to four CPUs and up to 6TB of RAM).
Image Credit: GrandMofongo (Twitter)
If this rumor is true, I don’t believe that it will behave like Windows 10 Enterprise. Because it unlocks the ability to address more RAM and CPU sockets, I doubt that users would be able to switch between Windows 10 Pro and “Windows 10 Pro for Workstation PCs” with just a no-reboot login to an Azure Active Directory. This is just speculation, of course, and speculation on a rumor at that.
The Workstation mode is kind-of interesting, though. The Windows 10 Creators Update introduced Game Mode, which allowed games to be prioritized over other software for higher performance (although it hasn’t been a hit so far). Last month, they also announced power management features to throttle background apps, but only when running on battery power. It makes sense that Microsoft would apply the same concepts wherever it would be beneficial, whether that’s optimizing for performance or efficiency for any given workload.
It does seem like an odd headlining feature for a new edition, which I’d assume requires an up-sell over the typical Windows 10 Pro SKU, when Microsoft hasn’t yet demonstrated a clear win for Game Mode. What do you all think?
Subject: General Tech | June 5, 2017 - 04:58 PM | Scott Michaud
Tagged: mozilla, valve, steamvr, webvr, apple, macos
At WWDC, Valve and HTC announced that their SteamVR platform would be arriving for macOS. This means that the HTC Vive can now be targeted by games that ship for that operating system, which probably means that game engines, like Unreal Engine 4 and Unity, will add support soon. One of the first out of the gate, however, is Mozilla with WebVR for Firefox Nightly on macOS. Combine the two announcements, and you can use the HTC Vive to create and browse WebVR content on Apple desktops and laptops that have high-enough performance, without rebooting into a different OS.
Speaking of which, Apple also announced a Thunderbolt 3 enclosure with an AMD Radeon RX 580 and a USB-C hub. Alternatively, some of the new iMacs have Radeon graphics in them, with the new 27-inch having up to an RX 580. You can check out all of these announcements in Jim’s post.
Subject: General Tech | June 5, 2017 - 04:13 PM | Jim Tanous
Tagged: wwdc, imac pro, imac, apple, all-in-one
In a product-packed WWDC keynote Monday afternoon, Apple announced significant hardware updates to its all-in-one iMac desktop line. After letting the product line go without updates since late 2015, Apple is finally bringing Kaby Lake to its standard iMac models and, as rumored, will be launching a new high-end "iMac Pro" model in December.
The now "normal" line of iMacs received a range of expected feature updates, including USB-C/Thunderbolt 3 support, and new discrete GPU options from AMD.
The 21.5-inch 4K iMacs will be configurable with Radeon Pro 555 and 560 GPUs with up to 4GB of VRAM, while those opting for the 27-inch 5K iMac will be able to choose from the Radeon Pro 570, 575, or 580 with up to 8GB of VRAM.
The Radeon Pro 580, coupled with software and API improvements coming as part of the next version of macOS, "High Sierra" (no, seriously), was specifically called out as being ready to power a new era of VR experiences and content creation on the Mac, thanks to Apple partnerships with Valve (Steam VR), Unity, and Epic (Unreal Engine 4).
Other new features available on the iMac include higher official RAM limits (32GB for the 21.5-inch model and 64GB for the 27-inch), faster NVMe flash storage (up to 2TB capacities), two Thunderbolt 3 ports (which will support Apple's new external GPU initiative), and improved displays (higher maximum brightness, 10-bit dithering, and greater color reproduction).
The starting price for the new iMacs ranges from $1,099 to $1,799 and they're available for order today at Apple's website.
By far the more interesting Mac-related announcement from today's keynote is the new iMac Pro. Although it shares the same basic design as its "non-Pro" counterparts, it features an improved dual fan cooling system that Apple claims is able to accommodate much higher end hardware than has previously been available in an iMac.
This includes Xeon CPUs ranging from 8 to 18 cores, up to 128GB of 2666MHz DDR4 ECC memory, up to 4TB of flash storage that Apple rates at a speed of 3GB/s, graphics options powered by AMD's upcoming Vega platform, and, to power it all, a 500 watt power supply.
The new iMac Pro will also include four USB-C/Thunderbolt 3 ports (compared to just two on the non-Pro models), as well as 10Gb Ethernet (NBase-T), making it not only the most powerful iMac, but also the most powerful Mac yet, as Apple continues to let its Mac Pro line languish in the midst of future promised updates.
The iMac Pro's hardware is already quite pricey before you factor in Apple's 5K display, design, and "Apple Tax," so those familiar with the company won't be shocked to learn that this new flagship Mac will start at $5,000 when it launches this December.
Subject: General Tech | June 5, 2017 - 12:41 PM | Jeremy Hellstrom
Tagged: IBM, global foundries, Samsung, 5nm, 3nm, euvl, GAAFET
Extreme ultraviolet lithography (EUV) has long been the hope for shrinking process sizes below today’s nodes, but it had never been used to produce a working 5nm chip, until now. IBM, Samsung, and GLOBALFOUNDRIES have succeeded in producing such a chip using IBM’s gate-all-around transistors, known as GAAFETs, which will likely replace the tri-gate FinFETs in use today. A GAAFET resembles a FinFET rotated 90 degrees so that the channels run horizontally, stacked three layers high with gates filling in the gaps, hence the name.
Density goes up considerably: this process fits 30 billion transistors into a 50 mm² chip, 50% more than the previous best commercial process. Performance can be increased by 40% at the same power as current chips, or the same performance can be delivered while consuming 75% less power. Ars Technica delves into the technology required to make GAAFETs, and more of its potential, in their article.
"IBM, working with Samsung and GlobalFoundries, has unveiled the world's first 5nm silicon chip. Beyond the usual power, performance, and density improvement from moving to smaller transistors, the 5nm IBM chip is notable for being one of the first to use horizontal gate-all-around (GAA) transistors, and the first real use of extreme ultraviolet (EUV) lithography."
Here is some more Tech News from around the web:
- Microsoft might be planning a Workstation edition of Windows 10 Pro @ The Inquirer
- Whoops! Microsoft accidentally lets out a mobile-'bricking' OS update @ The Register
- Futuremark PCMark 10 @ techPowerUp
- Google to give 6 months' warning for 2018 Chrome adblockalypse – report @ The Register
- Computex 2017: Gigabyte's latest and greatest gear @ The Tech Report
- The AMD Computex 2017 Press Conference Revealed! @ TechARP
- The Computex Taipei 2017 Live Coverage (Day 4) @ TechARP
A Data Format for Whole 3D Scenes
The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.
glTF is a format to deliver 3D content, especially full scenes, in a compact and quick-loading data structure. These features differentiate glTF from other 3D formats, like Autodesk’s FBX and even the Khronos Group’s Collada, which are more like intermediate formats between tools, such as 3D editing software (ex: Maya and Blender) and game engines. The Khronos Group doesn’t see a competing format for final scenes that are designed to be ingested directly, quick and small.
glTF 2.0 makes several important changes.
The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define their materials, which has a few advantages.
First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.
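To make the metallic-roughness workflow concrete, here is a minimal sketch of what a glTF 2.0 material looks like as JSON data. The field names (`pbrMetallicRoughness`, `baseColorFactor`, `metallicFactor`, `roughnessFactor`) come from the glTF 2.0 specification; the material name and the values themselves are illustrative, not from any real asset.

```python
import json

# A minimal glTF 2.0 material using the core metallic-roughness workflow.
# "baseColorFactor" is RGBA; its alpha channel carries transparency, as
# noted above. Values here are illustrative.
material = {
    "name": "scuffed_gold",
    "pbrMetallicRoughness": {
        "baseColorFactor": [1.0, 0.85, 0.4, 1.0],  # RGBA base color
        "metallicFactor": 1.0,     # fully metallic
        "roughnessFactor": 0.35,   # moderately scuffed surface
    },
}

# glTF's JSON payload can be produced and consumed with any JSON library,
# which is part of why the format no longer needs to dictate a shader.
payload = json.dumps({"materials": [material]})
parsed = json.loads(payload)
print(parsed["materials"][0]["pbrMetallicRoughness"]["roughnessFactor"])
```

Because the material is just a handful of parameters, a renderer in OpenGL, Vulkan, DirectX, or Metal can map it onto whatever shader implements its PBR model.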
The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).
A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, it can be placed in any environment and it will behave as expected. Noticeable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. PBR also simplifies asset creation. Some applications, like Substance Painter and Quixel, have artists stencil materials onto their geometry, like gold, rusted iron, and scuffed plastic, and automatically generate the appropriate textures. It also aligns well with deferred rendering (see below), which performs lighting as a post-process step and thus skips pixels (fragments) that are overwritten.
PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.
glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are based on skeletal animations. This method works by binding vertexes to bones, and moving, rotating, and scaling a hierarchy of joints. This works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, allow the artist to directly control individual vertices between defined states. This is often demonstrated with a facial animation, interpolating between smiles and frowns, but, in an actual game, this is often approximated with skeletal animations (for performance reasons). Regardless, glTF 2.0 now supports morph targets, too, letting the artists make the choice that best suits their content.
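The math behind morph targets is simple: each target stores per-vertex displacements from the base mesh, and the renderer blends them with animated weights (rendered vertex = base + Σ weightᵢ × displacementᵢ). Here is a toy sketch of that blend, with hypothetical “smile”/“frown” targets and 1D positions for brevity:

```python
# Toy morph-target blend: vertex = base + sum(weight_i * displacement_i).
# Target names and values are hypothetical, just to show the mechanics.
base = [0.0, 1.0, 2.0]           # base mesh vertex positions (1D for brevity)
targets = {
    "smile": [0.5, 0.0, -0.5],   # per-vertex displacement from the base
    "frown": [-0.5, 0.0, 0.5],
}

def morph(base, targets, weights):
    """Blend weighted morph-target displacements onto the base positions."""
    out = list(base)
    for name, weight in weights.items():
        for i, displacement in enumerate(targets[name]):
            out[i] += weight * displacement
    return out

# Halfway into a smile:
print(morph(base, targets, {"smile": 0.5}))  # [0.25, 1.0, 1.75]
```

Animating the weights over time interpolates smoothly between the defined states, which is exactly the facial-animation use case described above.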
Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they have anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to be embodied in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where the performance optimizations finally ended up. Basically, no single change made a big impact; it was the result of many tiny changes that add up.
Also, the binary version of glTF is now a core feature in glTF 2.0.
The slide looks at the potential future of glTF, after 2.0.
Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.
Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported by the editor), it will be decompressed and left in that state on the GPU. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to explicitly call it, and the documentation says it’s lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, having a format that circumvents all of that would be nice.
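The cost of leaving a texture decompressed is easy to quantify. Uncompressed RGBA8 uses 4 bytes per pixel, while block-compressed formats like DXT1 (8 bytes per 4×4 block) and DXT5 (16 bytes per 4×4 block) use far less. A quick back-of-the-envelope sketch:

```python
# Rough GPU memory footprint of a texture kept uncompressed (RGBA8)
# versus in a block-compressed format (DXT1/DXT5), ignoring mipmaps.
def texture_bytes(width, height, bytes_per_pixel):
    return int(width * height * bytes_per_pixel)

w, h = 2048, 2048
rgba8 = texture_bytes(w, h, 4)    # 4 bytes/pixel uncompressed
dxt1 = texture_bytes(w, h, 0.5)   # 8 bytes per 4x4 block = 0.5 bytes/pixel
dxt5 = texture_bytes(w, h, 1)     # 16 bytes per 4x4 block = 1 byte/pixel

print(rgba8 // dxt1)  # uncompressed is 8x larger than DXT1
print(rgba8 // dxt5)  # and 4x larger than DXT5
```

An 8x difference per texture, across the hundreds of textures in a scene, is why a transmission format that stays compressed on the GPU is on the roadmap.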
Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.
Subject: General Tech | June 2, 2017 - 04:55 PM | Ryan Shrout
Tagged: nvidia, shield, SHIELD TV, plex, plex pass
Yesterday I posted a news blurb about the update to Plex that brought Live TV viewing and enhanced DVR capability to the widely used and very popular media software package. In that story I mentioned that the NVIDIA SHIELD (and all Android TV systems) was among the first in the rollout, capable of not only serving Live TV but also streaming and viewing it. Yes, the NVIDIA SHIELD continues to be one of the most interesting parts of the cord-cutting economy, with its balance of hardware performance, software improvements, and cost.
Along with the Plex software update, NVIDIA began pushing out its own update yesterday, Experience Upgrade 5.2, starting with SHIELD Preview Program members. This update brings a couple of important changes that make the Plex Live TV rollout much more interesting. First, the SHIELD now supports a wider array of TV tuners, including direct-attached USB TV tuners. Here is the updated list of supported hardware:
- HDHomeRun Network Tuners:
- Connect – Dual tuner, Base model
- Extend – Dual tuner, Converts MPEG2 to H.264 for lower bandwidth and size requirements
- Prime – Requires cable subscription and a CableCARD
- Hauppauge Dual USB Tuners:
- WinTV-dual HD 1595 (NTSC) – US/Canada
- WinTV-dual HD 1590 (DVB-T/T2) – UK/EU
- Single USB Tuners – Not recommended due to single tuner capability
- AVerMedia AVerTV Volar Hybrid Q (H837) for US/Canada
NVIDIA claims more tuners are on the way soon, so we’ll keep an eye out for updates.
The second update allows SHIELD to write to network attached storage devices (NAS). Previously, the Android TV box could only mount them as read-only partitions, even in Plex, making them useless for recording live TV via the Plex DVR. With the 5.2 release you can now direct write to NAS hardware, allowing the SHIELD to store copies of recorded TV shows and movies in a location that makes sense. If you have a non-hard drive SHIELD unit, this is a great feature, and even if you have the 500GB model, this easily expands usable storage with hardware you may already own.
Also as a part of the update are more general tweaks and improvements including “network storage directory and connectivity enhancements, Wi-Fi performance improvements, and experience enhancements for SHIELD remote and SHIELD controller.”
NVIDIA is celebrating the release of this Plex update by bundling a 6-month Plex Pass subscription with the purchase of a new SHIELD TV. That’s a $30 value, and a Plex Pass is required to take advantage of Live TV. Users who already own a SHIELD will have to shell out $5/mo for the premium Plex offering (worth it for sure, in my view) to try out the Live TV feature.