Subject: General Tech | October 14, 2014 - 06:28 PM | Jeremy Hellstrom
Tagged: predix, Cisco, Intel, GE, verizon, Privacy, security
GE's Predix asset management platform has been in use for a while now, ever since the company realized it ranks among the top 20 largest software developers on the planet. GE found that by networking the machines in its factories, as well as the products it has shipped to customers that are in active use, it could increase the efficiency of both its factories and its products. The target was a 1% improvement, which at the scale of these industries can equate to billions of dollars, and in many cases GE saw exactly what it had hoped for.
Now Cisco and Intel have signed up to use the Predix platform in pursuit of the same results, applying it to the cloud and edge devices as well as the routers and switches Cisco specializes in. At the very least this should enhance the ability to monitor network traffic, predict resource shortages, and handle outages, with a good possibility of a small increase in performance and efficiency across the board. This is good news for those who currently deal with the cloud, but it is worth noting that you will be offering up your company's metrics to Predix, and you should be aware of any security concerns that integration with another system may raise. You could, however, argue that once you have moved to the cloud this is already happening.
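The "1% equals billions" claim is easy to sanity-check. A minimal sketch, using made-up figures (the operating cost below is an assumption for illustration, not a GE number):

```python
# Rough sanity check of the "1% equals billions" claim.
# The operating cost is a hypothetical figure, for illustration only.

annual_operating_cost = 200e9   # assumed industry-wide operating cost, USD
efficiency_gain = 0.01          # the 1% improvement GE was targeting

savings = annual_operating_cost * efficiency_gain
print(f"1% of ${annual_operating_cost / 1e9:.0f}B is ${savings / 1e9:.1f}B per year")
```

At that assumed scale, even a single percentage point of efficiency is a multi-billion-dollar annual figure, which is why the target sounds modest but is not.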
"GE, Intel, Cisco, and Verizon have announced a big data deal to connect Predix — GE’s software platform — to machines, systems, and edge devices regardless of manufacturer."
Here is some more Tech News from around the web:
- Flexible FinFETs work at high temperatures @ Nanotechweb
- Firefox 33 Arrives With OpenH264 Support @ Slashdot
- Intel 'underestimates error bounds by 1.3 QUINTILLION' @ The Register
- Linux Foundation announces Dronecode alliance for open source Drone ware @ The Inquirer
- NETGEAR AC750 WiFi Extender @ HardwareHeaven
- Apotop Wi-Copy @ Phoronix
Subject: General Tech | October 3, 2014 - 03:04 PM | Jeremy Hellstrom
Tagged: amd, Intel, Cherry Trail, Nolan, Amur
As usual, neither AMD nor Intel had any comment to pass on to DigiTimes about processors they have yet to release, but chances are this story is fairly accurate. In March we should start hearing more about Cherry Trail, Intel's 64-bit ultramobile CPU designed for the next generation of tablets. AMD will be working on two chips: Nolan, about which we know very little apart from the fact that it will be used in tablets, and a new chip called Amur. Amur is an HSA chip designed specifically for devices running Android and Linux, and it incorporates ARM architecture, specifically the Cortex-A57. That puts it in the Seattle family, which Josh went into detail about in his article here, and makes it a rather interesting product.
"Intel's Cherry Trail CPUs will enter mass production in March 2015. Intel is also preparing the Atom Z3000 processor for the 64-bit tablet market. As for 4G chips, Intel is set to use SoFIA-series processors for the tablet market, the sources said."
Here is some more Tech News from around the web:
- Microsoft's Windows 10 Preview has permission to watch your every move @ The Inquirer
- One Windows? How does that work... and WTF is a Universal App? @ The Register
- VMWare virtually in control of Shellshock @ The Register
- IBM teams with Nvidia to launch Power Systems server based on Openpower Foundation @ The Inquirer
- Assorted Fun Linux Command Line Hacks @ Linux.com
Subject: General Tech | September 30, 2014 - 01:11 PM | Jeremy Hellstrom
Tagged: arm, internet of things, Si106x, 108x, Silicon Labs, Intel, quark
While the Internet of Things is growing at an incredible pace, the chip manufacturers competing for this new market segment are running into problems when designing chips to add to appliances. A balance needs to be found between processing power and energy savings: the goal is to design very inexpensive chips which can run on microwatts of power but still incorporate networked communication and sensors. The new Cortex-M7 is a 32-bit processor competing directly with 8- and 16-bit microcontrollers, which provide far fewer features but also consume far less power. Does a smart light bulb really need a 32-bit chip, or will a lower-cost MCU provide everything the light needs to function? Intel's Quark is in a similar position; its processing power could be huge overkill compared to what an IoT product actually needs. The Register makes a good observation in this article: perhaps pairing a Cortex-M0 with an M4 or M7, for when the application requires the extra horsepower, is a good way for ARM to go. Meanwhile, Qualcomm's Snapdragon 600 has been adopted to run an OS for controlling robots, so don't expect this market to get any less confusing in the near future.
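To see why microwatt budgets dominate this segment, consider how long a sensor node lasts on a coin cell at different average draws. A back-of-the-envelope sketch, with assumed datasheet-style figures (the cell capacity and power levels are illustrative, not from any specific part):

```python
# Why microwatt power budgets matter: run time of a small IoT node on a
# CR2032 coin cell. All figures are assumed, typical datasheet-style values.

capacity_mah = 220          # assumed CR2032 capacity
voltage = 3.0               # nominal cell voltage
energy_joules = capacity_mah / 1000 * 3600 * voltage  # mAh -> joules

for avg_power_uw in (100, 1000, 10000):  # 100 uW MCU vs hungrier 32-bit parts
    seconds = energy_joules / (avg_power_uw * 1e-6)
    print(f"{avg_power_uw:>6} uW average draw -> {seconds / 86400:7.1f} days")
```

Under these assumptions, the difference between a 100 µW microcontroller and a 10 mW application-class chip is the difference between replacing the battery yearly and replacing it every few days, which is the trade-off the Cortex-M7 and Quark both have to answer for.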
"The Internet of Things (IoT) is growing an estimated five times more quickly than the overall embedded processing market, so it's no wonder chip suppliers are flocking to fit out connected cars, home gateways, wearables and streetlights as quickly as they can."
Here is some more Tech News from around the web:
- Microsoft's Asimov System To Monitor Users' Machines In Real Time @ Slashdot
- ARM teams with 1248 to launch Hyperweave gateway to 'IoT-enable' enterprises @ The Inquirer
- ARMs head Moonshot bodies: HP pops Applied Micro, TI chips into carts @ The Register
- Third patch brings more admin Shellshock for the battered and Bashed @ The Register
- Mining Bitcoins with Pencil and Paper @ Hack a Day
- How to Organize Your Linux File System for Clutter-Free Folders @ Linux.com
- Alien Isolation Community Preview event @ Kitguru
Subject: Graphics Cards, Processors | September 30, 2014 - 03:33 AM | Scott Michaud
Tagged: iris, Intel, core m, broadwell-y, broadwell-u, Broadwell
Intel's upcoming 14nm product line, Broadwell, is expected to have six categories of increasing performance. Broadwell-Y, later branded Core M, is part of the soldered BGA family at expected TDPs of 3.5 to 4.5W. Above this is Broadwell-U, also sold as BGA packages which require soldering by the system builder. VR-Zone China has a list of seemingly every 15W SKU in that category. 28W TDP "U" products are expected to be available in the following quarter, but are not listed.
Image Credit: VR-Zone
As for those 15W parts though, there are seventeen (17!) of them, ranging from Celeron to Core i7. While each product is dual-core, the ones that are Core i3 and up have Hyper-Threading, increasing the parallelism to four tasks simultaneously. In terms of cache, Celerons and Pentiums will have 2MB, Core i7s will have 4MB, and everything in between will have 3MB. Otherwise, the products vary on the clock frequency they were binned (bin-sorted) at, and the integrated graphics that they contain.
Image Credit: VR-Zone
These iGPUs range from "Intel HD Graphics" on the Celerons and Pentiums, to "Intel Iris Graphics 6100" on one Core i7, two Core i5s, and one Core i3. The rest pretty much alternate between Intel HD Graphics 5500 and Intel HD Graphics 6000. Maximum frequency of any given iGPU can vary within the same product, but only by about 100 MHz at the most. The exact spread is below.
- Intel HD Graphics: 300 MHz base clock, 800 MHz at load.
- Intel HD Graphics 5500: 300 MHz base clock, 850-950 MHz at load (depending on SKU).
- Intel HD Graphics 6000: 300 MHz base clock, 1000 MHz at load.
- Intel Iris Graphics 6100: 300 MHz base clock, 1000-1100 MHz at load (depending on SKU).
Unfortunately, without the number of shader units to go along with the core clock, we cannot derive a FLOP value yet. This is a very important metric for increasing resolution and shader complexity, and it would provide a relatively fair way to compare the new parts against previous offerings at higher resolutions and quality settings, especially in DirectX 12 I would assume.
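The missing calculation is simple once the shader count leaks. A sketch of the usual peak-throughput formula, where the EU count and per-EU rate below are placeholder assumptions (not leaked Broadwell specs):

```python
# The FLOP figure the article can't yet compute, as a formula sketch.
# Peak GFLOPS = shader units x FLOPs per unit per cycle x clock (GHz).
# The EU count and per-EU rate below are placeholder assumptions.

def peak_gflops(units, flops_per_unit_per_cycle, clock_ghz):
    return units * flops_per_unit_per_cycle * clock_ghz

# e.g. a hypothetical 24-EU part at 16 FLOPs/EU/cycle and a 950 MHz boost:
print(round(peak_gflops(24, 16, 0.95), 1))
```

Plug in the real unit count whenever it surfaces and the comparison against Haswell's iGPUs falls out directly; until then the clocks alone tell us very little.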
Image Credit: VR-Zone
Probably the most interesting part to me is that "Intel HD Graphics" without a number meant GT1 with Haswell. Starting with Broadwell, it has apparently been upgraded to GT2. As we can see from even the 4.5W Core M processors, Intel is taking graphics seriously. It is unclear whether their intention is to respect gaming's influence on device purchases, or if they believe that generalized GPU compute will be "a thing" very soon.
Subject: General Tech | September 29, 2014 - 03:41 AM | Scott Michaud
Tagged: Realsense 3D, realsense, kinect, Intel
RealSense is Intel's 3D camera initiative for bringing face recognition, gesture control, speech input, and augmented reality to the PC. Its closest analogy would be Microsoft's Kinect for Windows. The technology has been presented at Intel keynotes for a while now, embodied in the "Intel Perceptual Computing SDK 2013" under its "Perceptual Computing" initiative.
Since August 31st, that has been removed from their site and replaced with the Intel RealSense SDK. While the software is free, you will probably need compatible hardware to do anything useful. None is available yet, but the "Intel RealSense Developer Kit" hardware (not to be confused with the "Intel RealSense SDK", which is software) is available for reservation at Intel's website. The camera is manufactured by Creative Labs and will cost $99. Intel is also very clear that this is a developer tool and forbids its use in "mission critical applications". Basically, don't trust it with your life, or with anyone else's life or health.
The developer kit will be available for many regions: the US, Canada, much of Europe, Brazil, India, China, Taiwan, Japan, Malaysia, South Korea, New Zealand, Australia, Russia, Israel, and Singapore.
Subject: General Tech, Processors, Mobile | September 27, 2014 - 02:38 PM | Scott Michaud
Tagged: Intel, spreadtrum, rda, Rockchip, SoC
A few months ago, Intel partnered with Rockchip to develop low-cost SoCs for Android. The companies would work together on a design that could be fabricated at TSMC. This time Intel is partnering with Tsinghua Unigroup Ltd. and, unlike Rockchip, also investing in them. The deal will be up to $1.5 billion USD in exchange for a 20% share (approximately) of a division of Tsinghua.
Image Credit: Wikipedia
Intel is hoping to use this partnership to develop mobile SoCs for smart (and "feature") phones, tablets, and other devices, and to gain a significant presence in the Chinese mobile market. Tsinghua acquired Spreadtrum Communications and RDA Microelectronics within the last two years. The "holding group" that owns these divisions is apparently the specific part of Tsinghua in which Intel is investing.
Spreadtrum will produce SoCs based on Intel's "Intel Architecture". This sounds like they are referring to the 32-bit IA-32, which means that Spreadtrum would be developing 32-bit SoCs, but it is possible that they could be talking about Intel 64. These products are expected for 2H'15.
Subject: General Tech, Motherboards, Processors | September 20, 2014 - 06:51 PM | Scott Michaud
Tagged: xeon, Haswell-EP, ddr4, ddr3, Intel
Well this is interesting and, while not new, is news to me.
The upper-tier Haswell processors ushered DDR4 into enthusiast desktops and servers, but the DIMMs are quite expensive and incompatible with the DDR3 sticks your organization might have been stocking up on. Despite the memory controller being located on the processor, ASRock has a few motherboards which claim DDR3 support. ASRock, responding to AnandTech's inquiry, confirmed that this is not an error and that Intel will launch three SKUs, one eight-core, one ten-core, and one twelve-core, with a DDR3-supporting memory controller.
The three models are:
|  | E5-2629 v3 | E5-2649 v3 | E5-2669 v3 |
| --- | --- | --- | --- |
| Cores (Threads) | 8 (16) | 10 (20) | 12 (24) |
| Clock Rate | 2.4 GHz | 2.3 GHz | 2.3 GHz |
The processors themselves might not be cheap or easily attainable, though. There are rumors that Intel will require customers to purchase a minimum quantity. It might not be worth buying these processors unless you have a significant server farm (or a similar situation).
Subject: General Tech | September 19, 2014 - 02:08 AM | Scott Michaud
Tagged: asm.js, simd, sse, avx, neon, arm, Intel, x86
Over at Microsoft's Modern.IE status page, many features are listed as being developed or considered. This includes support for Mozilla-developed ASM.js and, expected to be included in ECMAScript 7th edition, SIMD instructions. This is the one that I wanted to touch on most. SIMD, implemented in hardware as SSE, AVX, NEON, and other instruction sets, performs many operations with just a few actual instructions. For browsers which support it, this could allow significant speed-ups in vector-based tasks, such as manipulating colors, vertices, and other data structures. Emscripten is in the process of integrating SIMD support, and the technology is designed to work with Web Workers, allowing SIMD-aware C and C++ code to be compiled into SIMD.js and scale to multiple cores, if available, and they probably are these days.
In short, it will be possible to store and process colors, positions, forces, and other data structures as packed, 32-bit 4-vectors, rather than as arbitrary objects whose properties must be manipulated individually. This increases computation throughput for large datasets, which should make game developers happy in particular.
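The packed 4-vector idea can be sketched without any SIMD hardware at all. A minimal plain-Python mock of the lane-wise semantics; the `Float32x4` name and `add` operation are shaped after the 2014 SIMD.js draft as an illustrative assumption, and real SIMD would execute all four lanes in one machine instruction rather than four Python additions:

```python
# A plain-Python sketch of the packed 4-vector idea behind SIMD.js.
# Real SIMD hardware updates all four lanes in one instruction; this
# class only mirrors the lane-wise semantics for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Float32x4:
    x: float
    y: float
    z: float
    w: float

    def add(self, other):
        # one conceptual "instruction" producing all four result lanes
        return Float32x4(self.x + other.x, self.y + other.y,
                         self.z + other.z, self.w + other.w)

# e.g. translating a packed xyzw position in a single vector operation:
pos = Float32x4(1.0, 2.0, 3.0, 1.0)
delta = Float32x4(0.5, 0.5, 0.5, 0.0)
print(pos.add(delta))
```

The win for a browser is that a whole array of such vectors maps straight onto SSE/AVX/NEON registers, so the per-property object manipulation described above collapses into a handful of wide instructions.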
Apparently, some level of support has been in Firefox Nightly for the last several versions. No about:config manipulation required, just call the appropriate function on window's SIMD subobject. Internet Explorer is considering it and Chromium is currently reviewing Intel's contribution.
Subject: General Tech | September 18, 2014 - 01:59 PM | Ken Addison
Tagged: windows 9, video, TSV, supernova, raptr, r9 390x, podcast, p3700, nvidia, Intel, idf, GTX 980, evga, ECS, ddr4, amd
PC Perspective Podcast #318 - 09/18/2014
Join us this week as we discuss GTX 980 and R9 390X Rumors, Storage News from IDF, ADATA SP610 SSDs and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:33:48
Week in Review:
News items of interest:
Hardware/Software Picks of the Week:
Allyn: Windows Server 2012 R2 Storage Spaces goodness (updated features)
Subject: General Tech, Cases and Cooling, Systems, Shows and Expos | September 12, 2014 - 02:20 PM | Scott Michaud
Tagged: idf, idf 2014, nuc, Intel, SFF, small form factor
A few years ago, Intel introduced the NUC line of small form factor PCs. At this year's IDF, they announced plans for even smaller and cheaper specifications, intended for OEMs to install Windows, Linux, Android, and Chrome OS on. This initiative is not yet named, but will consist of mostly soldered components, leaving basically just the wireless adapters user-replaceable, in contrast to the more user-serviceable NUC.
Image Credit: Liliputing
Being the owner of Moore's Law, they just couldn't help but fit it to some type of exponential curve. While it is with respect to generation, not time, Intel expects the new, currently unnamed form factor to halve both the volume (size) and bill of materials (BOM) cost of the NUC. They then said that the generation after that ("Future SFF") will halve the BOM cost again, to a quarter of the NUC's.
What do our readers think? Would you be willing to give up socketed components for smaller and cheaper devices in this category or does this just become indistinguishable from mobile devices (which we already know can be cheap and packed into small spaces)?