Subject: Mobile | February 21, 2016 - 01:00 PM | Sebastian Peak
Tagged: VIBE K5 Plus, VIBE K5, Snapdragon 616, Snapdragon 415, smartphone, qualcomm, MWC 2016, MWC, Lenovo, Android
Lenovo has announced a new pair of smartphones in their VIBE series, and these offer very impressive specs considering the asking price.
The VIBE K5 will retail for $129, with the K5 Plus slightly higher at $149. What does this get you? Both are 5-inch devices, with a modest 1280x720 resolution on the standard K5, or FHD 1920x1080 on the K5 Plus. The phones are both powered by Qualcomm SoCs, with an octa-core Snapdragon 415 (up to 1.4 GHz) in the K5 and the faster octa-core Snapdragon 616 (up to 1.7 GHz) in the K5 Plus.
Here’s a look at the specifications for these phones:
- Screen: 5.0” HD (1280x720) display (K5) or IPS Full HD (1920x1080) (K5 Plus)
- Processor: Qualcomm Snapdragon 415 octa-core processor (K5) or Snapdragon 616 octa-core processor (K5 Plus)
- Storage: 2GB LPDDR3 RAM | 16GB eMCP built-in storage | up to 32GB microSD expandable storage support
- Graphics: Adreno 405: up to 550MHz 3D graphics accelerator
- Camera: Rear: 13MP with 5-piece lens and FHD video recording, Front: 5MP fixed-focus with 4-piece lens
- Connectivity: Dual SIM slots with 4G LTE connectivity + BT 4.1; WLAN: Wi-Fi 802.11 b/g/n, Wi-Fi hotspot
- Battery: 2750mAh interchangeable battery
- Audio: 2 x speakers, 2 x mics, 3.5 mm audio jack, Dolby Atmos
- Thickness: 8.2 mm (.32 in)
- Weight: 142 g (5 oz)
- OS: Android 5.1 Lollipop
On paper these smartphones present a compelling value reminiscent of the ASUS ZenFone 2, with the K5 Plus easily the better bargain thanks to its 1920x1080 IPS display and octa-core processor for $149. We’ll have to wait to pass judgment until the UI performance and camera have been tested, but these new VIBE K5 phones certainly look like a promising option.
The VIBE K5 and K5 Plus will be available in March.
Subject: Mobile | February 21, 2016 - 12:56 PM | Ryan Shrout
Tagged: MWC, MWC 2016, qualcomm, snapdragon, snapdragon wear
Earlier this month, Qualcomm announced the creation of the Snapdragon Wear platform and the Snapdragon Wear 2100 SoC, the very first in a new family of products built to address the consumer wearables market. Even though the Snapdragon 400 series of processors had already found its way into a large majority (65% according to Qualcomm) of all currently shipping Android Wear watches, Qualcomm hopes that the improvements in the Snapdragon Wear 2100 will further the company's market share and improve on the experiences that users have with wearable products.
Snapdragon Wear 2100 offers several advantages over the Snapdragon 400 series of SoCs:
Utilizing Qualcomm Technologies’ expertise in connectivity and compute, the Snapdragon Wear platform consists of a full suite of silicon, software, support tools, and reference designs to allow mobile, fashion, and sports customers to bring a diverse range of full-featured wearables to customers quickly. Available in both tethered (Bluetooth® and Wi-Fi®) and connected (4G/LTE and 3G) versions, Snapdragon Wear 2100 innovates along four wearables core vectors:
- Smaller Size – 30 percent smaller than the popular Snapdragon 400, Snapdragon Wear 2100 can help enable new, thinner, sleeker designs
- Lower Power – 25 percent lower power than the Snapdragon 400 across both tethered and connected use cases, allowing for longer day of use battery life
- Smarter Sensors – With an integrated, ultra-low power sensor hub, Snapdragon Wear 2100 enables richer algorithms with greater accuracy than the Snapdragon 400
- Always Connected – Next-generation LTE modem with integrated GNSS, along with low power Wi-Fi and Bluetooth delivers an always connected experience
There is no direct mention of comparative performance though, something I am looking to get answered this week.
This week's announcement from Qualcomm is the addition of three new partners for the Snapdragon Wear platform, on top of the launch partner LG. The new names might not be household brands, but they represent a strong growth segment for Qualcomm as more vendors enter the wearables market through ODMs.
- Borqs – A global leader in software and products for IoT providing customizable, differentiated and scalable Android-based smart connected devices and cloud service solutions, Borqs is offering connected (3G/4G) and tethered (Wi-Fi®/Bluetooth®) smartwatch and kid watch reference designs based on Snapdragon Wear 2100.
- Compal – A global manufacturer of notebook PCs, smartphone, tablet and display products and smart wearable devices, Compal is delivering reference designs and device production based on Snapdragon Wear 2100 supporting both Android Wear and Android operating systems and targeting connected (3G/4G) and tethered (Wi-Fi/Bluetooth) use cases.
- Infomark – An early innovator in the emerging kid watch segment, where the company has previously launched two generations of products (JooN1, JooN2) based on Qualcomm Technologies, Infomark is offering a reference design based on Snapdragon Wear 2100 targeting kid and elderly watch segments.
I should be getting hands-on with hardware built on the Snapdragon Wear 2100 SoC from LG and these three new partners this week while at Mobile World Congress 2016, so stay tuned for more coverage!
Subject: Mobile | February 21, 2016 - 12:18 PM | Ryan Shrout
Tagged: MWC, MWC 2016, qualcomm, vulkan, snapdragon, snapdragon 820, adreno 530
As we prepare for the onslaught of new mobile devices and technologies being announced at Mobile World Congress in Barcelona, the low-level Vulkan API begins its campaign to take hold in the PC and mobile spaces, superseding the OpenGL standard that exists today in hopes of providing more efficient use of compute resources across the industry.
Qualcomm announced official support for the Vulkan API on its Adreno 530 GPU and the Snapdragon 820 processor. Vulkan API support will also be coming to other upcoming, unannounced Adreno 5xx series GPUs and to currently shipping Adreno 4xx GPUs, leaving us to wonder whether Vulkan support will find its way into handsets already on the market.
As Qualcomm points out in its press release on the news, the Vulkan API will bring some important and groundbreaking changes to the mobile space.
- Explicit control over GPU operation, with minimized driver overhead for improved performance;
- Multi-threading-friendly architecture to increase overall system performance;
- Optimal API design that can be used in a wide variety of devices including mobile, desktop, consoles, and embedded platforms;
- Use of Khronos’ new SPIR-V intermediate representation for shading language flexibility and more predictable implementation behavior;
- Extensible layered architecture that enables innovative tools without impacting production performance while validating, debugging, and profiling;
- Simple drivers for low-overhead efficiency and cross vendor portability.
Vulkan API support is being added to Qualcomm's development tools suite this week as well.
“We are pleased to have contributed to the definition of Khronos’ new Vulkan API. Qualcomm Technologies will be among the first to ship conformant Vulkan drivers, starting with Snapdragon 820’s embedded Adreno 530 GPU, and subsequently with our Adreno 4xx series GPUs. Vulkan enables the next generation of graphics performance by adding multi-threaded command buffer generation and explicit control of advanced graphics capabilities within Adreno GPUs,” said Micah Knapp, director of product management, Qualcomm Technologies, Inc. “In the coming days, we anticipate supporting Vulkan in the Snapdragon developer tools including Snapdragon Profiler and the Adreno SDK, to help application developers take advantage of this outstanding new API when creating graphics and compute applications for smartphones, tablets, VR HMDs and a variety of other types of devices that use Snapdragon processors.”
A quick look at the Khronos page listing companies with Vulkan conformant drivers shows Qualcomm on the short list, meaning it has provided the standards body with a driver that has passed its first level of certification. With Vulkan's emphasis on efficiency, mobile could be the most important place the API ends up, and Qualcomm's early integration matters. In a technology field where battery life and performance must balance unlike anywhere else, this new approach to graphics and compute could push mobile devices forward quickly.
It's Easier to Be Convincing than Correct
This is a difficult topic to discuss. Some perspectives assume that law enforcement has terrible, Orwellian intentions. Meanwhile, law enforcement officials, with genuinely good intentions, don't understand that the road to Hell is paved with those. Bad things are much more likely to happen when human flaws are justified away, which is easy to do when your job is preventing mass death and destruction. Human beings tend to use large pools of evidence to validate their assumptions, often without realizing it, rather than to discover truth.
Ever notice how essays can always find sources, regardless of thesis? With increasing amounts of data, you are progressively more likely to make a convincing argument, but not necessarily a more true one. Mix in good intentions, which promotes complacency, and mistakes can happen.
But this is about Apple. Recently, the FBI demanded that Apple create a version of iOS that can be broken into by law enforcement. Apple frequently uses the term “back door,” while the government prefers other terminology. Really, words are words and the only thing that matters is what they describe -- and they describe a mechanism to compromise the device's security in some way.
This introduces several problems.
The common line that I hear is, “I don't care, because I have nothing to hide.” Well... that's wrong in a few ways. First, having nothing to hide is irrelevant if the person who wants access to your data assumes that you have something you want to hide, and is looking for evidence to convince themselves that they're right. Second, you need to consider all the people who want access to this data. The FBI will not be the only one demanding a back door, nor even the United States as a whole. There are a whole lot of nations that trust individuals, including their own respective citizens, less than the United States does. You can expect that each of them would request a back door.
You can also expect each of them, and organized criminals, to try to break into the others'.
Lastly, we've been here before, and what it comes down to is criminalizing math. Encryption is just a mathematical process that is easy to perform but hard to invert. It all started because it is easy to multiply two numbers together but hard to factor the result. The only general method we know is trial division: checking every possible divisor smaller than the square root of the number. If the number is the product of two primes, you are stuck searching all those candidates for the smaller prime (the other will be greater than the square root). In the '90s, encryption beyond a certain strength was legally classified as a munition for export purposes. That may sound ridiculous, and there would be good reason for that feeling. Either way, it changed; as a result, online banks and retailers thrived.
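That asymmetry can be sketched in a few lines of Python. The primes below are arbitrary small examples; real RSA moduli are hundreds of digits long, which is what makes trial division hopeless in practice:

```python
def trial_division_factor(n):
    """Return the smallest prime factor of n by checking every candidate
    up to sqrt(n) -- the brute-force method described above."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# Two (small, illustrative) primes standing in for key material.
p, q = 104729, 1299709
n = p * q                              # easy direction: one multiplication
assert trial_division_factor(n) == p   # hard direction: ~p divisions
```

Multiplying took one operation; recovering `p` took on the order of a hundred thousand divisions, and that gap widens exponentially as the numbers grow.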
"While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect." (From Apple's open letter to its customers.)
Good intentions lead to complacency, which is where the road to (metaphorical) Hell starts.
Caught Up to DirectX 12 in a Single Day
I'm not just talking about the specification. Members of the Khronos Group have also released compatible drivers, SDKs and tools to support them, conformance tests, and a proof-of-concept patch for Croteam's The Talos Principle. To reiterate, this is not a soft launch. The API, and its entire ecosystem, is out and ready for the public on Windows (at least 7+ at launch but a surprise Vista or XP announcement is technically possible) and several distributions of Linux. Google will provide an Android SDK in the near future.
I'm going to editorialize for the next two paragraphs. There was a concern that Vulkan would be too late. The thing is, as of today, Vulkan is now just as mature as DirectX 12. Of course, that could change at a moment's notice; we still don't know how the two APIs are being adopted behind the scenes. A few DirectX 12 titles are planned to launch in a few months, but no full, non-experimental, non-early access game currently exists. Each time I say this, someone links the Wikipedia list of DirectX 12 games. If you look at each entry, though, you'll see that all of them are either early access, awaiting an unreleased DirectX 12 patch, or using a third-party engine (like Unreal Engine 4) that only lists DirectX 12 as an experimental preview. No full, released, non-experimental DirectX 12 game exists today. Besides, if the latter counts, then you'll need to accept The Talos Principle's proof-of-concept patch, too.
But again, that could change. While today's launch speaks well to the Khronos Group and the API itself, it still needs to be adopted by third party engines, middleware, and software. These partners could, like the Khronos Group before today, be privately supporting Vulkan with the intent to flood out announcements; we won't know until they do... or don't. With the support of popular engines and frameworks, dependent software really just needs to enable it. This has not happened for DirectX 12 yet, and, now, there doesn't seem to be anything keeping it from happening for Vulkan at any moment. With the Game Developers Conference just a month away, we should soon find out.
But back to the announcement.
Vulkan-compatible drivers are launching today across multiple vendors and platforms, but I do not have a complete list. On Windows, I was told to expect drivers from NVIDIA for Windows 7, 8.x, 10 on Kepler and Maxwell GPUs. The standard is compatible with Fermi GPUs, but NVIDIA does not plan on supporting the API for those users due to its low market share. That said, they are paying attention to user feedback and they are not ruling it out, which probably means that they are keeping an open mind in case some piece of software gets popular and depends upon Vulkan. I have not heard from AMD or Intel about Vulkan drivers as of this writing, one way or the other. They could even arrive day one.
On Linux, NVIDIA, Intel, and Imagination Technologies have submitted conformant drivers.
Drivers alone do not make a hard launch, though. SDKs and tools have also arrived, including the LunarG SDK for Windows and Linux. LunarG is a company co-founded by Jens Owen, who had a previous graphics software company that was purchased by VMware. LunarG is backed by Valve, who also backed Vulkan in several other ways. The LunarG SDK helps developers validate their code, inspect what the API is doing, and otherwise debug. Even better, it is also open source, which means the community can rapidly enhance it, even though it's already in a releasable state. RenderDoc, the open-source graphics debugger from Crytek, will also add Vulkan support. (Update, Feb 16 @ 12:39pm EST: Baldur Karlsson has just emailed me to let me know that it was a personal project at Crytek, not a Crytek project in general, and their GitHub page is much more up-to-date than the linked site.)
The major downside is that Vulkan (like Mantle and DX12) isn't simple.
These APIs are verbose and very different from previous ones, which requires more effort.
Image Credit: NVIDIA
There really isn't much to say about the Vulkan launch beyond this. What graphics APIs really try to accomplish is standardizing signals that enter and leave video cards, such that the GPUs know what to do with them. For the last two decades, we've settled on an arbitrary, single, global object that you attach buffers of data to, in specific formats, and call one of a half-dozen functions to send it.
Compute APIs, like CUDA and OpenCL, decided it was more efficient to handle queues, allowing the application to write commands and send them wherever they need to go. Multiple threads can write commands, and multiple accelerators (GPUs in our case) can be targeted individually. Vulkan, like Mantle and DirectX 12, takes this metaphor and adds graphics-specific instructions to it. Moreover, GPUs can schedule memory, compute, and graphics instructions at the same time, as long as the graphics task has leftover compute and memory resources, and / or the compute task has leftover memory resources.
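As a rough illustration of that queue model (the class and method names below are invented for this sketch, not any real graphics API), multiple CPU threads can record commands and submit them to per-device queues, while each device drains its own queue independently:

```python
from queue import Queue
from threading import Thread

class Device:
    """Toy stand-in for a GPU: consumes commands from its own queue."""
    def __init__(self, name):
        self.name = name
        self.queue = Queue()
        self.executed = []

    def submit(self, command):
        self.queue.put(command)       # any CPU thread may enqueue work

    def run(self):
        while True:
            cmd = self.queue.get()
            if cmd is None:           # sentinel: no more work
                break
            self.executed.append(cmd) # "execute" the command

gpus = [Device("gpu0"), Device("gpu1")]
workers = [Thread(target=g.run) for g in gpus]
for w in workers:
    w.start()

# Two producer threads target the two devices individually, the way a
# Vulkan/DX12-style engine can record command lists in parallel.
producers = [
    Thread(target=lambda g=g: [g.submit(f"draw:{i}") for i in range(3)])
    for g in gpus
]
for p in producers:
    p.start()
for p in producers:
    p.join()
for g in gpus:
    g.submit(None)                    # signal completion
for w in workers:
    w.join()
```

The point of the sketch is the shape of the model, not the details: no single global context is shared, so producers never contend with each other, and each accelerator can be addressed explicitly.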
This is not necessarily a “better” way to do graphics programming... it's different. That said, it has the potential to be much more efficient when dealing with lots of simple tasks sent from multiple CPU threads, especially to multiple GPUs (which currently require the driver to figure out how to convert draw calls into separate workloads -- leading to simplifications like mirrored memory and splitting work by neighboring frames). Lots of simple tasks align well with video games, especially ones with many simple objects: strategy games, shooters with lots of debris, or any game with large crowds of people. As the API becomes ubiquitous, we'll see this bottleneck disappear, and games will no longer need to be designed around these limitations. It might even be used for drawing with cross-platform 2D APIs, like Qt, or even webpages, although those two examples (especially the Web) each have other, higher-priority bottlenecks. There are also other benefits to Vulkan.
The WebGL comparison is probably not as common knowledge as Khronos Group believes.
Still, Khronos Group was criticized when WebGL launched for being "too tough for Web developers".
It didn't need to be easy. Frameworks arrived and simplified everything. It's now ubiquitous.
In fact, Adobe Animate CC (the successor to Flash Pro) is now a WebGL editor (experimentally).
Open platforms are required for this to become commonplace. Engines will probably target several APIs from their internal management APIs, but you can't target users who don't fit in any bucket. Vulkan brings this capability to basically any platform, as long as it has a compute-capable GPU and a driver developer who cares.
Thankfully, it arrived before any competitor established market share.
Subject: Mobile | February 12, 2016 - 04:26 PM | Sebastian Peak
Tagged: X16 modem, qualcomm, mu-mimo, modem, LTE, Gigabit LTE, FinFET, Carrier Aggregation, 14nm
Qualcomm’s new X16 LTE Modem is the industry's first Gigabit LTE chipset to be announced, achieving speeds of up to 1 Gbps using 4x Carrier Aggregation. The X16 succeeds the recently announced X12 modem, improving on the X12's 3x Carrier Aggregation and moving from LTE CAT 12 to CAT 16 on the downlink, while retaining CAT 13 on the uplink.
"In order to make a Gigabit Class LTE modem a reality, Qualcomm added a suite of enhancements – built on a foundation of commercially-proven Carrier Aggregation technology. The Snapdragon X16 LTE modem employs sophisticated digital signal processing to pack more bits per transmission with 256-QAM, receives data on four antennas through 4x4 MIMO, and supports up to 4x Carrier Aggregation — all of which come together to achieve unprecedented download speeds."
Gigabit speeds are only possible if multiple data streams are connected to the device simultaneously, and with the new X16 modem such aggregation is performed using LTE-U and LAA.
(Image via EE Times)
What does all of this mean? Aggregation is a term you'll see a lot as we progress into the next generation of cellular data technology, and with the X16 Qualcomm is emphasizing carrier over link aggregation. Essentially Carrier Aggregation works by combining the carrier LTE data signal (licensed, high transmit power) with a shorter-range, shared spectrum (unlicensed, low transmit power) LTE signal. When the signals are combined at the device (i.e. your smartphone), significantly better throughput is possible with this larger (aggregated) data ‘pipe’.
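As a back-of-the-envelope illustration, the aggregated downlink rate is roughly the sum of each component carrier's peak rate. Every number below is hypothetical, chosen only to show how four component carriers could add up to gigabit-class throughput; real LTE rates depend on scheduling, protocol overhead, and signal conditions:

```python
def carrier_rate_mbps(bandwidth_mhz, bits_per_symbol, mimo_streams,
                      efficiency=0.625):
    """Rough peak rate for one component carrier:
    bandwidth x modulation bits x spatial streams x an efficiency
    factor standing in for real-world overhead (all values hypothetical)."""
    return bandwidth_mhz * bits_per_symbol * mimo_streams * efficiency

# Four aggregated 20 MHz carriers with 256-QAM (8 bits/symbol):
# 4x4 MIMO on two of them, 2x2 on the others -- an illustrative mix.
carriers = [(20, 8, 4), (20, 8, 4), (20, 8, 2), (20, 8, 2)]
total = sum(carrier_rate_mbps(bw, bits, streams)
            for bw, bits, streams in carriers)
# total is 1200.0 Mbps here -- comfortably gigabit-class.
```

No single 20 MHz carrier gets anywhere near a gigabit on its own; it is the aggregation of several licensed and unlicensed carriers that makes the headline figure possible.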
Qualcomm lists the four main options for unlicensed LTE deployment as follows:
- LTE-U: Based on 3GPP Rel. 12, LTE-U targets early mobile operator deployments in the USA, Korea, and India, with coexistence tests defined by the LTE-U Forum
- LAA: Defined in 3GPP Rel. 13, LAA (Licensed Assisted Access) targets deployments in Europe, Japan, & beyond.
- LWA: Defined in 3GPP Rel. 13, LWA (LTE - Wi-Fi link aggregation) targets deployments where the operator already has carrier Wi-Fi deployments.
- MulteFire: Broadens the LTE ecosystem to new deployment opportunities by operating solely in unlicensed spectrum without a licensed anchor channel
The X16 is also Qualcomm’s first modem built on a 14nm FinFET process, which Qualcomm says is highly scalable and will enable the company to evolve the modem product line “to address an even wider range of products, all the way down to power-efficient connectivity for IoT devices.”
Qualcomm has already begun sampling the X16, and expects the first commercial products in the second half of 2016.
Subject: Mobile | February 11, 2016 - 03:42 PM | Jeremy Hellstrom
Tagged: Skylake, asus zenbook, ux305ca, qhd
At 13.3" it still seems odd to use a 3200x1800 display, but that is why scaling is important, especially for aged eyes. The model of UX305CA that The Inquirer reviewed runs a Skylake-based Core m3-6Y30 clocked at 900MHz, with 8GB of RAM and a 128GB SSD; other models are available for those who want upgraded components. The Inquirer ran into a few small issues with the OS, and you cannot expect the laptop to handle demanding tasks, but it had no trouble with browsing and productivity. The battery also lasted over 9 hours in use, not bad for a device weighing 1.2kg (2.65 lbs).
"From the ports to processor to the operating system, this refresh has been subject to a rather diverse mix of changes, the biggest being the addition of a QHD+ resolution screen, despite the price staying level with the original FHD model."
Here are some more Mobile articles from around the web:
- PC Specialist Octane II Laptop @ Kitguru
- HP ENVY 15t (15t-ae100) @ Tech ARP
- Aorus X7 Pro-Sync @ Kitguru
- Huawei Mate 8 @ The Inquirer
- Google Nexus 6P @ The Inquirer
Subject: Systems, Mobile | February 9, 2016 - 10:16 AM | Sebastian Peak
Tagged: Tobii, notebook, msi, laptop, GT72S G Tobii, gaming laptop, g-sync, eye-tracking
MSI has released their GT72S G Tobii gaming notebook (first announced way back at Computex), which features NVIDIA G-Sync and eye-tracking technology that promises a more immersive gameplay experience.
“The world’s most advanced gaming laptop, the GT72S G Tobii with eye-tracking technology immerses gamers into a hands-free dimension by allowing them to switch targets in a game, select objects on the floor or even automatically pause a game by simply focusing or looking away.
Available immediately, MSI’s GT72S G Tobii will be bundled with Tom Clancy’s The Division and currently supports a variety of gaming titles, including Assassin’s Creed Syndicate, Assassin’s Creed Rogue, ArmA III, Elite Dangerous and more.”
Ryan took a look at the laptop at CES, and the video is embedded below:
So how does the eye-tracking work?
“By going through a 15-second set-up process, users can securely log into their computers using a personalized glance; highlight, select or delete items with one look; seamlessly zoom and center maps without scrolling; and even sift through Windows, folders and its applications without lifting a finger.”
The notebook boasts some impressive specs, including:
- Tobii Eye Tracking Technology
- 17.3" Full HD 1920 x 1080 IPS display
- 6th Generation Intel Core i7 6820HK (2.70 GHz)
- NVIDIA GeForce GTX 980M with 8 GB GDDR5
- 32 GB Memory
- 256 GB SSD (PCIe Gen3 x4)
- 1 TB HDD
- BD Burner
- Killer Networking
- Dimensions: 16.85" x 11.57" x 2.30"; 8.50 lbs
The GT72S G Tobii retails for $2599.99 and is available now exclusively at Newegg.com; the laptop includes a free copy of Tom Clancy’s The Division.
Subject: Mobile | February 4, 2016 - 09:39 AM | Sebastian Peak
Tagged: wi-fi, shield tablet, shield, ota update, nvidia, android 6.0
NVIDIA has pulled the Android 6.0 OTA update for the original SHIELD (pre-K1) tablet after users experienced Wi-Fi connection issues. A post on NVIDIA's official forums explains:
"We have temporarily turned off the OTA update until we understand why a few users are losing WiFi connection after updating their tablet to OTA 4.0."
(Image: Android Police)
The post is authored by Manuel Guzman of NVIDIA Customer Care, and includes a list of potential fixes:
- Reboot your tablet 2-3 times. If this fails, power cycle your tablet 3-4 times (a complete power off, not just a reboot). If this does not work, charge your tablet to 100% and try again a couple of times.
- Factory reset your tablet. Make sure you backup any important files before you perform this step.
- A couple of users reported their Wi-Fi coming back after leaving their tablet powered off for a few hours. Try leaving your tablet powered off for a few hours and then turn the device back on.
Users who still have issues connecting are asked to navigate to the Advanced Wi-Fi page on their tablet, then to "take a screenshot and email the picture to firstname.lastname@example.org".
Subject: General Tech, Processors, Mobile | January 29, 2016 - 05:28 PM | Scott Michaud
Tagged: tesla, tesla motors, amd, Jim Keller, apple
Jim Keller, a huge name in the semiconductor industry for his work at AMD and Apple, recently left AMD before the launch of the Zen architecture. This made us nervous, because when a big name leaves a company before a product launch, it could either be that their work is complete... or they're evacuating before a stink-bomb detonates and the whole room smells like rotten eggs.
It turns out a third option is possible: Elon Musk offers you a job making autonomous vehicles. Jim Keller's job title at Tesla will be Vice President of Autopilot Hardware Engineering. I could see this position being enticing, to say the least, even if you are confident in your previous employer's upcoming product stack. It doesn't mean that AMD's Zen architecture will be either good or bad, but it nullifies the earlier predictions, when Jim Keller left AMD, at least until further notice.
We don't know who approached who, or when.
Another point of note: Tesla Motors currently uses NVIDIA Tegra SoCs in its cars, and NVIDIA is (obviously) a competitor of Jim Keller's former employer, AMD. It sounds like Jim Keller is moving into a somewhat different role than he had at AMD and Apple, but it could be interesting if Tesla starts taking chip design in-house to customize silicon for its specific needs, taking responsibilities away from NVIDIA.
The first time he was at AMD, he was the lead architect of the Athlon 64 processor, and he co-authored x86-64. When he worked at Apple, he helped design the Apple A4 and A5 processors, which were the first two that Apple created in-house; the first three iPhone processors were Samsung SoCs.