MWC 2014: Lenovo YOGA Tablet 10 HD+ Announced

Subject: General Tech, Systems, Mobile, Shows and Expos | February 23, 2014 - 01:01 PM |
Tagged: tablet, MWC 14, MWC, lenovo yoga, Lenovo

At Mobile World Congress 2014, Lenovo has announced the YOGA Tablet 10 HD+. Just last month, we discussed the Yoga Tablet 8 and Yoga Tablet 10, which were introduced in October. Those tablets each had a 1280x800 display (even the 10-inch model), both sizes used the same MediaTek MT8125 SoC in their Wi-Fi models (MT8389 for 3G), and both shipped with 1GB of RAM. Performance was expected to be in the ballpark of a Tegra 3 device.

Lenovo-yoga-tablet-hand.jpg

These are all areas that get a bump in the new YOGA Tablet 10 HD+. The 10.1-inch screen is now 1080p quality, the SoC is a quad-core Qualcomm Snapdragon running at 1.8 GHz, and the RAM is doubled to 2GB. It will run Android 4.3, with an over-the-air (OTA) update to 4.4 KitKat promised at some point.

android-yoga.png

Make sure to bend at the knee and put your hands toge... oh right.

Comparing the Yoga Ultrabooks, running Windows, with the YOGA Tablets, running Android, would probably not be wise. They are very different designs. The Ultrabooks hinge with an always-attached keyboard, while the tablets have a keyboard-less stand. Rather than trying to make a keyboard comfortable in tablet usage, as the Ultrabooks do, the tablets use the small metal stand to prop up the screen. The key aspect of the cylindrical hinge is its usage as a handle and the volume it provides for battery storage. Ryan found the old versions' 18-hour rated battery life to be fairly accurate, and the new 10 HD+ is rated for the same duration (actually, with a bonus 1000 mAh over the original Tablet 10). Another benefit of the battery's location is that, if you are holding the tablet by its hinge, the battery's weight exerts very little torque on your fingers.
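As a rough back-of-the-envelope check (the mass and distances below are my own assumptions for illustration, not Lenovo's figures): torque is force times moment arm, so a battery whose mass sits a centimeter from your grip twists your fingers an order of magnitude less than the same mass out at the middle of a flat slab.

```latex
% Torque about the grip axis: \tau = r m g (all numbers assumed for illustration)
\tau_{\text{hinge}} = (0.01\,\mathrm{m})(0.2\,\mathrm{kg})(9.8\,\mathrm{m/s^2}) \approx 0.02\,\mathrm{N\,m}
\tau_{\text{slab}}  = (0.12\,\mathrm{m})(0.2\,\mathrm{kg})(9.8\,\mathrm{m/s^2}) \approx 0.24\,\mathrm{N\,m}
```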

Of course, now comes the all-important matter of pricing and availability. The Lenovo YOGA Tablet 10 HD+ will be released in April starting at $349. That is higher than the Tablet 8 and Tablet 10, at $199 and $274 respectively, but you also get more for your money.

Lenovo Press Release after the break.

Source: Lenovo

GTC 2013: eyeSight Will Use GPUs To Improve Its Gesture Recognition Software

Subject: General Tech | March 31, 2013 - 08:43 PM |
Tagged: nvidia, lenovo yoga, GTC 2013, GTC, gesture control, eyesight, ECS

During the Emerging Companies Summit at NVIDIA's GPU Technology Conference, Gideon Shmuel, CEO of Israeli company EyeSight Mobile Technologies, took the stage to discuss the future of its gesture recognition software. He also provided insight into how EyeSight plans to use graphics cards to improve and accelerate the process of identifying, and responding to, finger and hand movements, along with face detection.

GTC_ECS_EyeSight_Gideon Shmuel (2).jpg

EyeSight is a five-year-old company that has developed gesture recognition software that can be installed on existing machines (though it appears to be aimed more at OEMs than directly at consumers). It can use standard cameras, such as webcams, to get its 2D input data, then derives a relative Z-axis from proprietary algorithms. This gives EyeSight essentially 2.5D of input data and, camera resolution and frame rate permitting, allows the software to identify and track finger and hand movements. EyeSight CEO Gideon Shmuel stated at the ECS presentation that the software is currently capable of "finger-level accuracy" at 5 meters from a TV.
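EyeSight's depth algorithms are proprietary, but the general idea behind getting a relative Z value from a flat webcam image is easy to picture: an object of known physical size appears smaller as it moves away. Here is a minimal sketch of that principle; everything in it (class names, the calibration constant) is a hypothetical illustration, not EyeSight's code:

```python
# Minimal sketch: deriving a relative Z-axis from 2D detections.
# This is NOT EyeSight's algorithm -- just the general principle that an
# object of roughly known size appears smaller the farther away it is.

from dataclasses import dataclass

@dataclass
class HandDetection:
    x: float          # normalized horizontal center of the hand (0..1)
    y: float          # normalized vertical center of the hand (0..1)
    width_px: float   # apparent width of the hand in pixels

REFERENCE_WIDTH_PX = 120.0  # assumed apparent width at the calibration distance
CALIBRATION_Z = 1.0         # assumed calibration distance (arbitrary units)

def relative_depth(det: HandDetection) -> float:
    """Estimate relative distance: apparent size scales roughly as 1/distance."""
    return CALIBRATION_Z * (REFERENCE_WIDTH_PX / det.width_px)

def to_2_5d(det: HandDetection) -> tuple[float, float, float]:
    """Combine the true 2D position with the inferred relative Z."""
    return (det.x, det.y, relative_depth(det))

if __name__ == "__main__":
    near = HandDetection(x=0.5, y=0.4, width_px=240.0)  # hand close to the camera
    far = HandDetection(x=0.5, y=0.4, width_px=60.0)    # hand farther away
    print(to_2_5d(near))  # z ~ 0.5
    print(to_2_5d(far))   # z ~ 2.0
```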

GTC_ECS_EyeSight_Gideon Shmuel (5).jpg

Supported gestures include using your fingers as a mouse to point at on-screen objects, waving your hand to turn pages, scrolling, and even giving hand-signal cues.

The software is not open source, and there are no plans to move in that direction. The company has 15 patents pending on its technology, several of which it managed to file before the US Patent Office changed from First to Invent to First Inventor to File (heh, which is another article...). The software will support up to 20 million hardware devices in 2013, and EyeSight expects the number of compatible camera-packing devices to increase to as many as 3.5 billion in 2015. Other features include the ability to transparently map EyeSight input to Android apps without users needing to muck with settings, and the ability to detect faces and "emotional signals" even in low light. According to the website, SDKs are available for Windows, Linux, and Android. The software maps the gestures it recognizes to Windows shortcuts, to increase compatibility with many existing applications (so long as they support keyboard shortcuts).
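EyeSight hasn't documented exactly how it hooks into Windows, but the shortcut-mapping idea itself is simple: recognized gestures get translated into key chords that applications already understand. A hedged sketch of such a dispatcher follows; the gesture names and the key-sending backend are my assumptions, not EyeSight's API:

```python
# Hypothetical sketch of gesture -> keyboard-shortcut mapping.
# Gesture names and the injection backend are assumptions; on Windows a real
# engine would ultimately inject events via something like SendInput().

from typing import Callable

# Map recognized gestures to the shortcut any keyboard-aware app understands.
GESTURE_TO_SHORTCUT: dict[str, tuple[str, ...]] = {
    "swipe_left": ("left",),            # e.g. previous page
    "swipe_right": ("right",),          # e.g. next page
    "palm_push": ("space",),            # e.g. play/pause
    "two_finger_up": ("ctrl", "home"),  # e.g. jump to top
}

def dispatch(gesture: str, send_keys: Callable[[tuple[str, ...]], None]) -> bool:
    """Translate a recognized gesture into a key chord; return True if mapped."""
    chord = GESTURE_TO_SHORTCUT.get(gesture)
    if chord is None:
        return False  # unrecognized gesture: do nothing rather than guess
    send_keys(chord)
    return True

if __name__ == "__main__":
    # Stand-in backend that just logs; a real one would inject OS key events.
    dispatch("swipe_right", lambda chord: print("+".join(chord)))
```

The appeal of this design is that existing applications need no changes at all: anything that already responds to keyboard shortcuts picks up gesture support for free.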

GTC_ECS_EyeSight_Gideon Shmuel (10).jpg

Currently, the EyeSight software runs mostly on the CPU, but the company is investing heavily in GPU support. Moving the processing to GPUs will allow the software to run faster and more power-efficiently, especially on mobile devices (NVIDIA's Tegra platform was specifically mentioned). EyeSight's future roadmap includes using GPU acceleration to bolster the number of supported gestures, move image processing to the GPU, add velocity and vector control inputs, and incorporate a better low-light filter (which will run on the GPU). Offloading work from the CPU also optimizes power management and saves CPU resources for the OS and other applications, which is especially important on mobile devices. Gideon Shmuel also stated that he wants to see the technology used on "anything with a display," from your smartphone to your air conditioner.
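EyeSight didn't share implementation details, but the offload pattern itself is straightforward. As a toy illustration (the filter and the choice of CuPy are mine, not anything EyeSight named), here is a gamma-style low-light boost that runs identically on the CPU via NumPy or on a GPU via CuPy's NumPy-compatible API:

```python
# Toy illustration of the CPU -> GPU offload pattern for an image-processing
# step. CuPy mirrors the NumPy API, so one function can run on either device.
# Library choice and filter are illustrative assumptions, not EyeSight's code.

import numpy as np

def boost_low_light(frame, xp=np, gamma: float = 0.5):
    """Brighten dark frames with a gamma curve; xp picks numpy (CPU) or cupy (GPU)."""
    normalized = xp.asarray(frame, dtype=xp.float32) / 255.0
    boosted = normalized ** gamma          # gamma < 1 lifts shadows
    return (boosted * 255.0).astype(xp.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 64, size=(480, 640), dtype=np.uint8)  # dark frame
    cpu_result = boost_low_light(frame, xp=np)
    try:
        import cupy as cp  # only present on machines with a CUDA GPU
        gpu_result = cp.asnumpy(boost_low_light(frame, xp=cp))
        print("GPU output matches CPU:", np.array_equal(cpu_result, gpu_result))
    except ImportError:
        print("No GPU runtime available; CPU path still works.")
```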

GTC_ECS_EyeSight_Gideon Shmuel (6).jpg

A basic version of the EyeSight input technology reportedly comes installed on the Lenovo Yoga convertible tablet. I think this software has potential and could provide the Minority Report-like interaction that many enthusiasts wish for. Hopefully, EyeSight can deliver on its claimed accuracy figures, and OEMs will embrace the technology by integrating it into future devices.

EyeSight has posted additional video demos and information about its touch-free technology on its website.

Do you think this "touch-free" gesture technology has merit, or will this type of input remain limited to awkward integrations in console games?