NVIDIA Finally Gets Serious with Tegra
Tegra has had an interesting run. The original Tegra was utilized only by Microsoft in the Zune HD. Tegra 2 saw better adoption, but did not produce the design wins to propel NVIDIA to a leadership position in cell phones and tablets. Tegra 3 found a spot in Microsoft’s Surface, but that has turned out to be a far more bitter experience than expected. Tegra 4 has so far been integrated into a handful of products and is being featured in NVIDIA’s upcoming Shield; it also hit some production snags that brought it to market later than expected.
I think the primary issue with the first three generations of products is pretty simple: a distinct lack of differentiation from the other ARM-based products on the market. Yes, NVIDIA brought their graphics prowess to bear, but never in a form that adequately distanced itself from the competition. Tegra 2 boasted GeForce-based graphics, but we did not find out until later that it was composed of basically four pixel shaders and four vertex shaders that had more in common with the GeForce 7800/7900 series than with any of the modern unified architectures of the time. Tegra 3 boasted a big graphical boost, but it came in the form of doubling the pixel shader units while leaving the vertex units alone.
While NVIDIA had very strong developer relations and a leg up on the competition in terms of software support, it was never enough to propel Tegra beyond a handful of devices. NVIDIA is trying to rectify that with Tegra 4 and the 72 shader units it contains (still divided between pixel and vertex units). Tegra 4 is not perfect: it is late to market, and its GPU is not OpenGL ES 3.0 compliant. ARM, Imagination Technologies, and Qualcomm are offering new graphics processing units that are not only OpenGL ES 3.0 compliant but also offer OpenCL 1.1 support. Tegra 4 supports neither OpenCL nor even NVIDIA’s own in-house CUDA. Ouch.
Jumping into a new market is not an easy thing, and invariably mistakes will be made. NVIDIA worked hard to build a solid foundation with their products, and certainly they had to learn to walk before they could run. Unfortunately, running effectively entails earning design wins through outstanding features, performance, and power consumption, and NVIDIA was really only average in all of those areas. NVIDIA is hoping to change that, and their first salvo, a product whose features and support are a step above the competition, is what we are talking about today.
Introduction and Design
With the release of Haswell upon us, we’re being treated to an impactful refresh of some already-impressive notebooks. Chief among the benefits are the much-championed battery life improvements—and while better power efficiency is obviously valuable where portability is a primary focus, beefier models can also benefit by way of increased versatility. Sure, gaming notebooks are normally tethered to an AC adapter, but when it’s time to unplug for some more menial tasks, it’s good to know that you won’t be out of juice in a couple of hours.
Of course, an abundance of gaming muscle never hurts, either. As the test platform for one of our recent mobile GPU analyses, MSI’s 15.6” GT60 gaming notebook is, for lack of a better description, one hell of a beast. Following up on Ryan’s extensive GPU testing, we’ll now take a more balanced and comprehensive look at the GT60 itself. Is it worth the daunting $1,999 MSRP? Does the jump to Haswell provide ample and economical benefits? And really, how much of a difference does it make in terms of battery life?
Our GT60 test machine featured the following configuration:
In case it wasn’t already apparent, this device makes no compromises. Sporting a desktop-grade GPU and a quad-core Haswell CPU, it looks poised to be the most powerful notebook we’ve tested to date. Other configurations exist as well, spanning various CPU, GPU, and storage options. However, all available GT60 configurations feature a 1080p anti-glare screen, discrete graphics (GTX 670M and up), Killer Gigabit LAN, and a case built from metal and heavy-duty plastic. They also come preconfigured with Windows 8, so the only way to get Windows 7 with your GT60 is to purchase it through a reseller that performs customizations.
Another Wrench – GeForce GTX 760M Results
Just recently, I evaluated some of the current processor-integrated graphics options using our new Frame Rating performance metric. The results were very interesting, proving that Intel has done some great work with its new HD 5000 graphics option for Ultrabooks. You might have noticed that the MSI GE40 didn’t come with just the integrated HD 4600 graphics but also included a discrete NVIDIA GeForce GTX 760M on-board. While that previous article focused on the integrated graphics of Haswell, Trinity, and Richland, I did find some noteworthy results with the GTX 760M that I wanted to investigate and present.
The MSI GE40 is a new Haswell-based notebook that includes the Core i7-4702MQ quad-core processor and Intel HD 4600 graphics. Alongside it, MSI has included the Kepler-based GeForce GTX 760M discrete GPU.
This GPU offers 768 CUDA cores running at a 657 MHz base clock but can stretch higher with GPU Boost technology. It is configured with 2GB of GDDR5 memory running at 2.0 GHz.
If you didn’t read the previous integrated graphics article, linked above, some of its data will be spoiled here, so you might want to establish a baseline by reading through it first. Also, remember that we are using our Frame Rating performance evaluation system for this testing – a key differentiator from most other mobile GPU testing. In fact, it is that difference that allowed us to spot an interesting issue with the configuration we are showing you today.
If you are not familiar with the Frame Rating methodology, and how we had to change some things for mobile GPU testing, I would really encourage you to read this page of the previous mobility Frame Rating article for the scoop. The data presented below depends on that background knowledge!
Okay, you’ve been warned – on to the results.
Battle of the IGPs
Our long journey with Frame Rating, a new capture-based analysis tool to measure graphics performance of PCs and GPUs, began almost two years ago as a way to properly evaluate the real-world experiences for gamers. What started as a project attempting to learn about multi-GPU complications has really become a new standard in graphics evaluation and I truly believe it will play a crucial role going forward in GPU and game testing.
Today we take these Frame Rating methods and tools, which are elaborately detailed in our Frame Rating Dissected article, and apply them to a completely new market: notebooks. Even though Frame Rating was meant for high-performance discrete desktop GPUs, the theory and science behind the entire process are completely applicable to notebook graphics, and even to the integrated graphics solutions on Haswell processors and Richland APUs. It is also able to measure the performance of discrete/integrated graphics combos from NVIDIA and AMD in a unique way that has already turned up some interesting results.
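For readers who want the gist of why frame times matter more than a raw FPS average, here is a deliberately tiny illustration of the idea behind capture-based analysis. The timestamps are invented sample data, not output from our tools:

```python
# Toy illustration of frame-time analysis: the same average FPS can hide a
# visible hitch that a per-frame view catches immediately.

def frame_metrics(timestamps_ms):
    """From display timestamps (ms), return (average FPS, worst frame time in ms)."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    avg_fps = 1000.0 * len(intervals) / (timestamps_ms[-1] - timestamps_ms[0])
    return avg_fps, max(intervals)

# Seven frames, one of which stalls for 66 ms (a noticeable stutter)
fps, worst = frame_metrics([0, 16, 33, 50, 116, 133, 150, 166])
```

The average works out to roughly 42 FPS, which sounds fine, yet one frame took 66 ms; that single long frame is what a player actually perceives as a hitch, and it is exactly what an average hides.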
Even though neither side wants us to call it this, we are testing integrated graphics today. With the release of Intel’s Haswell processor (the Core i7/i5/i3 4000) the company has upgraded the graphics noticeably on several of their mobile and desktop products. In my first review of the Core i7-4770K, a desktop LGA1150 part, the integrated graphics now known as the HD 4600 were only slightly faster than the graphics of the previous generation Ivy Bridge and Sandy Bridge. Even though we had all the technical details of the HD 5000 and Iris / Iris Pro graphics options, no desktop parts actually utilize them so we had to wait for some more hardware to show up.
When Apple held a press conference and announced new MacBook Air machines that used Intel’s Haswell architecture, I knew I could count on Ken to go and pick one up for himself. Of course, before I let him start using it for his own purposes, I made him sit through a few agonizing days of benchmarking and testing in both Windows and Mac OS X environments. Ken has already posted a review of the MacBook Air 11-in model ‘from a Windows perspective’ and in that we teased that we had done quite a bit more evaluation of the graphics performance to be shown later. Now is later.
So the first combatant in our integrated graphics showdown with Frame Rating is the 11-in MacBook Air, a small but powerful Ultrabook that sports more than 11 hours of battery life (in OS X, at least) and includes the new HD 5000 integrated graphics. Alongside that battery life is the GT3 variation of the new Intel processor graphics, which doubles the number of execution units as compared to the GT2. The GT2 is the architecture behind the HD 4600 graphics found in nearly all of the desktop processors, and many of the notebook versions, so I am very curious to see how this comparison shakes out.
Introduction and Design
As headlines mount championing the supposed shift toward tablets for the average consumer, PC manufacturers continue to devise clever hybrid solutions to try and lure those on the fence back toward more traditional machines. Following last year’s IdeaPad Yoga 13 and ThinkPad Twist, Lenovo shortly thereafter launched the smallest of the bunch, the IdeaPad Yoga 11: an 11.6” convertible tablet PC with a 5-point touch 720p IPS display.
Unlike its newer, more powerful counterpart, the Yoga 11S, it runs Windows RT and features an NVIDIA Tegra 3 Quad-Core system on a chip (SoC). There are pros and cons to this configuration in contrast to the 11S. For starters, the lower-voltage, fanless design of the 11 guarantees superior battery life (something which we’ll cover in detail in just a bit). It’s also consequently (slightly) smaller and lighter than the 11S, which gains a hair on height and weighs around a quarter pound more. But, as you’re probably aware, Windows RT also doesn’t qualify as a fully-functional version of Windows—and, in fact, the Yoga 11’s versatility is constrained by the relatively meager selection of apps available on the Windows Store. The other obvious difference is architecture and chipset, where the Yoga 11’s phone- and tablet-grade ARM-based NVIDIA Tegra 3 is replaced on the 11S by Intel Core Ivy Bridge ULV processors.
But let’s forget about that for a moment. What it all boils down to is that these two machines, while similar in terms of design, are different enough (both in terms of specs and price) to warrant a choice between them based on your intended use. The IdeaPad Yoga 11 configuration we reviewed can currently be found for around $570 at retailers such as Amazon and Newegg. In terms of its innards:
If it looks an awful lot like the specs of your latest smartphone, that’s probably because it is. The Yoga 11 banks on the fact that such ARM-based SoCs have become powerful enough to run a modern personal computer comfortably—and by combining the strengths of an efficient, low-power chipset with the body of a notebook, it reaps benefits from both categories. Of course, there are trade-offs involved, starting with the 2 GB memory ceiling of the chipset and extending to the aforementioned limitations of Windows RT. So the ultimate question is, once those trade-offs are considered, is the Yoga 11 still worth the investment?
Apple has seen a healthy boost in computer sales and adoption since the transition to Intel-based platforms in 2006, but the MacBook line has far and away been the biggest beneficiary. Apple has come a long way, from both an engineering and a consumer satisfaction standpoint, since the long-retired iBook and PowerBook lines. This is especially evident when you look at their current product lineup and products like the 11” MacBook Air.
Even though it may not be the most popular opinion around here, I have been a Mac user since 2005 with the original Mac Mini, and I have used a MacBook as my primary computer since 2008. I switched to the 11” MacBook Air when it came out in 2011, and experienced the growing pains of using a low power platform as my main computer.
While I still have a desktop for the occasional video I edit at home, or game I manage to find time to play, the majority of my day involves being portable, both in class and at the office, and I quickly grew to appreciate the 11” form factor and the portability it offers. However, I was quite dissatisfied with the performance and battery life that my aging ultraportable offered. Desperate for improvements, I decided to see what two generations’ worth of Intel engineering afforded, and picked up the new Haswell-based 11” MacBook Air.
Since the redesign of the MacBook Air in 2010, the overall look and feel has stayed virtually the same; aside from the Mini DisplayPort connector on the side becoming a Thunderbolt connector in 2011, little has changed.
In this way, the 2013 MacBook Air should provide no surprises. The one visual difference I can spot is the upgrade of the microphone on the left side to a stereo array, meaning there are two grilles this time instead of one. However, the faults I found in the past with the MacBook Air have nothing to do with the aesthetics or build quality of the device, so I am not too disappointed by the design stagnation.
From an industrial design perspective, everything about this notebook feels familiar to me, which is a positive. I still believe that Apple’s trackpad implementation is the best I've used, and the backlit chiclet keyboard they have been using for years is a good compromise between thickness and key travel.
Kepler-based Mobile GPUs
Late last month, just before the tech world blew up from the mess that is Computex, NVIDIA announced a new line of mobility discrete graphics parts under the GTX 700M series label. At the time we simply posted some news and specifications about the new products but left performance evaluation for a later time. Today we have that for the highest end offering, the GeForce GTX 780M.
As with most mobility GPU releases it seems, the GTX 700M series is not really a new GPU and only offers cursory feature improvements. Based completely on the Kepler line of parts, the GTX 700M will range from 1536 CUDA cores on the GTX 780M to 768 cores on the GTX 760M.
The flagship GTX 780M is essentially a desktop GTX 680 in a mobile form factor with lower clock speeds. With 1536 CUDA cores running at 823 MHz (boosting higher depending on the notebook configuration) and a 256-bit memory controller running at 5 GHz, the GTX 780M will likely be the fastest mobile GPU you can buy. (And we’ll be testing that in the coming pages.)
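As a quick sanity check on those memory numbers, peak bandwidth follows directly from bus width and effective data rate. A minimal sketch, using only the figures quoted above:

```python
# Theoretical peak memory bandwidth: bytes moved per transfer times
# effective transfers per second. Figures are those quoted for the GTX 780M.

def mem_bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    """Peak bandwidth in GB/s for a given bus width and effective data rate."""
    return (bus_width_bits / 8) * effective_clock_ghz

# 256-bit bus at an effective 5 GHz data rate
gtx_780m = mem_bandwidth_gbs(256, 5.0)  # 160.0 GB/s
```

That 160 GB/s figure matches the desktop GTX 680's pipe, which is why the 780M's memory subsystem gives up so little to its desktop sibling.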
The GTX 760M, 765M, and 770M offer ranges of performance that scale down to 768 cores at 657 MHz. NVIDIA claims we’ll see the GTX 760M in systems as small as 14-in, weighing around 2 kg, from vendors like MSI and Acer. For Ultrabooks and thinner machines you’ll have to step down to smaller, less power-hungry GPUs like the GT 750M and 740M, but even then we expect NVIDIA to offer much faster gaming performance than the Haswell-based processor graphics.
A new era for computing? Or just a bit of catching up?
Early Tuesday, at 2am for viewers in eastern North America, Intel delivered their Computex 2013 keynote to officially kick off Haswell. Unlike ASUS the night prior, Intel did not announce a barrage of new products; the purpose was to promote future technologies and the new products of their OEM and ODM partners. In all, a pretty wide variety of topics was discussed.
Intel carried on with the computational era analogy: the '80s were dominated by mainframes; the '90s were predominantly client-server; and the 2000s brought the internet to the forefront. While true, they did not explicitly mention how each era never actually died but rather just bled through: we still use mainframes, especially with cloud infrastructure; we still use client-server; and just about no one would argue that the internet has been displaced, despite its struggle against semi-native apps.
Intel believes that we are currently in the two-in-one era, by which they probably mean "multiple-in-one" given devices such as the ASUS Transformer Book Trio. They created a tagline, almost a mantra, illustrating their vision:
"It's a laptop when you need it; it's a tablet when you want it."
But before elaborating, they wanted to discuss their position in the mobile market. They believe they are becoming a major player there, with key design wins and performance that beats some incumbent systems on a chip (SoCs). The upcoming Silvermont architecture aims to fill in the gaps below Haswell, driving smartphones and tablets and stretching upward to include entry-level notebooks and all-in-one PCs. The architecture promises roughly three-fold more performance than the previous generation, or equivalent performance at a fifth of the power.
Ryan discussed Silvermont last month; be sure to give his thoughts a browse for more depth.
Cortex-A12 fills a gap
Starting off Computex with an interesting announcement, ARM is talking about a new Cortex-A12 core that will attempt to address a performance gap in the SoC ecosystem between the A9 and A15. In the battle to compete with Krait and Intel's Silvermont architecture due in late 2013, ARM definitely needed to address the separation in performance and efficiency of the A9 and A15.
Source: ARM. Top to bottom: Cortex-A15, A12, A9 die size estimate
Targeted at mid-range devices that tend to be more cost (and thus die-size) limited, the Cortex-A12 will ship in late 2014 for product sampling and you should begin seeing hardware for sale in early 2015.
Architecturally, the changes for the upcoming A12 core revolve around a move to a fully out-of-order, dual-issue design, including the integrated floating point units. The execution units are faster and the memory design has been improved, but ARM wasn't ready to talk specifics with me yet; expect those later in the year.
ARM claims this results in a 40% performance gain for the Cortex-A12 over the Cortex-A9, as tested in SPECint. Because the product won't even start sampling until late 2014, we have no way to verify this data yet or to evaluate the efficiency claims. That lag between announcement and release will also give competitors like Intel, AMD, and even Qualcomm time to answer back with potentially earlier availability.
A Reference Platform - But not a great one
Believe it or not, AMD claims that the Brazos platform, along with the "Brazos 2.0" update the following year, was the company's most successful mobile platform in terms of sales and design wins. When it first hit the scene in late 2010, it went head to head against the likes of Intel's Atom processor and the Atom + NVIDIA ION combination, and won. It was sold in mini-ITX motherboard form factors as well as small clamshell notebooks (gasp, dare we say...NETBOOKS?) and though it might not have gotten the universal attention it deserved, it was a great part.
With Kabini (and Temash as well), AMD is making another attempt to pull in some market share in the low power, low cost mobile markets. I have already gone over the details of the mobile platforms that AMD is calling Elite Mobility (Temash) and Mainstream (Kabini) in a previous article that launched today.
This article will quickly focus on the real-world performance of the Kabini platform as demonstrated by a reference laptop I received while visiting AMD in Toronto a few weeks ago. While this design isn't going to be available in retail (and I am somewhat thankful based on the build quality) the key is to look at the performance and power efficiency of the platform itself, not the specific implementation.
Kabini Architecture Overview
The building blocks of Kabini are four Jaguar x86 cores and 128 Radeon cores collected in a pair of Compute Units - similar in many ways to the CUs found in the Radeon HD 7000 series discrete GPUs. Josh has written a very good article that focuses on the completely new Jaguar architecture and compares it to other processors, including Bobcat, AMD's previous core used in Brazos.
Introduction and Design
While Lenovo hasn’t historically been known for its gaming PCs, it’s poised to make quite a splash with the latest entry in its IdeaPad line. Owing little to the company’s business-oriented roots, the Y500 aims to be all power—more so than any other laptop from the manufacturer to date—tactfully squeezed into a price tag that would normally be unattainable given the promised performance. But can it succeed?
Our Y500 review unit can be had for $1,249 at Newegg and other retailers, or for as low as $1,180 at Best Buy. Lenovo also sells customizable models, though the price is generally higher. Here’s the full list of specifications:
The configurations offered by Lenovo range fairly widely in price, starting as low as $849 for a model sporting 8 GB of RAM and a single GT 650M with 2 GB of GDDR5. The best value, however, is certainly the configuration we received.
What’s so special about it? Well, apart from the obvious (powerful quad-core CPU and 16 GB RAM), this laptop actually includes two NVIDIA GeForce GT 650M GPUs (both with 2 GB GDDR5) configured in SLI. Seeing as it’s just a 15.6-inch model, how does it manage to do that? By way of a clever compromise: the exchange of the usual optical drive for an Ultrabay, something normally only seen in Lenovo’s ThinkPad line of laptops. So I guess the Y500 does owe a little bit of its success to its business-grade brethren after all.
In our review unit (and in the particular configuration noted above), this Ultrabay comes prepopulated with the second GT 650M, equipped with its own heatsink/fan and all. The addition of this GPU effectively launches the Y500 into high-end gaming laptop territory—at least on the spec sheet. Other options for the Ultrabay also exist (sold separately), including a DVD burner and a second hard drive. The bay is easily removable via a switch on the back of the PC (see below).
A much needed architecture shift
It has been almost exactly five years since the release of the first Atom branded processors from Intel, starting with the Atom 230 and 330 based on the Diamondville design. Built for netbooks and nettops at the time, the Atom chips were a reaction to a unique market that the company had not planned for. While the early Atoms were great sellers, they were universally criticized by the media for slow performance and sub-par user experiences.
Atom has seen numerous refreshes since 2008, but they were all modifications of the simplistic, in-order architecture that was launched initially. With today's official release of the Silvermont architecture, the Atom processors see their first complete redesign from the ground up. With the focus on tablets and phones rather than netbooks, can Intel finally find a foothold in the growing markets dominated by ARM partners?
I should note that even though we are seeing the architectural reveal today, Intel doesn't plan on having shipping parts until late 2013 for embedded, server, and tablet applications, and not until 2014 for smartphones. Why the early reveal on the design, then? I think that pressure from ARM-based designs (Krait, Exynos), as well as the upcoming release of AMD's own Kabini, is forcing Intel's hand a bit. Certainly they don't want to be perceived as having fallen behind, and getting news about the potential benefits of their own x86 option out in public will help.
Silvermont will be the first Atom processor built on the 22nm process, leaving the 32nm designs of Saltwell behind. This also marks the beginning of a change in the Atom design process: adopting the tick/tock model we have seen on Intel's consumer desktop and notebook parts. Starting at the next node drop to 14nm, we'll see an annual cadence that first focuses on the node change, then on an architecture change at the same node.
By keeping Atom on the same process technology as Core (Ivy Bridge, Haswell, etc), Intel can put more of a focus on the power capabilities of their manufacturing.
Introduction and Technical Specifications
Courtesy of Oyen Digital
Oyen Digital, a popular manufacturer of portable storage enclosures and devices, provided us with its MiniPro™ eSATA / USB 3.0 Portable Hard Drive enclosure for testing USB 3.0 enhanced mode on the ASUS P8Z77-I Deluxe motherboard. The enclosure offers USB 2.0, USB 3.0, and eSATA connectivity in conjunction with a 2.5" hard drive. We put it on the test bench with the ASUS P8Z77-I Deluxe board to explore the performance limits of the device. The MiniPro™ enclosure can be found at your favorite e-tailer for $39.95.
The MiniPro™ SATA / USB 3.0 Portable Hard Drive enclosure is a simple aluminum unit that accepts any 2.5" form factor hard drive at up to SATA III speeds and offers USB 2.0, USB 3.0, and eSATA connections. Thanks to its ASMedia 1053e chipset, it handles both normal USB 3.0 transfers and UASP (USB Attached SCSI Protocol) mode transfers. UASP is a bulk transfer method for USB 3.0 connections that increases transfer speeds through parallel, simultaneous packet transfers. Per our sources at ASUS, UASP can be explained as follows:
The adoption of the SCSI Protocol in USB 3.0 provides its users with the advantage of having better data throughput than traditional BOT (Bulk-Only Transfer) protocol, all thanks to its streaming architecture as well as the improved queuing (NCQ support) and task management, which eliminated much of the round trip time between USB commands, so more commands can be sent simultaneously. Moreover, thanks to the multi-tasking aware architecture, the performance is further enhanced when multiple transfers occur.
The downside of UASP is that the receiving device (flash drive, external hard drive, etc.) must also be UASP-enabled for the protocol to work, which requires checking your peripherals before purchase. However, since UASP is an industry standard, device support for ASUS' UASP implementation is not restricted to a particular controller manufacturer or device type, so the overall number of compatible peripherals should undoubtedly grow.
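To see why command queuing matters, a back-of-the-envelope model helps: BOT pays a command round trip for every block, while UASP keeps commands in flight so that cost is paid roughly once. All timings below are made-up illustrative numbers, not measurements of this enclosure:

```python
# Toy model of BOT vs UASP transfer time. BOT serializes each command and
# its data; UASP pipelines commands so the round-trip cost largely overlaps
# with data movement.

CMD_RTT_MS = 0.5   # assumed per-command round-trip overhead
XFER_MS = 1.0      # assumed time to move one block of data
N_BLOCKS = 100

bot_total_ms = N_BLOCKS * (CMD_RTT_MS + XFER_MS)   # strictly serial
uasp_total_ms = CMD_RTT_MS + N_BLOCKS * XFER_MS    # commands pipelined
```

With these assumed numbers the serialized BOT path takes about 50% longer for the same payload, which is the intuition behind the throughput gains ASUS describes; real-world gains depend on drive speed and command overhead.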
Technical Specifications (taken from the Oyen Digital website)
eSATA 6G (up to 6.0 Gbps)
USB 3.0 (up to 5.0 Gbps)
SATA III 2.5" HDD/SSD (up to 15mm drive height)
Windows XP/Vista/7/8 & above; Mac OS 10.2 & above; Linux 2.4.22 & above
NVIDIA releases the GeForce GT 700M family
NVIDIA revolutionized gaming on the desktop with the release of its 600-series Kepler-based graphics cards in March 2012. With the release of the GeForce GT 700M series, Kepler enters the mobile arena to power laptops, ultrabooks, and all-in-one systems.
Today, NVIDIA introduces four new members to its mobile line: the GeForce GT 750M, the GeForce GT 740M, the GeForce GT 735M, and the GeForce GT 720M. These four new mobile graphics processors join the previously-released members of the GeForce GT 700M series: the GeForce GT 730M and the GeForce GT 710M. With the exception of the Fermi-based GeForce GT 720M, all of the newly-released mobile cores are based on NVIDIA's 28nm Kepler architecture.
Notebooks based on the GeForce GT 700M series will offer in-built support for the following new technologies:
Automatic Battery Savings through NVIDIA Optimus Technology
Automatic Game Configuration through the GeForce Experience
Automatic Performance Optimization through NVIDIA GPU Boost 2.0
ARM is a company that no longer needs much of an introduction, though this was not always the case. ARM has certainly made a name for itself among PC, tablet, and handheld consumers. Its primary source of income is licensing CPU designs as well as its ISA. While names like the Cortex A9 and Cortex A15 are fairly well known, not as many people know about the graphics IP that ARM also licenses. Mali is the product name of that graphics IP, and it encompasses an entire range of features and performance that can be licensed by third parties.
I was able to get a block of time with Nizar Romdhane, Head of the Mali Ecosystem at ARM, and asked a few questions about Mali, ARM’s plans to address the increasingly important mobile graphics market, and how they will fend off competition from Imagination Technologies, Intel, AMD, NVIDIA, and Qualcomm.
We would like to thank Nizar for his time, as well as Phil Hughes for facilitating this interview. Stay tuned, as we expect to continue this series of interviews with other ARM employees in the near future.
AMD Exposes Richland
When we first heard about “Richland” last year, there was a little bit of excitement from people, though not many were sure what to expect other than a faster “Trinity” based CPU with a couple of extra goodies. Today we finally get to see what Richland is. While interesting, it is not necessarily exciting; though an improvement, it will not take AMD over the top in the mobile market. What it actually brings to the table is better competition and a software suite that could help convince buyers to choose AMD instead of a competing Intel part.
From a design standpoint, it is nearly identical to the previous Trinity. That being said, a modern processor is not exactly simple. A lot of software optimizations can be applied to these products to increase performance and efficiency. It seems that AMD has done exactly that. We had heard rumors that the graphics portion was in fact changed, but it looks like it has stayed the same. Process improvements have been made, but that is about the extent of actual hardware changes to the design.
The new Richland APUs are branded as the A-5000 series. The top end is the A10-5750M with HD 8650G integrated graphics. This is still the VLIW4-based graphics unit seen in the previous Trinity products, but enough changes have been made in software that it can enable Dual Graphics with the new Solar System based GPUs (GCN). The speeds of these products have received a nice boost. As compared to the previous top-end A10-4600M, the 5750M takes the base clock from 2.3 GHz to 2.5 GHz, while boost goes from 3.2 GHz up to 3.5 GHz. The graphics portion takes the base clock from 496 MHz up to 533 MHz, while turbo mode improves from 685 MHz to 720 MHz. These are not staggering figures, but it all still fits within the 35 watt TDP of the previous product.
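To put those clock bumps in perspective, the percentage gains work out as follows; this is pure arithmetic on the figures quoted above:

```python
# Percentage clock-speed gains of Richland's top part over the previous
# Trinity flagship, computed from the figures quoted in the text.

def pct_gain(old, new):
    """Percentage increase from old to new, rounded to one decimal."""
    return round(100.0 * (new - old) / old, 1)

cpu_base = pct_gain(2.3, 2.5)    # CPU base clock, GHz
cpu_boost = pct_gain(3.2, 3.5)   # CPU boost clock, GHz
gpu_base = pct_gain(496, 533)    # GPU base clock, MHz
gpu_boost = pct_gain(685, 720)   # GPU turbo clock, MHz
```

Roughly 9% on the CPU side and 5-7% on the GPU side: modest gains, but essentially free given the unchanged 35 watt envelope.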
One other important improvement is the ability to utilize DDR3-1866 memory. Throughout the past year we have seen memory densities increase fairly dramatically without impacting power consumption, and the same goes for speed. While we would expect lower-power DIMMs in the thin and light categories, expect to see faster DDR3-1866 in the larger notebooks that will soon be heading our way.
The Ice Storm Test
Love it or hate it, 3DMark has a unique place in the world of PC gaming and enthusiasts. Since 3DMark99 was released back in 1998 with a target on DirectX 6, Futuremark has been developing benchmarks on a regular basis, in time with major API changes and major hardware changes. The most recent release, 3DMark 11, has been out since late 2010 and has been a regular part of our many graphics card reviews on PC Perspective.
Today Futuremark is not only releasing a new version of the benchmark but is also taking a fundamentally different approach to performance testing and platforms. The new 3DMark, called simply "3DMark", will target not only high-end gaming PCs but also integrated graphics platforms and even tablets and smartphones.
We interviewed the President of Futuremark, Oliver Baltuch, over the weekend about this new direction for 3DMark, how mobile devices will affect benchmarks going forward, the new results patterns, stuttering, and more. Check out the video below!
Make no bones about it, this is a synthetic benchmark; if you have taken issue with that in the past because it is not a "real world" gaming test, you will continue to have those complaints. Personally, I find the information 3DMark provides very informative, though it definitely should not be depended on as the ONLY graphics performance metric.
Introduction and Technical Specifications
Courtesy of Lenovo
As one of the newest members of Lenovo's Thinkpad line, the Lenovo Thinkpad Twist attempts to bridge the gap between laptops and tablets in a convertible Ultrabook format. We decided to put the Twist through our normal suite of benchmark and functional tests, along with some tests specifically geared towards laptops, to gauge how well it performs. At a starting MSRP of $829.00 for the base model, the Lenovo Thinkpad Twist offers an intriguing price-to-feature proposition with its ability to convert from a fully functional laptop into a tablet almost seamlessly.
Courtesy of Lenovo
The Thinkpad Twist offers an innovative take for the user who wants the best of both worlds - the portability and full functionality of a laptop with the ease of use of a tablet. Running Windows 8, the Twist comes with a 5-point touchscreen usable in all modes of operation. Lenovo built in support for the following: USB 2.0 and 3.0 devices; three networking types, including a Realtek-based GigE NIC, a Broadcom-based 802.11n Wi-Fi adapter, and a Broadcom-based Bluetooth adapter; a 4-in-1 media card reader; mini-HDMI and mini-DisplayPort video outputs; a dual-purpose audio port; and a 720p HD-capable integrated webcam.
Courtesy of Lenovo
In designing the Twist, Lenovo decided to use a center hinge on which the screen pivots to support its four modes of operation: laptop mode; presentation mode, where the screen can be rotated to face the audience; tent mode, which allows the system to stand upright for movie or other media viewing; and tablet mode, where the screen folds down to cover the keyboard entirely.
Windows RT: Runtime? Or Get Up and Run Time?
Update #1, 10/26/2012: Apparently it does not take long to see the first tremors of certification woes. A Windows developer named Jeffrey Harmon allegedly wrestled with Microsoft certification support six times over two months because his app did not meet minimum standards. He was not given clear and specific reasons why -- apparently little more than a copy/paste of the regulations he failed to meet. Kind of what you would expect from a closed platform... right? Now imagine if nonsensical terms become mandated, or other problems crop up.
Also, Microsoft has just said it will allow PEGI 18 games that would have received an ESRB M rating. Of course, its regulations can and will change further over time... the point is the difference between a store refusing to carry a title versus banishing it from the whole platform, even for limited sharing. That uproars are necessary, especially so early on and so frequently, should be a red flag for censorship to come. It could be over artistically-intentioned nudity or sexual themes. It could even be about something other than sex, language, and violence altogether.
Last month, I suggested that the transition to Windows RT bears the same hurdles as a transition to Linux. Many of the obstacles blocking that path, like Adobe and PC gaming, are already considering Linux; the rest have good reason to follow.
This month we receive Windows RT and Microsoft’s attempt to shackle us to it: Windows 8.
To be clear: Microsoft has large incentives to banish the legacy of Windows. The way Windows 8 is structured reduces legacy Windows to a benign tumorous growth atop Windows RT. The applications we love and the openness we adore are confined to an app.
I will explain why you should hate this -- right after I explain what is happening and support it with evidence.
Microsoft is currently in a rare state of sharp, aggressive focus on a vision. Do not mistake this for greed: it is not. Microsoft has had to endure countless jokes about security and stability, because it originally designed Windows with a strong slant towards convenience over security.
That ideology faded early into the life of Windows XP. How Windows operates is fundamentally different. Windows machines are quite secure, architecturally. Con-artists are getting desperate. Recent attacks are almost exclusively based on fear and deception of the user. Common examples are fake anti-virus software or fraudulent call center phone calls. We all win when attackers get innovative: survival of the fittest implies death of the weakest.