Introduction and VIA’s Future

VIA held their annual technology forum at the Hyatt in Taipei during Computex again this year, and some of the brightest minds in the industry were on hand to give us their thoughts on what they see in the future. Oh, and a 2.0 GHz dual core X2!

Introduction

Every year during the annual Computex show in Taipei, Taiwan, VIA assembles a group of some of the most powerful people in the PC industry and invites the media and their partners to a technology forum that is unlike anything else you will find.  Here you will usually find some of the brightest minds from VIA, AMD, TSMC, Phoenix and others, all gathered to share their thoughts on the future of their specific portions of our industry.


This year was no exception, though we did miss out on some of my favorite speakers, such as Wenchi Chen and Dr. Morris Chang, due to personal reasons.  Their replacements were just as informative though, and today I am going to share with you some of what they had to say.

VIA’s Future

Speaking in the opening keynote for VIA was Tzumu Lin, Senior Vice President.  We all know that VIA is mainly a chipset designer, but in recent years the share of their income derived from high end solutions (such as the K8T890 and PT890 chipsets) has fallen as their EPIA and small form factor initiatives have taken off.  The truth of the matter is that in the last 18 months, the only dramatic chipset development we have seen is the advent of dual GPU technology, and the amount of room that chipset vendors have to differentiate themselves has shrunk.  In the AMD segment, where VIA has traditionally had the most success, their ability to gain a performance advantage was cut out when AMD decided to put the memory controller directly on the processor, something that had previously been the duty of the chipset designers.


This problem of course wasn’t seen by just VIA; SiS, ULi and NVIDIA were put in the same circumstances on the AMD front, but NVIDIA seems to have marketed themselves out of the rut: with the advent of SLI technology they were able to grab enough of the marketplace's attention to stand out among the other very similar chipsets.

Since SLI has come to dominate the hype in the enthusiast market, VIA’s chipsets have unfortunately been put on the back burner both by end users and by mainboard vendors. Users looking for a low cost motherboard for the AMD64 market would be more than happy to see K8T890 motherboards for sale, but with even Asus backing away from VIA chipsets and focusing solely on the NVIDIA brand, VIA is in a tough spot for the time being.  We have word that a new marketing effort will soon take place that will attempt to put the K8T890 in the spotlight for its value and performance features and hopefully put some pressure on the board vendors to start making these boards.

On the subject of the VIA processor and EPIA markets, Tzumu said that he feels the most important metric for computers going forward will be performance per watt, not just total performance.  We can already see this taking place in the current enthusiast market with the move towards dual core processors.  I doubt that this metric will be the MOST important one for the high end market, but there is definitely a limit to the power usage and heat that even gamers will accept.

Tzumu also brought up a very interesting program that VIA is participating in called ‘One PC, One Village.’  The idea is to populate the nation of India with PCs in areas where there currently are none.  This can even include towns that do not yet have electrical power, in which case VIA was showcasing that their EPIA models can run off of a car battery for extended periods of time.

AMD’s Dirk Meyer

Another keynote at the VTF came from Dirk Meyer, VP of the Computation Products Group at AMD.  His keynote didn’t have the same level of technical depth as last year’s, but then he wasn’t trying to get the world to accept AMD64 technology this year; that has already happened.  What he did give was a quick summary of why AMD made the architectural modifications it did, something that most PC Perspective readers are already aware of.


To get the importance of the move to 64-bit technology across to the audience, Dirk presented some very bold and hard-to-argue-with information.  He said that according to one research firm, the world will have created 5 exabytes of digital information by the end of 2005.  For those of you not in the know, an exabyte is 10^18 bytes (roughly 2^60): that’s a lot!  This, he said, is one of the driving reasons why 64-bit technology is so important: the world is going to need more and more memory and computing power to make sense of all the information being created.
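
To put that figure in context, here is a quick back-of-the-envelope sketch in Python (my own illustration, not anything shown at the forum) comparing 5 exabytes against what 32-bit and 64-bit processors can directly address:

```python
# Back-of-the-envelope numbers (illustrative, not from the keynote) comparing
# the 5 exabyte figure against 32-bit and 64-bit address spaces.
EXABYTE = 10**18                 # one SI exabyte in bytes (2**60 is ~1.15 * 10**18)

addressable_32bit = 2**32        # 4 GiB directly addressable by a 32-bit CPU
addressable_64bit = 2**64        # ~16 EiB directly addressable by a 64-bit CPU

world_data_2005 = 5 * EXABYTE    # the figure cited in the keynote

print(f"32-bit address space: {addressable_32bit / 2**30:.0f} GiB")
print(f"64-bit address space: {addressable_64bit / EXABYTE:.1f} EB")
print(f"5 EB would fill {world_data_2005 / addressable_32bit:,.0f} full 32-bit address spaces")
```

A 32-bit processor tops out at 4 GiB of directly addressable memory, while a 64-bit processor can address billions of times that, which is the gap Meyer was pointing at.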

He even went as far as to describe the combination of pervasive 64-bit computing, multi-core processor technology and the system architecture innovations seen in the removal of the FSB on AMD64 processors as the ‘single most important and revolutionary package of hardware platform innovations in our lifetime.’  That is a bold statement, but it comes from a man who is supposed to be able to see further into the architectural future than most anyone in the industry.

Dirk also dispelled the myth that Moore’s law of doubling transistor counts on chips every 18 months means that microprocessor performance should double every 18 months.  That simply isn’t the case, as we have obviously seen over the past two or more years.  But transistor counts are still following the law closely, and in fact the move to dual and multi core processors is going to see the trend continue.

For processor performance increases, Meyer outlined how AMD sees things working out over the next several years.  First, single core performance increases will slow down even more than they already have, but they will not stop.  New microcode and slightly higher clock rates will continue to arrive as the industry finds new workarounds for leakage and heat issues.  The move to high-k gates is the next step we will see development on for this exact issue.

Doubling the number of cores on a processor for the next two or three generations is very possible, he also said.  That means within the life of the K9 and K10 we will likely see processors with 8 or more physical cores, though their exact configurations we can only guess at.  Could the Cell architecture, which already has about that many cores, be a prelude to what AMD and Intel will find themselves doing soon?

Meyer also said that special code acceleration is a viable option for increasing processor performance, but that choosing which types of code to accelerate will be the big issue.  He gave examples of special acceleration for TCP work as well as vector math, but noted that these technologies are very expensive from a transistor count point of view, and that picking the ones users will actually need is the hard part.  Throwing a lot of special features on the chip that no one uses will only raise processor prices and benefit the user very little.

Finally, he said that the industry has now learned that backwards compatibility is going to be a requirement going forward in the PC industry.  Forcing users to completely move to new software and supporting hardware isn’t something they are willing to do, and Meyer pointed towards the ailing Itanium as a good example of this happening.

He also mentioned that the gains from adding additional cores to processors are limited in how much they can improve a user’s performance.  A law that many of you may not be familiar with, Amdahl’s Law, says that the gains from additional processors and cores are limited by the amount of serialized work in a task.  Serialized work simply means that step 1 must be completed before step 2 can start, step 2 must finish before step 3 can start, and so on.  In that case, adding a new core is not going to improve performance.  Many software houses are now starting to find ways to break up their serialized tasks into as many parallel tasks as possible, but at the end of the day, there is a point where there MUST be some serialized work being done.  Once that point is reached, adding more cores to a processor becomes useless.  Meyer said that in today’s market, the vast majority of enterprise applications see the most benefit from having 2-8 cores on a processor, and only a few see any gains above that.  The law of diminishing returns comes into effect in a lot of even theoretical applications before this as well.
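
For those curious, the math behind that limit is simple enough to sketch.  Here is a minimal Python illustration of Amdahl’s Law (my own example with an assumed 90% parallel fraction, not figures from Meyer’s keynote) showing how quickly the returns diminish:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that can run in parallel and n is the number of cores.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Assume 90% of the work is parallelizable (an illustrative number):
for cores in (1, 2, 4, 8, 16, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.90, cores):.2f}x speedup")

# Two cores give about 1.8x and eight cores about 4.7x, but 64 cores only
# reach about 8.8x -- the serial 10% caps the speedup at 10x no matter how
# many cores are added.
```

The jump from one to two cores pays off handsomely, but each doubling after that buys less and less, which matches Meyer’s observation that most enterprise applications top out somewhere between 2 and 8 cores.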
