Subject: Motherboards, Shows and Expos | June 4, 2012 - 06:46 PM | Jeremy Hellstrom
Tagged: ultra durable 5, IR3550 PowIRstage, International Rectifier, gigabyte, computex 2012, computex
City of Industry, California, June 4th, 2012 – GIGABYTE TECHNOLOGY Co. Ltd., a leading manufacturer of motherboards and graphics cards, today introduced its latest Ultra Durable 5 technology, featuring a range of high-current-capable components, including the industry’s highest rated 60A IR3550 PowIRstage ICs, that deliver top-quality power to the CPU for record-breaking performance, cool and efficient operation, and extended motherboard lifespan.
GIGABYTE is once again raising the bar for motherboard quality and durability with its Ultra Durable 5 technology. The CPU power zone uses high-current-capable components, including International Rectifier’s highest rated and most awarded IR3550 PowIRstage ICs, a 2X Copper PCB, and high-current Ferrite Core Chokes rated up to 60A, which together can run up to 60°C* cooler than traditional motherboards. Featured on a range of new motherboards based on the Intel X79 and Z77 Express Chipsets, GIGABYTE Ultra Durable 5 technology is the next evolution in quality motherboard design.
“As the exclusive motherboard manufacturer to utilize the amazingly efficient IR3550 PowIRstage ICs from International Rectifier, GIGABYTE has spent a considerable amount of engineering resources to ensure our Ultra Durable 5 motherboards are our best Ultra Durable motherboards yet,” commented Henry Kao, Vice President of GIGABYTE Motherboard Business Unit. “GIGABYTE Ultra Durable 5 motherboards are especially optimized for water cooled systems and overclocked Intel Core i7-3770K Ivy Bridge “K” SKU CPUs due to their exceptionally low operating temperatures, and make the perfect match for anyone looking to push their system to its limits.”
“We are delighted that IR’s award winning IR3550 PowIRstage provides the high current, thermal capability and outstanding performance to power GIGABYTE’s new Ultra Durable 5 motherboard series,” said Deepak Savadatti, Vice President and General Manager, IR’s Enterprise Power Business Unit.
Subject: Motherboards, Shows and Expos | June 4, 2012 - 04:14 PM | Ryan Shrout
Tagged: computex, gigabyte, Z77, thunderbolt, Z77X-UP5 TH
Anandtech has the scoop on a new motherboard being shown at Computex this week from Gigabyte that features not just Thunderbolt support, not just a single Thunderbolt port, but DUAL Thunderbolt connections. The Z77X-UP5 TH is the first PC platform to offer support for four channels of the new connectivity technology.
Image source: Anandtech.com
Both ports can run at the full 10 Gbps speed thanks to the inclusion of a PLX PCI Express bridge chip, so bandwidth restrictions shouldn't be a concern. We don't know the pricing or availability of this new board quite yet, but Gigabyte is finally jumping into the world of Thunderbolt after the MSI and ASUS announcements last month.
Hit up the Anandtech link above for more photos of this board!
Subject: Shows and Expos | June 4, 2012 - 04:11 PM | Jeremy Hellstrom
Tagged: asus, RT-AC66U, wifi, wireless router
If you are wondering why you should care about the new ASUS RT-AC66U dual-band wireless router, perhaps the thought of a better-than-gigabit wireless connection might interest you. It isn't just about the speed, either: while the router can easily be set up to provide basic access to one machine, it also supports up to 8 SSIDs, letting you set up multiple networks with separate privileges, which makes this router great for small to medium sized businesses as well as home users. It has two USB ports and is perfectly capable of using a USB 3G dongle to share connections over the cell network, or you could plug in a drive with data you want to share, as the router can also act as an FTP server. Check out the full press release below for even more information.
Fremont, CA (June 4, 2012) - The new ASUS RT-AC66U router integrates dual-band Gigabit wireless with fifth generation 802.11ac Wi-Fi technology, also known as 5G WiFi, which enables speeds up to three times faster than existing 802.11n devices. As one of the first routers to achieve this, it tops Gigabit wireless requirements with a combined 2.4GHz/5GHz bandwidth of 1.75Gbps. Coupled with exclusive ASUS AiRadar signal amplification and shaping technology, easy to use ASUSWRT setup software, multiple SSIDs, and IPv6 support, the RT-AC66U is the perfect router for HD media streaming, large concurrent file transfers, and gaming. Impressive USB-based capabilities turn the RT-AC66U into a complete 3G, FTP, DLNA, and printer server device for genuine multi-role functionality in the home or at a small business.
Going beyond Gigabit Wi-Fi
The RT-AC66U is one of the world’s first dual-band wireless routers to support the advanced 802.11ac wireless protocol, enabling 5GHz band operation up to 1.3Gbps. These new capabilities are made possible by the inclusion of Broadcom’s powerful 5G WiFi chipset. 2.4GHz band capabilities work up to 450Mbps so the concurrent combined bandwidth of the RT-AC66U is 1.75Gbps. This unique router features sophisticated ASUS AiRadar technology to amplify signal strength and improve directionality to overcome environmental obstructions and increase data transfer rates. The inclusion of 5G WiFi makes the RT-AC66U one of the most future-proof routers on the market, ready for the next generation of high speed networks.
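The quoted 1.75Gbps is simply the sum of the two bands' peak link rates; a quick sanity check of the press release's arithmetic:

```python
# Peak PHY link rates claimed for the RT-AC66U, per band (Mbps).
rates_mbps = {"5GHz (802.11ac)": 1300, "2.4GHz (802.11n)": 450}

combined = sum(rates_mbps.values())
print(f"Combined: {combined} Mbps = {combined / 1000:.2f} Gbps")  # 1750 Mbps = 1.75 Gbps
```

Note these are peak signaling rates, not real-world throughput, and the two bands operate concurrently, which is why they can be summed at all.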
Extensive feature list enhances networking experiences
The RT-AC66U features easy and fast setup in just three steps with the ASUSWRT dashboard, while strict QoS (Quality of Service) standards help ensure improved bandwidth optimization and multitasking capabilities. Up to eight SSIDs are supported, so users can easily set up distinct networks with different access privileges and levels of security without having to compromise passwords. The RT-AC66U supports the new IPv6 standard for better packet transmission and addressing.
USB applications extend router versatility
With its twin USB ports, the RT-AC66U becomes a true multi-role device. Attaching a 3G dongle allows it to share 3G connections among several users on different devices. Full DLNA compatibility affords smooth connectivity with a variety of entertainment platforms, including game consoles, tablets, Blu-ray players, smart TVs, and set-top boxes. The RT-AC66U can also serve as a dedicated FTP server and/or printer server, letting users share resources for greater productivity while reducing costs as there is no need to buy standalone server hardware.
Full 802.11ac product lineup
In addition to the RT-AC66U router, ASUS is also releasing the PCE-AC66 and USB-AC53 client adapters, both capable of 802.11ac speeds. The dual-band PCE-AC66 is a PCI Express client card for desktops with a 3 x 3 high-powered transmission antenna design. It offers transfer rates up to 1.3Gbps in 5GHz and 450Mbps in 2.4GHz operation modes. For easy USB upgrades to 802.11ac, the compact USB-AC53 dongle plugs into a USB port with a 2 x 2 design. In 5GHz operation the USB-AC53 offers transfer rates up to 867Mbps, while in 2.4GHz transfer speeds are up to 300Mbps, for a total throughput of around 1.2Gbps. The PCE-AC66 and USB-AC53 adapters are enabled by Broadcom’s 5G WiFi chipsets and demonstrate ASUS technology leadership in bringing a full 802.11ac ecosystem to consumers.
Subject: General Tech, Shows and Expos | June 4, 2012 - 01:02 PM | Jeremy Hellstrom
Tagged: windows 8, ultrabook, taichi, tablet, computex, asus, transformer book, Transformer
ASUS has been showing off its new mobile products at Computex, as you can see from Ryan's pictures below this post. You can catch all the PC Perspective coverage by checking this page, where all Computex related content will show up. With all the fancy new products, the more pictures the better, which is why you should also check out the coverage The Tech Report put up. They snapped a few photos of the dual display Taichi, which doesn't have a lid; instead there is a second, independent touch screen display on the back, which takes the idea behind ASUS' Transformer series to a whole new level. That doesn't mean ASUS has abandoned the Transformer, though: the company also showed off three brand new Ivy Bridge powered Transformer Books and two separate tablets, the 600 and the 810, with the Tegra powered 600 running Windows RT for ARM and the 810 running Windows 8 thanks to its Atom processor.
"We're rarely surprised at trade shows these days, but Asus CEO Johnny Shih saved something special for the end of his press conference today. After discussing everything from cloud storage to all-in-ones to notebooks and tablets, he pulled out one more thing: the Taichi. It looked like any other notebook, and Shih took great pleasure in showing off the "beautiful black mirror finish" on the top panel. I couldn't help but shake my head and sigh; the glossy finish was covered in fingerprints and smudges."
Here is some more Tech News from around the web:
- Nvidia reveals driver support for Windows 8 preview release @ The Inquirer
- Gigabyte goes dual-port Thunderbolt at Computex @ Kitguru
- Gigabyte’s first A85X socket FM2 motherboard @ Kitguru
- ARM Expects 20-Nanometer Processors By Late 2013 @ Slashdot
- Fujifilm FinePix T400 Review @ TechReviewSource
- CoolerMaster Joint Contest @ NikKTech
Subject: General Tech, Shows and Expos | June 1, 2012 - 04:35 PM | Scott Michaud
Tagged: E3, E3 12
The 2012 Electronic Entertainment Expo (E3 2012) takes place next week from Tuesday through Thursday. Monday will start the week with four press conferences. Stay tuned for as much PC-centric coverage as we can feed you over the week, including expected Unreal Engine 4 news.
If you work at an electronic entertainment retailer -- prepare to be asked weird questions next week.
E3 2012 is kicking off next week and a lot of announcements are expected to come out of it. We here at PC Perspective are most interested in learning more about Unreal Engine 4, which is expected to be publicly announced at the expo. We expect that something else will surprise us as well.
You better be here next week!
Monday will kick off E3 with four press conferences:
- Microsoft from 12:30PM EST to 2PM EST
- EA from 4PM EST to 5PM EST
- Ubisoft from 6PM EST to 7PM EST
- Sony from 9PM EST to 10:30PM EST
Nintendo will take the stage the following day, with a conference at noon EST on Tuesday.
Apart from Unreal Engine news, I am very excited to find out what Valve has in store for E3. Valve has a private meeting room this year, something they skipped for E3 2011. At E3 2010 they demonstrated Portal 2, and this year it is possible that we will see little more than DOTA 2 -- but there is always hope for something more.
What are you guys and girls hoping to see? Unreal 4? Valve cake? Beyond Good and Evil 2?
Subject: Shows and Expos | May 31, 2012 - 12:21 PM | Jeremy Hellstrom
Tagged: thunderbolt, ssd, ocz, lightfoot, computex 2012, z-drive R4 CloudServ, SandForce 2581, PCIe SSD
OCZ will be showing off some of the same things they showed at CES, though the products are now much closer to release. Lightfoot is their external Thunderbolt enclosure, which will house SSDs that can utilize the extra bandwidth provided by the new external transfer technology. They will also be showing off Enterprise class PCIe SSDs, the brand new Intrepid line of SSDs, and software designed to replace SANs in a network environment. Keep an eye out for more details as Computex draws nigh.
SAN JOSE, CA—May 31, 2012—OCZ Technology Group, Inc. (Nasdaq:OCZ), a leading provider of high-performance solid-state drives (SSDs) for computing devices and systems, will showcase the Company's latest client and enterprise storage solutions at Computex 2012 in Taipei, Taiwan June 6 through June 9 at the Taipei International Convention Center.
Continuing to demonstrate leadership in both the enterprise and consumer markets, OCZ will display a comprehensive lineup of its innovative SSD products. For high-end business, server, and OEM clients, OCZ will showcase PCI Express (PCIe) SAN acceleration and replacement solutions, and unveil the impending Intrepid 3 SATA III SSD Series based on the Everest 2 architecture. Live demos at the booth will include both the current industry-leading Z-Drive R4 CloudServ PCIe SSD that delivers over one million IOPS, and the highly anticipated Z-Drive R5 Series based on the co-developed OCZ-Marvell Kilimanjaro platform that raises the bar in performance, reliability, and endurance. OCZ will also showcase the VXL Storage Accelerator software that enables large scale deployment of a virtualized environment for businesses to eliminate the need for costly tier-1 SANs in a wide range of enterprise IT infrastructures.
For client storage, OCZ will showcase the flagship Vertex 4 SATA III SSD, along with the upcoming ‘Lightfoot’ portable SSD designed with the Intel Thunderbolt platform that excels in data transfer speeds and offers high capacity for multimedia professionals.
Subject: General Tech, Shows and Expos | May 18, 2012 - 04:24 AM | Scott Michaud
Tagged: E3, unreal engine 4, ue4
Epic Games demonstrated Unreal Engine 4 behind closed doors at GDC a few months ago. The first screenshots from that demo have now been released, although not much more has been made public about it. While not completely epic, it definitely is exciting. Unreal Engine 4 is expected to be further unveiled at or near E3 in June.
Epic has been quiet about the next generation of their game development platform. Only a handful of lucky individuals were shown the demo at GDC, and those who saw it could not share their experience. Epic has said that they would have liked to demonstrate their product publicly, but were unable to due to non-disclosure agreements that they themselves were placed under.
I think that guy needs some thixomolded magnesium alloy. He seems to be running a little hot.
Either he’ll cool down, or produce a beautiful white bloom.
(Screenshot Credit: PC Gamer)
Wired claims that Epic will unveil the rest of Unreal Engine 4 in June which likely means that it will occur on or around the E3 press conference.
It is thus easy to speculate that whatever gagged Epic will likely be unveiled at E3 too.
The major hook of the demo was that it was running in the editor and not in a baked game executable. This means that developers will have a much easier time creating their games and will spend much less time on preparation before getting to work. About the only concrete tidbit in the article is that Unreal Engine 4 will not have baked lighting. Unreal Engine 4 will likely use a technique similar to Battlefield 3's, where global illumination is calculated at runtime -- nearly a must for properly lit destructibility.
Subject: Shows and Expos | May 15, 2012 - 04:12 PM | Jeremy Hellstrom
Tagged: NVIDIA VGX, nvidia, GTC 2012, virtual graphics, virtual machine
One of the more interesting announcements so far at the GTC has been NVIDIA's wholehearted leap into desktop virtualization with the NVIDIA VGX series of add-on cards. Not really a graphics card and more specialized than Tesla, VGX will give you a GPU accelerated virtual machine. If you are wondering why you would need that, consider a VM which can handle an Aero desktop and stream live HD video, where the processing power comes not from the CPU but from a virtual GPU. NVIDIA has paired the boards with a GPU Hypervisor, which integrates with existing VM platforms to provide virtual GPU control, as well as another piece of software which lets you pick and choose which graphics resources your users get.
SAN JOSE, Calif.—GPU Technology Conference—May 15, 2012—NVIDIA today unveiled the NVIDIA VGX platform, which enables IT departments to deliver a virtualized desktop with the graphics and GPU computing performance of a PC or workstation to employees using any connected device.
With the NVIDIA VGX platform in the data center, employees can now access a true cloud PC from any device – thin client, laptop, tablet or smartphone – regardless of its operating system, and enjoy a responsive experience for the full spectrum of applications previously only available on an office PC.
NVIDIA VGX enables knowledge workers for the first time to access a GPU-accelerated desktop similar to a traditional local PC. The platform’s manageability options and ultra-low latency remote display capabilities extend this convenience to those using 3D design and simulation tools, which had previously been too intensive for a virtualized desktop.
Integrating the VGX platform into the corporate network also enables enterprise IT departments to address the complex challenges of “BYOD” – employees bringing their own computing device to work. It delivers a remote desktop to these devices, providing users the same access they have on their desktop terminal. At the same time, it helps reduce overall IT spend, improve data security and minimize data center complexity.
“NVIDIA VGX represents a new era in desktop virtualization,” said Jeff Brown, general manager of the Professional Solutions Group at NVIDIA. “It delivers an experience nearly indistinguishable from a full desktop while substantially lowering the cost of a virtualized PC.”
The NVIDIA VGX platform is part of a series of announcements NVIDIA is making today at the GPU Technology Conference (GTC), all of which can be accessed in the GTC online press room.
The VGX platform addresses key challenges faced by global enterprises, which are under constant pressure both to control operating costs and to use IT as a competitive edge that allows their workforces to achieve greater productivity and deliver new products faster. Delivering virtualized desktops can also minimize the security risks inherent in sharing critical data and intellectual property with an increasingly internationalized workforce.
NVIDIA VGX is based on three key technology breakthroughs:
- NVIDIA VGX Boards. These are designed for hosting large numbers of users in an energy-efficient way. The first NVIDIA VGX board is configured with four GPUs and 16 GB of memory, and fits into the industry-standard PCI Express interface in servers.
- NVIDIA VGX GPU Hypervisor. This software layer integrates into commercial hypervisors, such as the Citrix XenServer, enabling virtualization of the GPU.
- NVIDIA User Selectable Machines (USMs). This manageability option allows enterprises to configure the graphics capabilities delivered to individual users in the network, based on their demands. Capabilities range from true PC experiences available with the NVIDIA standard USM to enhanced professional 3D design and engineering experiences with NVIDIA Quadro or NVIDIA NVS GPUs.
The NVIDIA VGX platform enables up to 100 users to be served from a single server powered by one VGX board, dramatically improving user density on a single server compared with traditional virtual desktop infrastructure (VDI) solutions. It sharply reduces such issues as latency, sluggish interaction and limited application support, all of which are associated with traditional VDI solutions.
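Some quick arithmetic puts that 100-user density in perspective, assuming (and this is our assumption, not NVIDIA's) that users divide evenly across the board's four GPUs and 16 GB of memory:

```python
# Rough per-user share of one VGX board's resources at the quoted density.
users, gpus, memory_gb = 100, 4, 16

users_per_gpu = users / gpus
memory_per_user_mb = memory_gb * 1024 / users
print(f"Users per GPU: {users_per_gpu:.0f}")            # 25
print(f"Memory per user: {memory_per_user_mb:.1f} MB")  # ~163.8 MB
```

That is far less memory per seat than a discrete GPU would offer, which is consistent with VGX targeting knowledge-worker desktops rather than heavy 3D workloads by default.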
With the NVIDIA VGX platform, IT departments can serve every user in the organization – from knowledge workers to designers – with true PC-like interactive desktops and applications.
NVIDIA VGX Boards
NVIDIA VGX boards are the world’s first GPU boards designed for data centers. The initial NVIDIA VGX board features four GPUs, each with 192 NVIDIA CUDA architecture cores and 4 GB of frame buffer. Designed to be passively cooled, the board fits within existing server-based platforms.
The boards benefit from a range of advancements, including hardware virtualization, which enables many users who are running hosted virtual desktops to share a single GPU and enjoy a rich, interactive graphics experience; support for low-latency remote display, which greatly reduces the lag currently experienced by users; and, redesigned shader technology to deliver higher power efficiency.
NVIDIA VGX GPU Hypervisor
The NVIDIA VGX GPU Hypervisor is a software layer that integrates into a commercial hypervisor, enabling access to virtualized GPU resources. This allows multiple users to share common hardware and ensure virtual machines running on a single server have protected access to critical resources. As a result, a single server can now economically support a higher density of users, while providing native graphics and GPU computing performance.
This new technology is being integrated by leading virtualization companies, such as Citrix, to add full hardware graphics acceleration to their full range of VDI products.
NVIDIA User Selectable Machines
NVIDIA USMs allow the NVIDIA VGX platform to deliver the advanced experience of professional GPUs to those requiring them across an enterprise. This enables IT departments to easily support multiple types of users from a single server.
USMs allow better utilization of hardware resources, with the flexibility to configure and deploy new users’ desktops based on changing enterprise needs. This is particularly valuable for companies providing infrastructure as a service, as they can repurpose GPU-accelerated servers to meet changing demand throughout the day, week or season.
Subject: Shows and Expos | May 15, 2012 - 03:43 PM | Jeremy Hellstrom
Tagged: tesla, nvidia, GTC 2012, kepler, CUDA
SAN JOSE, Calif.—GPU Technology Conference—May 15, 2012—NVIDIA today unveiled a new family of Tesla GPUs based on the revolutionary NVIDIA Kepler GPU computing architecture, which makes GPU-accelerated computing easier and more accessible for a broader range of high performance computing (HPC) scientific and technical applications.
The new NVIDIA Tesla K10 and K20 GPUs are computing accelerators built to handle the most complex HPC problems in the world. Designed with an intense focus on high performance and extreme power efficiency, Kepler is three times as efficient as its predecessor, the NVIDIA Fermi architecture, which itself established a new standard for parallel computing when introduced two years ago.
“Fermi was a major step forward in computing,” said Bill Dally, chief scientist and senior vice president of research at NVIDIA. “It established GPU-accelerated computing in the top tier of high performance computing and attracted hundreds of thousands of developers to the GPU computing platform. Kepler will be equally disruptive, establishing GPUs broadly into technical computing, due to their ease of use, broad applicability and efficiency.”
The Tesla K10 and K20 GPUs were introduced at the GPU Technology Conference (GTC), as part of a series of announcements from NVIDIA, all of which can be accessed in the GTC online press room.
NVIDIA developed a set of innovative architectural technologies that make the Kepler GPUs high performing and highly energy efficient, as well as more applicable to a wider set of developers and applications. Among the major innovations are:
- SMX Streaming Multiprocessor – The basic building block of every GPU, the SMX streaming multiprocessor was redesigned from the ground up for high performance and energy efficiency. It delivers up to three times more performance per watt than the Fermi streaming multiprocessor, making it possible to build a supercomputer that delivers one petaflop of computing performance in just 10 server racks. SMX’s energy efficiency was achieved by increasing its number of CUDA architecture cores by four times, while reducing the clock speed of each core, power-gating parts of the GPU when idle and maximizing the GPU area devoted to parallel-processing cores instead of control logic.
- Dynamic Parallelism – This capability enables GPU threads to dynamically spawn new threads, allowing the GPU to adapt dynamically to the data. It greatly simplifies parallel programming, enabling GPU acceleration of a broader set of popular algorithms, such as adaptive mesh refinement, fast multipole methods and multigrid methods.
- Hyper-Q – This enables multiple CPU cores to simultaneously use the CUDA architecture cores on a single Kepler GPU. This dramatically increases GPU utilization, slashing CPU idle times and advancing programmability. Hyper-Q is ideal for cluster applications that use MPI.
“We designed Kepler with an eye towards three things: performance, efficiency and accessibility,” said Jonah Alben, senior vice president of GPU Engineering and principal architect of Kepler at NVIDIA. “It represents an important milestone in GPU-accelerated computing and should foster the next wave of breakthroughs in computational research.”
NVIDIA Tesla K10 and K20 GPUs
The NVIDIA Tesla K10 GPU delivers the world’s highest throughput for signal, image and seismic processing applications. Optimized for customers in oil and gas exploration and the defense industry, a single Tesla K10 accelerator board features two GK104 Kepler GPUs that deliver an aggregate performance of 4.58 teraflops of peak single-precision floating point and 320 GB per second memory bandwidth.
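As a sanity check on that aggregate figure: peak single-precision throughput is cores x 2 FLOPs per cycle (one fused multiply-add) x clock. Working backwards from the quoted 4.58 teraflops, using GK104's published 1536-core configuration (the clock is our inference, not stated in the release):

```python
# Infer the per-GPU core clock implied by the Tesla K10's quoted peak.
gpus, cores_per_gpu, flops_per_cycle = 2, 1536, 2
quoted_flops = 4.58e12  # aggregate single-precision peak for the board

implied_clock_ghz = quoted_flops / (gpus * cores_per_gpu * flops_per_cycle) / 1e9
print(f"Implied core clock: {implied_clock_ghz * 1000:.0f} MHz")  # ~745 MHz
```

A clock in the mid-700 MHz range would be well below GeForce GK104 parts, fitting the board's passive-cooling, power-efficiency focus.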
The NVIDIA Tesla K20 GPU is the new flagship of the Tesla GPU product family, designed for the most computationally intensive HPC environments. Expected to be the world’s highest-performance, most energy-efficient GPU, the Tesla K20 is planned to be available in the fourth quarter of 2012.
The Tesla K20 is based on the GK110 Kepler GPU. This GPU delivers three times more double precision compared to Fermi architecture-based Tesla products and it supports the Hyper-Q and dynamic parallelism capabilities. The GK110 GPU is expected to be incorporated into the new Titan supercomputer at the Oak Ridge National Laboratory in Tennessee and the Blue Waters system at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.
“In the two years since Fermi was launched, hybrid computing has become a widely adopted way to achieve higher performance for a number of critical HPC applications,” said Earl C. Joseph, program vice president of High-Performance Computing at IDC. “Over the next two years, we expect that GPUs will be increasingly used to provide higher performance on many applications.”
Preview of CUDA 5 Parallel Programming Platform
In addition to the Kepler architecture, NVIDIA today released a preview of the CUDA 5 parallel programming platform. Available to more than 20,000 members of NVIDIA’s GPU Computing Registered Developer program, the platform will enable developers to begin exploring ways to take advantage of the new Kepler GPUs, including dynamic parallelism.
The CUDA 5 parallel programming model is planned to be widely available in the third quarter of 2012. Developers can get access to the preview release by signing up for the GPU Computing Registered Developer program on the CUDA website.
Subject: General Tech, Graphics Cards, Shows and Expos | May 15, 2012 - 10:14 AM | Ryan Shrout
Tagged: nvidia, GTC 2012, live
Are you interested in GPUs? Maybe GPU computing or even some cloud-based GeForce announcements? Chances are then you'll want to tune in to the NVIDIA GPU Technology Conference keynote today at 10:30am PT / 1:30pm ET.
NVIDIA CEO Jen-Hsun Huang is expected to be on stage with three new announcements, one of which will likely be the GK110 GPU we have all been waiting to hear about. Another has been teased as "a new major cloud gaming technology", while the third... well, I really have no idea. It should be exciting though, so tune in and watch along with us!
You can catch it all at http://www.gputechconf.com/!
Subject: General Tech, Systems, Shows and Expos | April 17, 2012 - 04:55 PM | Scott Michaud
Tagged: NAB 12, ACME
ACME Portable Machines showed off their Seahawk 100 computer on the show floor of the National Association of Broadcasters 2012 show. Multiple monitors, ruggedized, semi-portable, but slightly out of date on the hardware side.
When you think about portable computing: do you think about a laptop or a tablet? Either way you probably do not think about this product. But, should you?
Well if you did you would probably know it.
ACME Portable Machines is showing off the Seahawk 100 at NAB this week. The purpose of the device is to bring a fully functional multi-monitor computer where you need it, plug it in, and be assured that it will work.
Just don't give in to the temptation to make people call you the operator...
Functionally the device is slightly out of date, with an Intel Core 2 Quad Q9550S 2.83 GHz processor, NVIDIA GeForce GTX 260 video card, and 2-8 GB of RAM. If your desire is to play Starcraft 2 on the three monitors then you should have no problems, but that is not why you are purchasing this PC. If you are the type of person to visit the NAB show, you probably will wish to include much more RAM than the default 2GB -- and even if you are not, 2GB is quite low nowadays.
It's not a tumah!
Price is only available by quote, but check out their website for more information. The design definitely looks interesting for users of its niche -- professionals in the field who just cannot live without the flexibility of multiple screens.
Thanks to our friend Colleen for the heads up and photos!
Subject: Editorial, Shows and Expos | March 6, 2012 - 04:45 AM | Scott Michaud
Tagged: GDC 12, GDC
The Game Developers Conference (GDC) has a long history of being underappreciated by the general public, though it has become more mainstream than it once was. Five years ago, a panel called “Programmer’s Challenge” -- Jeopardy for videogame programmers -- ran its fifth iteration, and the recording was uploaded to Google Video. Check out what GDC once was.
Take a bunch of programmers and ask them what happens when you XOR Frosted Flakes and Frosted Cheerios
I'm not kidding.
Questions from the Programmer’s Challenge are very entertaining, and the video is well worth the 45 minutes it takes to watch. It is exactly what you should expect from a Jeopardy game with “Blizzard Sues Everyone” as an example category title.
You are a high level EA executive. You have 327,600 man hours of game development to complete in the 12 weeks before Christmas. If you have 300 employees working 40 hours a week, how many hours of unpaid overtime per week should you force each employee to do before laying them off in January?
Part of the fun is keeping up with the logic puzzles which get quite difficult. The game rounds out near the end with binary algebra of breakfast cereals. Put a little smile in your Tuesday.
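For the curious, the mock executive question above does have an answer, and the arithmetic is straightforward:

```python
# Working through the "EA executive" scheduling question.
required_hours = 327_600                 # man-hours of development needed
weeks, employees, paid_hours = 12, 300, 40

paid_total = weeks * employees * paid_hours        # 144,000 paid hours
shortfall = required_hours - paid_total            # 183,600 hours short
overtime_per_week = shortfall / (weeks * employees)
print(f"Unpaid overtime: {overtime_per_week:.0f} hours/week each")  # 51
```

That is 51 hours of unpaid overtime per employee per week on top of the paid 40 -- which is, of course, the joke.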
Subject: General Tech, Shows and Expos | March 6, 2012 - 03:41 AM | Scott Michaud
Tagged: valve, Steam Box, GDC, GDC 12
Valve and Razer formally agree to support the Razer Hydra motion controller in Valve’s four most popular titles and two upcoming ones.
A little over two years ago, Valve and Razer announced a partnership for their Sixense high-precision motion controllers. During CES 2010, attendees were able to experiment with a prototype motion controller from Sixense to control Left 4 Dead 2. Sixense TrueMotion controllers were later released by Razer last June as the Razer Hydra.
Now you're thinking with controllers.
This Game Developers Conference (GDC) fast-forwards us to almost a year after the launch of the Razer Hydra. The price of the controller has dropped $40 to $99.99 at some point between then and now. Valve has also announced that support will be extended from Portal 2 and Left 4 Dead 2 to include Half-Life 2, Team Fortress 2, and the upcoming Dota 2 and Counter-Strike: Global Offensive.
The fishiest part of this whole announcement involves the Steam Box rumor from a few days ago. Valve appears to be very focused on the best portions of console gaming for the PC all of a sudden. I could easily see motion controls being used to support the Steam Box, or whatever it might be called -- especially if it were used for more than just gaming and by more than just gamers.
So what do you all think?
Subject: General Tech, Shows and Expos | March 3, 2012 - 09:18 PM | Scott Michaud
Tagged: GDC 12, GDC, crytek
Crytek unveils their large presence at the Game Developers Conference (GDC 2012) occurring next week: what projects will be on the show floor and which will be discussed privately by appointment.
The Electronic Entertainment Expo (E3) tends to be where most gamers get their overdose of gaming news. Far fewer gamers know of the Game Developers Conference, which occurs about three months earlier. Especially in recent years, GDC coverage sometimes ends up more exciting than E3, with announcements being more technical and oriented toward developers.
A call out to interested developers.
Crytek published a press release on their website outlining their products. The release is quite cryptic in its wording, but more information should be available soon.
GFACE, our recently announced social entertainment service, and its business development team is on the lookout for fun third-party social, casual, core free2play games that can complement our launch line up. Everyone interested in becoming part of GFACE should contact us at firstname.lastname@example.org to set up an appointment to learn more about the GFACE Social Media Publishing Platform to “Play.Together.Live.”
Crytek’s first freemium PC Online FPS Game Service Warface invites players to check out our PVE and PVP gameplay.
GDC attendees can participate in CryENGINE presentations every full hour. Topics that will be covered are next-generation DX 11 graphics and tools upgrades, Cinebox, creating characters for CryENGINE, AI Systems, UI Actions and Flow Graph and After Action feature set for Serious Games.
CryENGINE®3 Cinebox™ will also be on the showfloor and we’d love to show you more about it. For more information, please visit mycryengine.com or contact us at email@example.com
Real Time Immersive, Inc. (RTI) is a simulation and serious games studio established to support CryENGINE® licensees in the serious game and simulation market space. The team will be present on the show floor and show their latest developments.
Crytek uses their own vocabulary to categorize projects which use their engine. Your project is a “Game” if it is a typical videogame such as Crysis or Mechwarrior Online. Your project is a “Serious Game” if you use their game technology for professional applications, such as Lockheed Martin developing or demonstrating aircraft technology. Your project is a “Visualization” if you use game technology to demonstrate architecture or produce TV, film, and similar content in the engine.
I am most interested to find out more details about Warface, and specifically what they could possibly be describing as an FPS Game Service with PVE gameplay. How about you? Comment away.
Subject: Editorial, General Tech, Systems, Shows and Expos | March 3, 2012 - 05:16 PM | Scott Michaud
Tagged: valve, Steam Box, steam, GDC 12
It is rumored that Valve will announce a Steam hardware platform as early as GDC next week, although that could be pushed back as late as E3 in June.
Steam has grown atop the PC platform and now counts over 40 million active user accounts. For perspective, the Xbox 360 has sold 65.8 million units to date, a figure that includes replacement consoles bought by users whose older Xbox 360s died and who did not go the cardboard coffin route. Of course, these figures do not account for the level of hardware performance each user can utilize, although Valve does keep regular surveys of that.
A console with admined dedicated servers to kick the teabagging and griefing Steam punks.
Within the last couple of years, Valve has been popping into the news seemingly out of the blue. Allow me to draw your attention to three main events.
At the last GDC, Valve announced “The Big Picture” mode for their Steam software. The Big Picture is an interface for Steam that is friendly to users seated on a couch several feet away from a large-screen TV. While “The Big Picture” has yet to be released, it does set the stage for a great Home Theatre PC user interface for PC games, as well as potentially other media.
I must admit, that controller does not look the most ergonomic... but it is just a patent filing.
Last year, Valve also filed a patent with the US Patent Office for a video game controller with user-swappable control components. Their patent filings show a controller which looks quite similar to an Xbox 360 controller, except that the thumbsticks can be replaced with touch pads, a trackball, and potentially other devices. Return of Missile Command anyone?
Also, a little over two years ago, Valve announced a partnership with Razer for their Sixense high-precision motion controllers. It is possible that Valve has been supporting this technology with this future in mind all along. While motion controllers have not proven to be particularly successful for gaming, they are an accepted method of controlling a device. Perhaps The Big Picture will be optimized to support Sixense and compatible devices?
The Verge goes beyond claiming that Valve will announce the Steam Box and has included specifications for a closed-doors prototype of the system. The machine, rumored to have been used in presentations to partners at CES, contained an Intel Core i7 CPU, 8GB of RAM, and an NVIDIA GPU.
You know if Microsoft had focused on Media Center for gaming rather than the Xbox...
It is very unclear whether Valve will attempt to take a loss on the platform in hopes of making it back in Steam commissions. It is possible that Valve will simply push the platform to OEM partners, but they may also release and market a first-party device of their own.
I am interested to see how Valve will push the Home Theatre PC market. The main disadvantage that the PC platform has at the moment is simply marketing and development money. It is also possible that they wish to expand out and support other media through their Steam service as well.
At the very least, we should have a viable Home Theatre PC user interface as well as sharp lines between hardware profiles. A developer on the PC would love to know the exact number of potential users to expect when supporting a certain hardware configuration. Valve has always been keen on supplying hardware profile statistics, and this would certainly be a natural evolution of that.
Subject: General Tech, Mobile, Shows and Expos | February 28, 2012 - 07:27 PM | Scott Michaud
Tagged: MWC 12, Android 5.0, Android
Android Ice Cream Sandwich is currently getting rolled out to compatible devices at a leisurely pace. The OS itself is, for the most part, well appreciated by both developers and end-users. As the rollout progresses and minor maintenance patches are created, Google is looking forward to the next major version.
Just got Ice Cream Sandwich and they're already talking about the future. U Jelly? : D
ComputerWorld went out to Barcelona to check out Mobile World Congress and of course could not resist reporting on Android. In an interview with Hiroshi Lockheimer, Google VP of Engineering for Mobile, we are treated to a few indirect statements about the next major version of Android.
The major release timeframe for Android is said to remain an annual endeavor, which would slate Android J (5.0) for an autumn release. During the discussion, Lockheimer noted that there is flexibility in when developers wish to roll out updates. While that sounds to me like Google is allowing OEMs and carriers to take as long as they desire to implement new Android releases, ComputerWorld has apparently heard rumors of Android 5-powered phones appearing as early as summer.
Despite ComputerWorld’s best efforts, Google would not confirm the dessert associated with Android 5. Best guesses point to the name Jelly Bean, supported by a glass jar of jelly beans on the show floor.
Subject: General Tech, Processors, Mobile, Shows and Expos | February 25, 2012 - 07:06 PM | Scott Michaud
Tagged: texas instruments, MWC 12, arm, A9, A15
Texas Instruments could not wait until Mobile World Congress to start throwing punches. Despite recent financial problems resulting in the closure of two fabrication plants, TI believes that their product should speak for itself. Texas Instruments recently released a video showing their dual-core OMAP5 processor, based on the ARM Cortex-A15, besting a quad-core ARM Cortex-A9 at rendering websites.
Chuck Norris joke.
On top of its two-core disadvantage, the 800 MHz OMAP5 processor was also clocked roughly 40 percent slower than the 1.3 GHz Cortex-A9. The OMAP5 is said to be able to reach 2.5 GHz if necessary when released commercially.
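As a back-of-the-envelope sketch using the frequencies from the demo, the deficits stack up like this:

```python
omap5_mhz = 800      # dual-core Cortex-A15 (OMAP5 demo unit)
a9_mhz = 1300        # quad-core Cortex-A9

# Fraction by which the OMAP5 trails the A9 in clock speed (~0.38)
clock_deficit = 1 - omap5_mhz / a9_mhz
# Fraction by which it trails in core count (2 cores vs 4)
core_deficit = 1 - 2 / 4

print(f"OMAP5 clocked {clock_deficit:.0%} slower with {core_deficit:.0%} fewer cores")
```

Strictly speaking the clock deficit is about 38 percent rather than a full 40, but either way the A15 won the rendering race with half the cores and a lot less frequency.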
Certain portions of the video did look a bit fishy, however. Firstly, CNet actually loaded quicker on the A9 processor, but it idled a bit before advancing to the second page. The A9 could have been stuck loading an object that the OMAP5 had no issue with, but it does seem a bit weird.
The fishiest part of the video is that the quad-core A9, which we assume to be a Tegra 3, is running Honeycomb while the OMAP5 is running Ice Cream Sandwich. Ice Cream Sandwich brings substantial performance enhancements over Honeycomb.
We have no doubt that the ARM Cortex-A15 will be much improved over the current A9. The issue here is that TI cannot successfully prove that with this demonstration.
Subject: General Tech, Mobile, Shows and Expos | February 24, 2012 - 06:18 PM | Scott Michaud
Tagged: nvidia, DirectTouch, MWC 12
As a part of their Tegra 3 product, NVIDIA embedded the ability to handle some of the touchscreen processing on the CPU. The offloading allows for increased power efficiency, by reducing the number of powered components, as well as improved touch responsiveness. Atmel, Cypress, and Synaptics are three leading touch-controller companies who join N-Trig, Raydium, and Focaltech in supporting the DirectTouch architecture.
Touchy subject, I know -- but...
Advancements in touch technology are definitely welcome, especially when the words power efficiency or responsiveness are involved. Both NVIDIA and Intel have been looking for ways to reduce the amount of electronics behind your phone or tablet; the less hardware required to do the same work, the better off we are. It is great to see NVIDIA taking the lead in innovation where it is needed the most.
While I do not mean to rain upon NVIDIA’s bright blue skies, I must make a note. Despite the precision brought by the high sample rate, there does appear to be quite a bit of latency between where the demonstrator's finger is and where the touch is reported. I would be curious to see where that latency occurs.
Of course this issue probably has nothing to do with NVIDIA. Videogames, particularly on consoles, are known to have latencies floating up to 100ms as the input device does not influence the frames being rendered often enough. The latency could come in from the touch device itself, from the software, the operating system, and/or whatever else.
We do not know where the latency occurs, but I expect that whoever crushes it will have a throne awaiting them somewhere in Silicon Valley.
Subject: General Tech, Mobile, Shows and Expos | February 24, 2012 - 04:29 PM | Scott Michaud
Tagged: MWC 12, mozilla, B2G, LG
Mozilla will show off their marketplace for web apps at Mobile World Congress 2012. Mozilla Marketplace will support the upcoming Boot to Gecko (B2G) operating system for mobile devices such as smartphones and tablets. It is rumored that they will announce LG as a partner to develop either a tablet or a phone for developers of the B2G platform.
I ~ <3 Paypal... I guess.
PayPal has been announced as the payment processor for the Mozilla Marketplace. PayPal is not universally adored, although we can understand why Mozilla would need to use an existing package. Prices are locked to one of 30 tiers, so pricing is not entirely flexible, but it does run the gamut from 99 cents to $50, as well as, of course, free.
Hopefully we will get more details about Boot to Gecko or the Mozilla-powered LG phone at MWC in the coming days.
Subject: General Tech, Processors, Systems, Mobile, Shows and Expos | February 20, 2012 - 01:50 AM | Scott Michaud
Tagged: Rosepoint, ISSCC 2012, ISSCC, Intel
If there is one thing that Intel is good at, it is writing a really big check to go in a new direction right when absolutely needed. Intel has released press information on what should be expected from their presence at the International Solid-State Circuits Conference which is currently in progress until the 23rd. The headliner for Intel at this event is their Rosepoint System on a Chip (SoC) which looks to lower power consumption by rethinking the RF transceiver and including it on the die itself. While the research has been underway for over a decade at this point, pressure from ARM has pushed Intel to, once again, throw money at R&D until their problems go away.
Intel could have easily trolled us all and have named this SoC "Centrino".
Almost ten years ago, AMD had Intel in a very difficult position. Intel fought to keep clock-rates high until AMD changed their numbering scheme to give proper credit to their higher performance-per-clock components. Intel dominated, legally or otherwise, the lower end market with their Celeron line of processors.
AMD responded with a series of well-timed attacks against Intel. AMD jabbed Intel in the face with the release of the Sempron processor line and punched them in the gut by filing an anti-trust suit against Intel, aiming to more easily sell their processors in mainstream PCs.
At around this time, Intel decided to entirely pivot their product direction and made plans to take their Netburst architecture behind the shed. AMD has yet to recover from the tidal wave which the Core architectures crashed upon them.
Intel wishes to stop assaulting your battery indicator.
With the surge of ARM processors that have been fundamentally designed for lower power consumption than Intel’s x86-based competition, things look bleak for the expanding mobile market. Leave it to Intel to, once again, simply cut a gigantic check.
Intel is in the process of cutting power wherever possible in their mobile offerings. To remain competitive with ARM, Intel is not above outside-the-box solutions including the integration of more power-hungry components directly into the main processor. Similar to NVIDIA’s recent integration of touchscreen hardware into their Tegra 3 SoC, Intel will push the traditionally very power-hungry Wi-Fi transceivers into the SoC and supposedly eliminate all analog portions of the component in the process.
I am not too knowledgeable about Wi-Fi transceivers so I am not entirely sure how big of a jump Intel has made in their development, but it appears to be very significant. Intel is said to discuss this technology more closely during their talk on Tuesday morning titled, “A 20dBm 2.4GHz Digital Outphasing Transmitter for WLAN Application in 32nm CMOS.”
This paper is about a WiFi-compliant (802.11g/n) transmitter using Intel’s 32nm process and techniques leveraging Intel transistors to achieve record performance (power consumption per transmitted data better than state-of-the art). These techniques are expected to yield even better results when moved to Intel’s 22nm process and beyond.
What we do know is that the Rosepoint SoC will be manufactured at 32nm and is allegedly quite easy to scale down to smaller processes when necessary. Intel has also stated that while only Wi-Fi is currently supported, other frequencies including cellular bands could be developed in the future.
We will need to wait until later to see how this will affect the real world products, but either way -- this certainly is a testament to how much change a dollar can be broken into.