Subject: General Tech, Shows and Expos | May 18, 2012 - 04:24 AM | Scott Michaud
Tagged: E3, unreal engine 4, ue4
Epic Games demonstrated Unreal Engine 4 behind closed doors at GDC a few months ago. The first screenshots from that demo have now been released, although little else has been made public about it. While not completely epic, it is definitely exciting. Unreal Engine 4 is expected to be unveiled further at or near E3 in June.
Epic has been quiet about the next generation of its game development platform. Only a handful of lucky individuals were shown the demo at GDC, and those who saw it could not share their experience. Epic has said that it would have liked to demonstrate the product publicly, but was unable to due to non-disclosure agreements that it was itself placed under.
I think that guy needs some thixomolded magnesium alloy. He seems to be running a little hot.
Either he’ll cool down, or produce a beautiful white bloom.
(Screenshot Credit: PC Gamer)
Wired claims that Epic will unveil the rest of Unreal Engine 4 in June, which likely means it will occur at or around the E3 press conferences.
It is thus easy to speculate that whatever placed Epic under those agreements will likely be unveiled at E3 as well.
The major hook of the demo was that it was running in the editor, not in a baked game executable. This means that developers will have a much easier time creating their games and will spend much less time on preparation before they can work. About the only concrete tidbit in the article is that Unreal Engine 4 will not have baked lighting. Unreal Engine 4 will likely use a technology similar to that in Battlefield 3, where global illumination is calculated at runtime -- nearly a must for properly lit destructibility.
Subject: Shows and Expos | May 15, 2012 - 04:12 PM | Jeremy Hellstrom
Tagged: NVIDIA VGX, nvidia, GTC 2012, virtual graphics, virtual machine
One of the more interesting announcements so far at GTC has been NVIDIA's wholehearted leap into desktop virtualization with the NVIDIA VGX series of add-on cards. Not really a graphics card, and more specialized than the Tesla, a VGX board gives you a GPU-accelerated virtual machine. If you are wondering why you would need that, consider a VM which can handle an Aero desktop and stream live HD video, where the processing power comes not from the CPU but from a virtual GPU. NVIDIA has paired the boards with the VGX GPU Hypervisor, which can integrate with existing VM platforms to provide virtual GPU control, as well as another piece of software which allows you to pick and choose what graphics resources your users get.
SAN JOSE, Calif.—GPU Technology Conference—May 15, 2012—NVIDIA today unveiled the NVIDIA VGX platform, which enables IT departments to deliver a virtualized desktop with the graphics and GPU computing performance of a PC or workstation to employees using any connected device.
With the NVIDIA VGX platform in the data center, employees can now access a true cloud PC from any device – thin client, laptop, tablet or smartphone – regardless of its operating system, and enjoy a responsive experience for the full spectrum of applications previously only available on an office PC.
NVIDIA VGX enables knowledge workers for the first time to access a GPU-accelerated desktop similar to a traditional local PC. The platform’s manageability options and ultra-low latency remote display capabilities extend this convenience to those using 3D design and simulation tools, which had previously been too intensive for a virtualized desktop.
Integrating the VGX platform into the corporate network also enables enterprise IT departments to address the complex challenges of “BYOD” – employees bringing their own computing device to work. It delivers a remote desktop to these devices, providing users the same access they have on their desktop terminal. At the same time, it helps reduce overall IT spend, improve data security and minimize data center complexity.
“NVIDIA VGX represents a new era in desktop virtualization,” said Jeff Brown, general manager of the Professional Solutions Group at NVIDIA. “It delivers an experience nearly indistinguishable from a full desktop while substantially lowering the cost of a virtualized PC.”
The NVIDIA VGX platform is part of a series of announcements NVIDIA is making today at the GPU Technology Conference (GTC), all of which can be accessed in the GTC online press room.
The VGX platform addresses key challenges faced by global enterprises, which are under constant pressure both to control operating costs and to use IT as a competitive edge that allows their workforces to achieve greater productivity and deliver new products faster. Delivering virtualized desktops can also minimize the security risks inherent in sharing critical data and intellectual property with an increasingly internationalized workforce.
NVIDIA VGX is based on three key technology breakthroughs:
- NVIDIA VGX Boards. These are designed for hosting large numbers of users in an energy-efficient way. The first NVIDIA VGX board is configured with four GPUs and 16 GB of memory, and fits into the industry-standard PCI Express interface in servers.
- NVIDIA VGX GPU Hypervisor. This software layer integrates into commercial hypervisors, such as the Citrix XenServer, enabling virtualization of the GPU.
- NVIDIA User Selectable Machines (USMs). This manageability option allows enterprises to configure the graphics capabilities delivered to individual users in the network, based on their demands. Capabilities range from true PC experiences available with the NVIDIA standard USM to enhanced professional 3D design and engineering experiences with NVIDIA Quadro or NVIDIA NVS GPUs.
The NVIDIA VGX platform enables up to 100 users to be served from a single server powered by one VGX board, dramatically improving user density on a single server compared with traditional virtual desktop infrastructure (VDI) solutions. It sharply reduces such issues as latency, sluggish interaction and limited application support, all of which are associated with traditional VDI solutions.
With the NVIDIA VGX platform, IT departments can serve every user in the organization – from knowledge workers to designers – with true PC-like interactive desktops and applications.
NVIDIA VGX Boards
NVIDIA VGX boards are the world’s first GPU boards designed for data centers. The initial NVIDIA VGX board features four GPUs, each with 192 NVIDIA CUDA architecture cores and 4 GB of frame buffer. Designed to be passively cooled, the board fits within existing server-based platforms.
The boards benefit from a range of advancements, including hardware virtualization, which enables many users who are running hosted virtual desktops to share a single GPU and enjoy a rich, interactive graphics experience; support for low-latency remote display, which greatly reduces the lag currently experienced by users; and redesigned shader technology to deliver higher power efficiency.
NVIDIA VGX GPU Hypervisor
The NVIDIA VGX GPU Hypervisor is a software layer that integrates into a commercial hypervisor, enabling access to virtualized GPU resources. This allows multiple users to share common hardware and ensure virtual machines running on a single server have protected access to critical resources. As a result, a single server can now economically support a higher density of users, while providing native graphics and GPU computing performance.
This new technology is being integrated by leading virtualization companies, such as Citrix, to add full hardware graphics acceleration to their full range of VDI products.
NVIDIA User Selectable Machines
NVIDIA USMs allow the NVIDIA VGX platform to deliver the advanced experience of professional GPUs to those requiring them across an enterprise. This enables IT departments to easily support multiple types of users from a single server.
USMs allow better utilization of hardware resources, with the flexibility to configure and deploy new users’ desktops based on changing enterprise needs. This is particularly valuable for companies providing infrastructure as a service, as they can repurpose GPU-accelerated servers to meet changing demand throughout the day, week or season.
Subject: Shows and Expos | May 15, 2012 - 03:43 PM | Jeremy Hellstrom
Tagged: tesla, nvidia, GTC 2012, kepler, CUDA
SAN JOSE, Calif.—GPU Technology Conference—May 15, 2012—NVIDIA today unveiled a new family of Tesla GPUs based on the revolutionary NVIDIA Kepler GPU computing architecture, which makes GPU-accelerated computing easier and more accessible for a broader range of high performance computing (HPC) scientific and technical applications.
The new NVIDIA Tesla K10 and K20 GPUs are computing accelerators built to handle the most complex HPC problems in the world. Designed with an intense focus on high performance and extreme power efficiency, Kepler is three times as efficient as its predecessor, the NVIDIA Fermi architecture, which itself established a new standard for parallel computing when introduced two years ago.
“Fermi was a major step forward in computing,” said Bill Dally, chief scientist and senior vice president of research at NVIDIA. “It established GPU-accelerated computing in the top tier of high performance computing and attracted hundreds of thousands of developers to the GPU computing platform. Kepler will be equally disruptive, establishing GPUs broadly into technical computing, due to their ease of use, broad applicability and efficiency.”
The Tesla K10 and K20 GPUs were introduced at the GPU Technology Conference (GTC), as part of a series of announcements from NVIDIA, all of which can be accessed in the GTC online press room.
NVIDIA developed a set of innovative architectural technologies that make the Kepler GPUs high performing and highly energy efficient, as well as more applicable to a wider set of developers and applications. Among the major innovations are:
- SMX Streaming Multiprocessor – The basic building block of every GPU, the SMX streaming multiprocessor was redesigned from the ground up for high performance and energy efficiency. It delivers up to three times more performance per watt than the Fermi streaming multiprocessor, making it possible to build a supercomputer that delivers one petaflop of computing performance in just 10 server racks. SMX’s energy efficiency was achieved by increasing its number of CUDA architecture cores by four times, while reducing the clock speed of each core, power-gating parts of the GPU when idle and maximizing the GPU area devoted to parallel-processing cores instead of control logic.
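To see why that trade-off works, here is a back-of-envelope sketch using the classic CMOS dynamic-power model, where power scales roughly with cores × V² × f and supply voltage tracks frequency near the operating point. The core counts and clocks below are toy numbers of my own choosing, not NVIDIA's actual figures:

```python
# Hypothetical illustration: many slower cores vs. fewer faster ones.
# Model: throughput ~ cores * clock; power ~ cores * V^2 * f with V ~ f.

def perf_per_watt(n_cores, freq):
    perf = n_cores * freq           # throughput ~ cores * clock
    power = n_cores * freq ** 3     # P ~ n * V^2 * f, with V ~ f
    return perf / power             # simplifies to 1 / f^2

fermi_like = perf_per_watt(n_cores=1.0, freq=1.0)    # normalized baseline
kepler_like = perf_per_watt(n_cores=4.0, freq=0.55)  # 4x cores, ~half clock

print(round(kepler_like / fermi_like, 2))  # 3.31 -- roughly triple perf/watt
```

Under this simplified model the core count cancels out entirely and the gain comes from the lower clock alone; in practice the extra cores are what keep throughput from dropping along with frequency.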
- Dynamic Parallelism – This capability enables GPU threads to dynamically spawn new threads, allowing the GPU to adapt dynamically to the data. It greatly simplifies parallel programming, enabling GPU acceleration of a broader set of popular algorithms, such as adaptive mesh refinement, fast multipole methods and multigrid methods.
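A CUDA listing is beyond the scope of a news post, but the control-flow pattern is easy to sketch. In the purely illustrative Python analogy below (on Kepler the spawning happens on the GPU itself via device-side kernel launches, with no host round-trip), a unit of work inspects its own data and decides whether to spawn finer-grained work, as in adaptive mesh refinement:

```python
# Hypothetical CPU-side analogy of the dynamic-parallelism pattern:
# a work item subdivides itself where its data demands more resolution,
# instead of the host pre-planning every launch.

def refine(interval, f, tol):
    """Adaptively subdivide `interval` until f is roughly linear on it."""
    a, b = interval
    mid = (a + b) / 2
    # Error estimate: distance of the midpoint value from linear interpolation.
    err = abs(f(mid) - (f(a) + f(b)) / 2)
    if err < tol or (b - a) < 1e-6:
        return [interval]  # fine enough; keep this cell as-is
    # "Child launches": this work item spawns two finer-grained work items.
    return refine((a, mid), f, tol) + refine((mid, b), f, tol)

cells = refine((0.0, 2.0), lambda x: x ** 3, tol=1e-3)
# The strongly curved end of the interval ends up with smaller cells.
```

The point of doing this on the GPU is exactly the one the release makes: the decision to spawn more work depends on intermediate results, so without device-side launches the CPU would have to be consulted at every level of refinement.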
- Hyper-Q – This enables multiple CPU cores to simultaneously use the CUDA architecture cores on a single Kepler GPU. This dramatically increases GPU utilization, slashing CPU idle times and advancing programmability. Hyper-Q is ideal for cluster applications that use MPI.
“We designed Kepler with an eye towards three things: performance, efficiency and accessibility,” said Jonah Alben, senior vice president of GPU Engineering and principal architect of Kepler at NVIDIA. “It represents an important milestone in GPU-accelerated computing and should foster the next wave of breakthroughs in computational research.”
NVIDIA Tesla K10 and K20 GPUs
The NVIDIA Tesla K10 GPU delivers the world’s highest throughput for signal, image and seismic processing applications. Optimized for customers in oil and gas exploration and the defense industry, a single Tesla K10 accelerator board features two GK104 Kepler GPUs that deliver an aggregate performance of 4.58 teraflops of peak single-precision floating point and 320 GB per second memory bandwidth.
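One figure worth deriving from those quoted specs (plain arithmetic on the press-release numbers, not an NVIDIA-supplied metric) is the compute-to-bandwidth ratio, which hints at how many operations a kernel must perform per byte fetched before the board stops being memory-bound:

```python
# Deriving the compute-to-bandwidth ratio from the quoted K10 figures.
peak_sp_tflops = 4.58   # aggregate single-precision peak, per the release
mem_bw_gbs = 320.0      # aggregate memory bandwidth, per the release

flops_per_byte = (peak_sp_tflops * 1e12) / (mem_bw_gbs * 1e9)
print(round(flops_per_byte, 1))  # 14.3 FLOPs per byte fetched
```

A ratio that high favors arithmetically dense kernels, which fits the signal, image, and seismic processing workloads the release calls out.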
The NVIDIA Tesla K20 GPU is the new flagship of the Tesla GPU product family, designed for the most computationally intensive HPC environments. Expected to be the world’s highest-performance, most energy-efficient GPU, the Tesla K20 is planned to be available in the fourth quarter of 2012.
The Tesla K20 is based on the GK110 Kepler GPU. This GPU delivers three times more double precision compared to Fermi architecture-based Tesla products and it supports the Hyper-Q and dynamic parallelism capabilities. The GK110 GPU is expected to be incorporated into the new Titan supercomputer at the Oak Ridge National Laboratory in Tennessee and the Blue Waters system at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.
“In the two years since Fermi was launched, hybrid computing has become a widely adopted way to achieve higher performance for a number of critical HPC applications,” said Earl C. Joseph, program vice president of High-Performance Computing at IDC. “Over the next two years, we expect that GPUs will be increasingly used to provide higher performance on many applications.”
Preview of CUDA 5 Parallel Programming Platform
In addition to the Kepler architecture, NVIDIA today released a preview of the CUDA 5 parallel programming platform. Available to more than 20,000 members of NVIDIA’s GPU Computing Registered Developer program, the platform will enable developers to begin exploring ways to take advantage of the new Kepler GPUs, including dynamic parallelism.
The CUDA 5 parallel programming model is planned to be widely available in the third quarter of 2012. Developers can get access to the preview release by signing up for the GPU Computing Registered Developer program on the CUDA website.
Subject: General Tech, Graphics Cards, Shows and Expos | May 15, 2012 - 10:14 AM | Ryan Shrout
Tagged: nvidia, GTC 2012, live
Are you interested in GPUs? Maybe GPU computing or even some cloud-based GeForce announcements? Chances are then you'll want to tune in to the NVIDIA GPU Technology Conference keynote today at 10:30am PT / 1:30pm ET.
NVIDIA CEO Jen-Hsun Huang is expected to be on stage with three new announcements, one of which will likely be the GK110 GPU we have all been waiting to hear about. Another has been teased as "a new major cloud gaming technology" while the third... well, I really have no idea. It should be exciting, though, so tune in and watch along with us!
You can catch it all at http://www.gputechconf.com/!
Subject: General Tech, Systems, Shows and Expos | April 17, 2012 - 04:55 PM | Scott Michaud
Tagged: NAB 12, ACME
ACME Portable Machines showed off their Seahawk 100 computer on the show floor of the National Association of Broadcasters 2012 show. Multiple monitors, ruggedized, semi-portable, but slightly out of date on the hardware side.
When you think about portable computing: do you think about a laptop or a tablet? Either way you probably do not think about this product. But, should you?
Well if you did you would probably know it.
ACME Portable Machines is showing off the Seahawk 100 at NAB this week. The purpose of the device is to bring a fully functional multi-monitor computer wherever you need it, plug it in, and be assured that it will work.
Just don't give in to the temptation to make people call you the operator...
Functionally, the device is slightly out of date, with an Intel Core 2 Quad Q9550S 2.83 GHz processor, an NVIDIA GeForce GTX 260 video card, and 2-8 GB of RAM. If your desire is to play Starcraft 2 across the three monitors then you should have no problems, but that is not why you would purchase this PC. If you are the type of person to visit the NAB show you will probably want much more RAM than the default 2 GB -- and even if you are not, 2 GB is quite low nowadays.
It's not a tumah!
Price is only available by quote, but check out their website for more information. The design definitely looks interesting for users of its niche -- professionals in the field who just cannot live without the flexibility of multiple screens.
Thanks to our friend Colleen for the heads up and photos!
Subject: Editorial, Shows and Expos | March 6, 2012 - 04:45 AM | Scott Michaud
Tagged: GDC 12, GDC
The Game Developers Conference (GDC) has a long history of being underappreciated by the general public, though it has become more mainstream than it once was. Five years ago, a panel called "Programmer's Challenge" -- Jeopardy for videogame programmers -- was in its fifth iteration and was uploaded to Google Video. Check out what GDC once was.
Take a bunch of programmers and ask them what happens when you XOR Frosted Flakes and Frosted Cheerios
I'm not kidding.
Questions from the Programmer’s Challenge are very entertaining and well worth the 45 minutes it takes to watch. It is exactly what you should expect from a Jeopardy game with “Blizzard Sues Everyone” as an example category title.
You are a high level EA executive. You have 327,600 man hours of game development to complete in the 12 weeks before Christmas. If you have 300 employees working 40 hours a week, how many hours of unpaid overtime per week should you force each employee to do before laying them off in January?
Part of the fun is keeping up with the logic puzzles which get quite difficult. The game rounds out near the end with binary algebra of breakfast cereals. Put a little smile in your Tuesday.
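For the curious, the tongue-in-cheek overtime question above actually has a clean answer. Working it through:

```python
# Working the quoted puzzle (an endorsement of arithmetic, not of crunch).
total_hours = 327_600           # development hours to complete
weeks = 12                      # weeks before Christmas
employees = 300
paid_hours_per_week = 40

paid_hours = employees * paid_hours_per_week * weeks  # 144,000 paid hours
shortfall = total_hours - paid_hours                  # 183,600 hours short
overtime_per_employee_per_week = shortfall / (employees * weeks)
print(overtime_per_employee_per_week)  # 51.0 unpaid hours per week, each
```

That is 91 total hours per employee per week, which is presumably the joke.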
Subject: General Tech, Shows and Expos | March 6, 2012 - 03:41 AM | Scott Michaud
Tagged: valve, Steam Box, GDC, GDC 12
Valve and Razer formally agree to support Razer Hydra motion controller in Valve’s four most popular titles and two upcoming ones.
A little over two years ago, Valve and Razer announced a partnership for their Sixense high-precision motion controllers. During CES 2010, attendees were able to experiment with a prototype motion controller from Sixense to control Left 4 Dead 2. Sixense TrueMotion controllers were later released by Razer last June as the Razer Hydra.
Now you're thinking with controllers.
This Game Developers Conference (GDC) fast-forwards us to almost a year after the launch of the Razer Hydra. The price of the controller has dropped $40, to $99.99, at some point between then and now. Valve has also announced that support will be extended from Portal 2 and Left 4 Dead 2 to include Half-Life 2, Team Fortress 2, and the upcoming Dota 2 and Counter-Strike: Global Offensive.
The fishiest part of this whole announcement involves the Steam Box rumor from a few days ago. Valve appears to be very focused on bringing the best portions of console gaming to the PC all of a sudden. I could easily see motion controls being used to support the Steam Box or whatever it might be called -- especially if it were used for more than just gaming and by more than just gamers.
So what do you all think?
Subject: General Tech, Shows and Expos | March 3, 2012 - 09:18 PM | Scott Michaud
Tagged: GDC 12, GDC, crytek
Crytek unveils its large presence at the Game Developers Conference (GDC 2012), occurring next week: which projects will be on the show floor and which will be discussed privately by appointment.
The Electronic Entertainment Expo (E3) tends to be where most gamers get their overdose of gaming news. Far fewer gamers know of the Game Developers Conference, which occurs about three months earlier. Especially in recent years, GDC coverage sometimes ends up more exciting than E3, with announcements that are more technical and developer-oriented.
A call out to interested developers.
Crytek published a press release on their website outlining their products. The release is quite cryptic in its wording, but more information should be available soon.
GFACE, our recently announced social entertainment service, and its business development team is on the lookout for fun third-party social, casual, core free2play games that can complement our launch line up. Everyone interested in becoming part of GFACE should contact us at email@example.com to set up an appointment to learn more about the GFACE Social Media Publishing Platform to “Play.Together.Live.”
Crytek’s first freemium PC Online FPS Game Service Warface invites players to check out our PVE and PVP gameplay.
GDC attendees can participate in CryENGINE presentations every full hour. Topics that will be covered are next-generation DX 11 graphics and tools upgrades, Cinebox, creating characters for CryENGINE, AI Systems, UI Actions and Flow Graph and After Action feature set for Serious Games.
CryENGINE®3 Cinebox™ will also be on the showfloor and we’d love to show you more about it. For more information, please visit mycryengine.com or contact us at firstname.lastname@example.org
Real Time Immersive, Inc. (RTI) is a simulation and serious games studio established to support CryENGINE® licensees in the serious game and simulation market space. The team will be present on the show floor and show their latest developments.
Crytek uses its own vocabulary to categorize projects which use its engine. Your project is a "Game" if it is a typical videogame such as Crysis or Mechwarrior Online. Your project is a "Serious Game" if you use their game technology for professional applications, such as Lockheed Martin developing or demonstrating aircraft technology. Your project is a "Visualization" if you use game technology to demonstrate architecture or produce TV, film, and similar content in the engine.
I am most interested in finding out more details about Warface, and specifically what they could possibly be describing as an FPS Game Service with PVE gameplay. How about you? Comment away.
Subject: Editorial, General Tech, Systems, Shows and Expos | March 3, 2012 - 05:16 PM | Scott Michaud
Tagged: valve, Steam Box, steam, GDC 12
It is rumored that Valve will announce a Steam hardware platform as early as GDC next week although that could be pushed back as late as E3 in June.
Steam has grown atop the PC platform and now consists of over 40 million active user accounts. For perspective, the Xbox 360 has sold 65.8 million units to date, a figure that includes units sold to users whose older Xbox 360s died and who did not go the cardboard coffin route. Of course, the figure does not account for the level of hardware performance each user can utilize, although Valve does run regular surveys of that.
A console with admined dedicated servers to kick the teabagging and griefing Steam punks.
Within the last couple of years, Valve has been popping into the news seemingly out of the blue. Allow me to draw your attention to three main events.
At the last GDC, Valve announced "The Big Picture" mode for its Steam software. The Big Picture is an interface for Steam that is friendly to users seated on a couch several feet away from a large-screen TV. While "The Big Picture" has yet to be released, it does set the stage for a great Home Theatre PC user interface for PC games, as well as potentially other media.
I must admit, that controller does not look the most ergonomic... but it is just a patent filing.
Last year, Valve also filed a patent with the US Patent Office for a video game controller with user-swappable control components. The filings show a controller which looks quite similar to an Xbox 360 controller, where the thumbsticks can be replaced with touch pads, a trackball, and potentially other devices. Return of Missile Command, anyone?
Also, a little over two years ago, Valve announced a partnership with Razer for their Sixense high-precision motion controllers. It is possible that Valve was supporting this technology with this future in mind all along. While motion controllers have not proven particularly successful for gaming, they are accepted as a method of controlling a device. Perhaps The Big Picture will be optimized to support Sixense and compatible devices?
The Verge goes beyond the claim that Valve will announce the Steam Box and includes specifications for a closed-doors prototype of the system. The system, rumored to have been used in presentations to partners at CES, contained an Intel Core i7 CPU, 8 GB of RAM, and an NVIDIA GPU.
You know if Microsoft had focused on Media Center for gaming rather than the Xbox...
It is very unclear whether Valve will attempt to take a loss on the platform in hopes of making it back in Steam commissions. It is possible that Valve will simply push the platform to OEM partners, but it is also possible that they will release and market their own canonical device.
I am interested to see how Valve will push the Home Theatre PC market. The main disadvantage that the PC platform has at the moment is simply marketing and development money. It is also possible that they wish to expand out and support other media through their Steam service as well.
At the very least, we should get a viable Home Theatre PC user interface as well as sharp lines between hardware profiles. A developer on the PC would love to know the exact number of potential users to expect when supporting a certain hardware configuration. Valve has always been keen on supplying hardware profile statistics, and this would certainly be a sharp evolution of that.
Subject: General Tech, Mobile, Shows and Expos | February 28, 2012 - 07:27 PM | Scott Michaud
Tagged: MWC 12, Android 5.0, Android
Android Ice Cream Sandwich is currently being rolled out to compatible devices at a leisurely pace. The OS itself is, for the most part, well appreciated by both developers and end-users. As the rollout progresses and minor maintenance patches are created, Google is looking forward to the next major version.
Just got Ice Cream Sandwich and they're already talking about the future. U Jelly? : D
ComputerWorld went out to Barcelona to check out Mobile World Congress and of course could not resist reporting on Android. In an interview with Hiroshi Lockheimer, Google VP of Engineering for Mobile, we are treated to a few indirect statements about the next major version of Android.
The major release timeframe for Android is said to remain an annual endeavor. An annual release schedule would slate Android J (5.0) for an autumn timeframe. During the discussion, Lockheimer noted that there is flexibility in when developers wish to roll out updates. While that sounds to me like Google allowing OEMs and carriers to take as long as they desire to implement new Android releases, ComputerWorld appears to have heard rumors of Android 5-powered phones appearing as early as summer.
Despite ComputerWorld's best efforts, Google would not confirm the dessert associated with Android 5. Best guesses point to the name Jelly Bean, which is supported by a glass jar of jelly beans on the show floor.
Subject: General Tech, Processors, Mobile, Shows and Expos | February 25, 2012 - 07:06 PM | Scott Michaud
Tagged: texas instruments, MWC 12, arm, A9, A15
Texas Instruments could not wait until Mobile World Congress to start throwing punches. Despite their recent financial problems, which resulted in the closure of two fabrication plants, TI believes that their product should speak for itself. Texas Instruments recently released a video showing their dual-core OMAP5 processor, based on the ARM Cortex-A15, besting a quad-core ARM Cortex-A9 in rendering websites.
Chuck Norris joke.
On top of its two-core disadvantage, the 800 MHz OMAP5 processor was clocked roughly 40 percent slower than the 1.3 GHz Cortex-A9. The OMAP5 is said to be able to reach 2.5 GHz if necessary when released commercially.
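The clock-gap figure is easy to check; the quoted 40 percent is a slight round-up of the exact arithmetic:

```python
# Checking the clock-speed gap between the two chips in TI's demo.
omap5_mhz = 800.0
cortex_a9_mhz = 1300.0

percent_slower = (1 - omap5_mhz / cortex_a9_mhz) * 100
print(round(percent_slower, 1))  # 38.5 -- i.e. "roughly 40 percent"
```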
Certain portions of the video did look a bit fishy, however. Firstly, CNet actually loaded more quickly on the A9 processor, but it idled a bit before advancing to the second page. The A9 could have been stuck loading an object that the OMAP5 did not have an issue with, but it does seem a bit odd.
The fishiest part of the video is that the quad-core A9, which we assume to be a Tegra 3, is running Honeycomb while the OMAP5 is running Ice Cream Sandwich. Ice Cream Sandwich carries substantial performance enhancements over Honeycomb.
We have no doubt that the ARM Cortex-A15 will be much improved over the current A9. The issue here is that TI cannot successfully prove that with this demonstration.
Subject: General Tech, Mobile, Shows and Expos | February 24, 2012 - 06:18 PM | Scott Michaud
Tagged: nvidia, DirectTouch, MWC 12
As a part of their Tegra 3 product, NVIDIA added the ability to offload some of the touchscreen processing onto the SoC itself. The offloading allows for increased power efficiency, by reducing the number of powered components, as well as increased touch responsiveness. Atmel, Cypress, and Synaptics are three leading touch-controller companies who join N-Trig, Raydium, and Focaltech in supporting the DirectTouch architecture.
Touchy subject, I know -- but...
Advancements in touch technology are definitely welcome, especially when the words power efficiency or responsiveness are involved. Both NVIDIA and Intel have been looking for ways to reduce the amount of electronics behind your phone or tablet: the less hardware required to do the same work, the better off we are. It is great to see NVIDIA taking the lead in innovation where it is needed the most.
While I do not mean to rain upon NVIDIA's bright blue skies, I must make a note: despite the precision brought by the higher sample rate, there appears to be quite a bit of latency between where the demonstrator's finger is and where the touch is reported. I would be curious to see where that latency occurs.
Of course, this issue probably has nothing to do with NVIDIA. Videogames, particularly on consoles, are known to have input latencies of up to 100 ms, as the input device does not influence the frames being rendered often enough. The latency could come from the touch device itself, the software, the operating system, and/or anything else in between.
We do not know where the latency occurs, but I expect that whoever crushes it will have a throne awaiting them somewhere in Silicon Valley.
Subject: General Tech, Mobile, Shows and Expos | February 24, 2012 - 04:29 PM | Scott Michaud
Tagged: MWC 12, mozilla, B2G, LG
Mozilla will show off their marketplace for web apps at Mobile World Congress 2012. Mozilla Marketplace will support the upcoming Boot to Gecko (B2G) operating system for mobile devices such as smartphones and tablets. It is rumored that they will announce LG as a partner to develop either a tablet or a phone for developers of the B2G platform.
I ~ <3 Paypal... I guess.
PayPal has been announced as the payment processor for the Mozilla Marketplace. PayPal is not universally adored, although we can understand why Mozilla would need to use an existing solution. Prices are locked to one of 30 tiers, so pricing is not entirely flexible, but it does run the gamut from 99 cents to $50, as well as, of course, free.
Hopefully we will get more details about Boot to Gecko or the Mozilla-powered LG phone at MWC in the coming days.
Subject: General Tech, Processors, Systems, Mobile, Shows and Expos | February 20, 2012 - 01:50 AM | Scott Michaud
Tagged: Rosepoint, ISSCC 2012, ISSCC, Intel
If there is one thing that Intel is good at, it is writing a really big check to go in a new direction right when absolutely needed. Intel has released press information on what to expect from its presence at the International Solid-State Circuits Conference, which runs through the 23rd. The headliner for Intel at this event is their Rosepoint System on a Chip (SoC), which looks to lower power consumption by rethinking the RF transceiver and including it on the die itself. While the research has been underway for over a decade at this point, pressure from ARM has pushed Intel to, once again, throw money at R&D until their problems go away.
Intel could have easily trolled us all and have named this SoC "Centrino".
Almost ten years ago, AMD had Intel in a very difficult position. Intel fought to keep clock-rates high until AMD changed their numbering scheme to give proper credit to their higher performance-per-clock components. Intel dominated, legally or otherwise, the lower end market with their Celeron line of processors.
AMD responded with a series of well-timed attacks against Intel. AMD jabbed Intel in the face with the release of the Sempron processor line and punched them in the gut by filing an antitrust suit against Intel, seeking to make it easier to get AMD processors into mainstream PCs.
At around this time, Intel decided to entirely pivot their product direction and made plans to take their Netburst architecture behind the shed. AMD has yet to recover from the tidal wave that the Core architecture crashed upon them.
Intel wishes to stop assaulting your battery indicator.
With the surge of ARM processors that have been fundamentally designed for lower power consumption than Intel’s x86-based competition, things look bleak for Intel in the expanding mobile market. Leave it to Intel to, once again, simply cut a gigantic check.
Intel is in the process of cutting power wherever possible in their mobile offerings. To remain competitive with ARM, Intel is not above outside-the-box solutions including the integration of more power-hungry components directly into the main processor. Similar to NVIDIA’s recent integration of touchscreen hardware into their Tegra 3 SoC, Intel will push the traditionally very power-hungry Wi-Fi transceivers into the SoC and supposedly eliminate all analog portions of the component in the process.
I am not too knowledgeable about Wi-Fi transceivers, so I am not entirely sure how big a jump Intel has made in their development, but it appears to be very significant. Intel is set to discuss this technology more closely during their talk on Tuesday morning titled, “A 20dBm 2.4GHz Digital Outphasing Transmitter for WLAN Application in 32nm CMOS.”
This paper is about a WiFi-compliant (802.11g/n) transmitter using Intel’s 32nm process and techniques leveraging Intel transistors to achieve record performance (power consumption per transmitted data better than state-of-the-art). These techniques are expected to yield even better results when moved to Intel’s 22nm process and beyond.
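The "outphasing" in the talk title refers to a known RF technique: an amplitude-modulated signal is split into two constant-envelope signals, which efficient, fully digital amplifier stages can handle, and their sum reconstructs the original. Here is a minimal numerical sketch of that decomposition under an ideal baseband model; this is the textbook technique, not Intel's actual implementation.

```python
import numpy as np

def outphase(signal, a_max):
    """Split a complex baseband signal into two constant-envelope components."""
    amplitude = np.abs(signal)
    phase = np.angle(signal)
    # Outphasing angle: larger amplitude -> smaller angle between the two parts.
    theta = np.arccos(np.clip(amplitude / a_max, 0.0, 1.0))
    s1 = (a_max / 2) * np.exp(1j * (phase + theta))
    s2 = (a_max / 2) * np.exp(1j * (phase - theta))
    return s1, s2

# Toy amplitude-modulated tone as the signal to transmit.
t = np.linspace(0, 1, 1000, endpoint=False)
envelope = 0.5 + 0.4 * np.cos(2 * np.pi * 3 * t)
signal = envelope * np.exp(1j * 2 * np.pi * 50 * t)

s1, s2 = outphase(signal, a_max=1.0)

# Each component has a constant magnitude of a_max / 2 ...
assert np.allclose(np.abs(s1), 0.5) and np.allclose(np.abs(s2), 0.5)
# ... and their sum recovers the original amplitude-modulated signal.
assert np.allclose(s1 + s2, signal)
```

Because each branch has a constant envelope, the amplifiers never have to track amplitude, which is what makes an all-digital, analog-free transmitter plausible.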
What we do know is that the Rosepoint SoC will be manufactured at 32nm and is allegedly quite easy to scale down to smaller processes when necessary. Intel has also stated that while only Wi-Fi is currently supported, other frequencies including cellular bands could be developed in the future.
We will need to wait until later to see how this will affect the real world products, but either way -- this certainly is a testament to how much change a dollar can be broken into.
Subject: General Tech, Mobile, Shows and Expos | February 17, 2012 - 06:33 PM | Scott Michaud
Tagged: tegra 3, MWC, htc
Mobile World Congress (MWC) is approaching and you should expect our coverage of the event from a hardware perspective. The actual event occurs from February 27th through March 1st although, like most events, we expect that the news coverage will begin a couple of days before that time. Rumors about what will appear at the show are already surfacing and include a few leaks about upcoming HTC releases.
Probably there's a very simple answer to it... still curious though.
(Update: As pointed out in the comments, one of the phones actually IS Tegra 3 powered. I read it as including some other processor... and somehow I only found the LG X3 when looking for Tegra 3 powered phones.)
TechCrunch rounded up details from a few sources about several phones from HTC that are expected at MWC. Ice Cream Sandwich appears to be the common thread amongst each of the leaks. Of particular note, HTC appears to be demonstrating a 10.1” tablet running an NVIDIA Tegra 3 processor. Their phones, on the other hand, do not. (Update: Yeah they do, my mistake.)
Unlike (Update: Actually, like) HTC, LG is expected to reveal a Tegra-3 powered phone, the LG X3, at Mobile World Congress -- so Tegra 3 phones are not nonexistent -- just seemingly a scarce commodity. It would be interesting to know why NVIDIA’s Tegra 3 technology appears, at least from my standpoint, so common on tablets yet so scarce on phones.
Be sure to keep checking back for our coverage of the event and all of your other hardware needs.
Subject: General Tech, Shows and Expos | February 2, 2012 - 02:23 PM | Jeremy Hellstrom
Tagged: amd, Texas GamExperience
While we at PC Perspective wait for QuakeCon before heading to Texas, [H]ard|OCP hosted the AMD GamExperience in Dallas in January. This is the third year in a row they've given gamers the chance to experience the newest in games and gaming peripherals, and it must have been a good one since they only managed to recover enough to post the pictures today. As you can see below, they are just as hard on the audience as Ryan and the crew, but with $55,000+ worth of prizes to give out it is possible to get gamers to do pushups. Check out what you missed here.
We did have a man on the scene, if you haven't seen Steve's coverage you really should.
"We recently put on the third "GamExperience" here in Dallas, TX! We invited 600 of our closet friends and 20 companies that crank out some of the best computer hardware in the world and put them all in one room together for some gaming and geek talk. And yeah, free stuff too, about $50,000 worth!"
Here is some more Tech News from around the web:
- Physics hardware makes Kepler/GK104 fast @ SemiAccurate
- Kinect for Windows released @ Hack a Day
- How-To: Make Your Own Aerogel @ MAKE:Blog
- Asus denies problems with Transformer Prime again @ The Inquirer
- HTC sends out a fix for WiFi security problem @ The Inquirer
- Ex-Apple engineer emits Zevo ZFS for Mac OS @ The Register
- OC3D: 2012 Competition MADNESS
Subject: Graphics Cards, Processors, Shows and Expos | January 22, 2012 - 01:09 AM | Steve Grever
Tagged: overclocking, hardocp, ati, amd, 7970
More than 500 people made the trip to Eddie Deen’s Ranch in Dallas, Texas to attend AMD and HardOCP’s FX Game Experience event today. Many of the tech industry’s heavy hitters were on hand with interactive booths to showcase their latest PC hardware and provide people with around $50,000 in giveaways and prizes.
Check out our video coverage of the AMD [H]ardOCP FX GamExperience 2012 event!
ASUS, MSI, and Sapphire each brought their latest respective AMD-based motherboards and performance graphics cards to showcase at the event, including their Radeon HD 7950 and 7970 offerings. ASUS also gave the audience a closer look at some of their other PC gaming peripherals, wireless routers, and Blu-ray burners.
HardOCP founder Kyle Bennett put on a show for the crowd with numerous raffle drawings and crazy contests for people to win new AMD processors and other hardware from MSI, Gigabyte, ASUS, Corsair, Ergotech, Antec, Maingear, Optoma, Patriot Memory, Astro, Sapphire, Western Digital, ArcSoft, ASRock, vReveal, Diamond Multimedia, and Zotac.
Subject: General Tech, Graphics Cards, Shows and Expos | January 18, 2012 - 03:29 PM | Ryan Shrout
Tagged: CES, lucid, xlr8, vgware, cloud
Even though CES 2012 is behind us, there are still some things we took photos or video of that we wanted to show you. First up, Lucid had a suite off the strip to demonstrate a couple of new technologies coming from the company in 2012. VGWare is the current name for the cloud-based gaming technology based on the Lucid GPU virtualization technology that allows for games to be rendered on a server and played on a remote machine with only minimal hardware.
In the video above you see two integrated-GPU based notebooks playing Modern Warfare 2 (two instances) and Madagascar being rendered on a machine running an NVIDIA GTX 480 GPU.
Lucid intends to offer this technology to larger-scale companies that would want to compete with someone like OnLive or maybe even software developers directly. While that is what we expected, I told them that I would like to see a consumer version of this application - have a single high-powered gaming PC in your home and play games on multiple "thin client" PCs for LAN parties, etc. What do you all think - is that something you see as useful?
The second demo was for Lucid's XLR8 software that promises to improve performance of gaming on PCs, phones and tablets by intelligently managing display synchronization and GPU performance.
The really interesting part about XLR8 is the flexibility it offers - in our video you see it running on an ASUS Transformer tablet via the NVIDIA Tegra 2 SoC. Frame rates jumped about 40% but we didn't get enough hands on time with the configuration to truly make a decision on whether or not it was an improved gaming experience. Hopefully Lucid will get this technology to us soon for some hands-on time.
PC Perspective's CES 2012 coverage is sponsored by MSI Computer.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Shows and Expos | January 18, 2012 - 01:22 PM | Jeremy Hellstrom
Tagged: CES, lightning bolt, amd, razer, fiona, lucid, Silverstone
The Tech Report still has more to say about what they saw in Las Vegas this year, as they covered quite a bit of ground. They looked at AMD's Lightning Bolt connector, the company's competition for Thunderbolt, which is much less expensive to integrate into a system, especially considering it uses DisplayPort 1.2-style ports. They also played with Razer's popular Project Fiona, which is probably what Nokia wishes it had released instead of the N-Gage. Sandy Bridge features in their coverage of Zotac and EVGA, and the next generation of that chip showed up at MSI. There is plenty more coverage over at The Tech Report, so check it out, and don't forget all of our coverage at pcper.com/ces.
Here is some more Tech News from around the web:
- 2012 CES: ioSafe @ OCIA
- The Best of CES 2012 @ AnandTech
- CES 2012 in Pictures: Part 2
- CES 2012: Razer Fiona Tablet Project @ Benchmark Reviews
- HTL CES Live Coverage Part 3 @ Hi Tech Legion
- Kingston latest gadgets at CES 2012 @ Bjorn3D
- 2012 CES: Zotac @ OCIA
- CES 2012 Day 1 Coverage @ Neoseeker
- CES 2012 Day 2 Coverage @ Neoseeker
- HTL CES Live Coverage Part 2 @ Hi Tech Legion
Subject: Storage, Shows and Expos | January 16, 2012 - 06:36 PM | Allyn Malventano
Tagged: ssd, sandisk, PQI, memory, flash, CES
Sandisk had a booth with a large array of small NAND flash storage devices, though most of it appeared to be SD, CF, or parts for embedded mobile applications:
One of the more interesting pieces was a 64GB e.MMC NAND flash part that fit *within* the dimensions of a penny! This is not a plug-in module - it's the type that would be soldered onto the mainboard of a cell phone or other small mobile device:
While the booth was generally light on SSDs, there were a couple on display, namely the U100, in both 7mm (left) and 9.5mm (right) form factors:
The U100 is also available in an even smaller form factor. We're currently taking a look at an Ultrabook equipped with the same Sandisk U100 SSD, mounted to an even smaller PCB.
PQI has been a favorite of mine for years. They were among the first to make a really tiny thumb drive, and I'm glad to see they continue to make a versatile line of products:
A little-known fact is that PQI also has a line of SATA SSDs:
The S525 Series (also available as the S518, in a 1.8" form factor) is a bit long in the tooth and uses a dated JMicron controller, but PQI made the extra effort to include the optional USB 2.0 interface that most other manufacturers chose to omit.
More to follow
I've still got some pics to sift through, so stay tuned for more CES Storage goodies!