Nobody likes NVIDIA, even Apple won't play with them

Subject: General Tech | March 13, 2012 - 12:15 PM |
Tagged: TSMC, nvidia, macbook, kepler, Ivy Bridge, fermi, apple

NVIDIA has been having a rough time lately, with problems besetting them on all sides.  Their IGP business has been disembowelled by AMD's Llano, and even Intel now offers usable graphics with the HD 3000 on higher end Sandy Bridge chips.  The console makers seem to have settled on AMD as the provider of choice for the next generation of products, which locks NVIDIA out of that market for years to come, as console generations tend to last significantly longer than PC components.  Meanwhile the delays at TSMC have let AMD launch three families of next generation GPUs without NVIDIA being able to respond, which not only hurts NVIDIA's bottom line but lets AMD set their own pricing; when NVIDIA finally releases Kepler, its price will not be wholly of their choosing.

Now, according to SemiAccurate, they are losing a goodly portion of Apple's MacBook business as well.  The supply issues stemming from the fabrication problems were likely a big factor in Apple's decision to trim back GPU orders, but there is also the fact that the low to mid range GPU could well be going extinct.  Given the power of the forthcoming Intel HD 4000 and AMD's Trinity line of APUs, laptop and system makers will find it hard to justify including a discrete GPU, since only relatively expensive parts would meaningfully add to performance. That leaves NVIDIA providing GPUs only for high end MacBooks, a much less lucrative market than the mid range.  Don't even mention the previous issue of overheating GPUs.

ENGAD_macbookproboom.jpg

"That is exactly what SemiAccurate moles are telling us is going on. Nvidia can’t supply, so Apple threw them out on their proverbial magical experience. This doesn’t mean that Nvidia is completely out at Apple, the Intel GPUs are too awful to satisfy the higher end laptops, so there will need to be something in those. What that something is, we don’t definitively know yet, but the possibilities are vanishingly small."

Here is some more Tech News from around the web:

Tech Talk

 

Source: SemiAccurate

Google Rolling Out SSL Encrypted Search for International Users

Subject: General Tech | March 12, 2012 - 10:01 PM |
Tagged: SSL, search, international, google, encryption

Google recently announced on their Inside Search blog that the company would be rolling out the default SSL encrypted search option internationally for users signed in with a Google account. Previously, the company made SSL encryption the default setting for Gmail and provided an alternative encrypted.google.com page for users who wanted to opt in to encrypted search. Earlier this year, they began testing SSL encrypted search and search results pages for users signed into Google in the US, and they are now ready to expand the default setting to international users.

google_padlock.png

They announced that over the next few weeks, they will begin introducing an SSL (Secure Sockets Layer) encrypted search page for localized international Google pages such as google.co.uk (United Kingdom) and google.fr (France), among others. Further, they hope that their increased SSL commitment will encourage other websites to enable SSL on their domains to protect users from man-in-the-middle (MITM) attacks and to ensure their sessions stay private.
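As an aside on what protection from MITM attacks actually requires: an SSL/TLS client is only protected if it validates both the server's certificate chain and its hostname. A minimal, generic Python sketch (purely illustrative, nothing to do with Google's own rollout) shows the two checks a secure-by-default context enables:

```python
import ssl

# create_default_context() enables the two checks that defeat a
# man-in-the-middle: certificate chain validation against trusted CAs,
# and verification that the certificate was issued for the host we dialed.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# A socket wrapped with this context, e.g.
#   ctx.wrap_socket(sock, server_hostname="encrypted.google.com")
# will raise an SSLError / CertificateError if either check fails,
# which is exactly what stops an interceptor presenting a bogus cert.
```

Without both checks an attacker on the network path could simply present their own certificate and read the "encrypted" traffic.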

More encryption is a good thing, and international users will be pleased to finally get a taste of it for their Google search queries, especially now that the big G has enabled personalized search results.

Source: Google

GoFlow To Offer $99 tDCS Brain Augmentation Kit

Subject: General Tech | March 12, 2012 - 09:54 PM |
Tagged: tDCS, overclocking, DIY, brain, augmentation

The new Extreme Tech (RIP ET Classic) recently ran an article about some turbo boosting, overclocking goodness, but with a twist. Instead of the typical CPU or GPU hardware, the article talks about overclocking some wetware: the human brain. More specifically, a DIY kit called the GoFlow is in the works to enable affordable tDCS, or transcranial direct-current stimulation, which stimulates the brain into a "state of flow," enabling quicker learning and faster response times.

tDCS_schematic.jpg

The GoFlow β1 will be a $99 do-it-yourself tDCS kit that directs you in placing electrodes on the appropriate areas of your scalp and then pumping 2 milliamps of direct current from a 9V battery through your brain, enticing the neurons into a state of flow. This makes them more malleable, encouraging new pathways to form and learning speed to increase, as well as letting them fire more rapidly, bolstering thought processes. The kit, which is not yet available for purchase, will not require any soldering knowledge and will be housed in a plastic case along with wires, schematics, and a potentiometer to dial in the right amount of power.

The company behind the GoFlow β1 has also referenced cases of successful short-term tDCS testing, including tests in which UAV drone pilots and professional gamers learned their respective trades more quickly than average. Extreme Tech also mentions that tDCS can have therapeutic effects for people affected by Parkinson's or post-stroke motor dysfunction.

Right now the kit is still in the works, but interested users can sign up on the GoFlow website to be notified when it becomes available. At $99, is this something worth a shot, or do you prefer not to void the warranty on your brain? (heh)  Personally, statements on the website such as "our tDCS kit is the shit" and "get one of the first β1's and will help us develop β2" are not exactly instilling confidence in me, but if you're big into the early adopter adventure, GoFlow may have something for you to test.

Source: Extreme Tech

So when is that GTX680 coming out anyways?

Subject: General Tech | March 12, 2012 - 03:20 PM |
Tagged: nvidia, kepler, GK104, gtx680

Whether you are an NVIDIA fan or not, you are likely anxiously awaiting the release of NVIDIA's new GPU.  Since AMD currently holds the lead in graphics cards, there is no competition to lower the price of their high end cards, which can beat anything NVIDIA currently has on the market.  Depending on the price and performance of the new Kepler chip, its release should have an effect on the pricing of at least one line of AMD cards, be it Pitcairn or Cape Verde.  This is why SemiAccurate's pegging of the expected release dates, both paper and physical, is worth taking a look at, especially if your GPU recently died and you are looking at getting a new one.  We should have a good idea of the price, performance and availability by the end of March.

Nvidia Fermi1.jpg

"Today, March 8, is the day where the press that Nvidia flew in get their ‘tech day’, basically the deep dive on Kepler. Then, like we said a few days ago, Nvidia will paper launch the cards next Monday, March 12. From that point, things get a little hazy because people are arguing over two different days. Some are saying Friday March 23, others Monday March 26, with a few more saying 23 than 26. It could be none of the above though, but if you bet on two weeks out, you won’t be far off."

Here is some more Tech News from around the web:

Tech Talk

 

Source: SemiAccurate

NVIDIA Joins Linux Foundation

Subject: General Tech | March 10, 2012 - 12:37 AM |
Tagged: software, OS, nvidia, linux

In a recent press release, the Linux Foundation added four new members, one of which is a big deal in the graphics card industry. In addition to the new members Fluendo, Lineo Solutions, and Mocana is the green GPU powerhouse NVIDIA. According to Maximum PC, there is talk around the web of the company moving to open source graphics drivers; however, NVIDIA has not released anything to officially confirm or deny this.

Linux Foundation Logo.GIF

The Linux Foundation's Logo

Such a move would be rather extreme and unlikely, but it would certainly be welcomed by the Linux community. Officially, Vice President of Linux Platform Software Scott Pritchett stated that the company is "strongly committed" to delivering quality software/hardware experiences, and that they hope their membership in the Linux Foundation will "accelerate our collaboration with the organizations and individuals instrumental in shaping the future of Linux." Further, they hope to add to and enhance the user and developer experience of the open source operating system.

The three other members joining the Linux Foundation specialize in multimedia software (Fluendo), embedded system development (Lineo Solutions), and device-agnostic security (Mocana), but the green giant that is NVIDIA has certainly stolen the show; it is kind of a big deal for the Foundation to have them. Amanda McPherson, VP of Marketing and Developer Services for the Linux Foundation, wrapped up the press release by saying that all of the new members "represent important areas of the Linux ecosystem and their contributions will immediately help advance the operating system."

NVIDIA has generally enjoyed good support on the major Linux distributions, but now that they are a member, here's hoping they can further improve their Linux graphics card drivers. What is your take on the Linux Foundation's new members? Will they make a difference?

Podcast #192 - AMD Radeon HD 7870 and 7850, Z77 Motherboard previews, and a Steam console

Subject: Editorial, General Tech | March 8, 2012 - 04:00 PM |
Tagged: Z77, ssd, podcast, msi, Intel, gpu, cpu, asus, amd, 7870, 7850

PC Perspective Podcast #192 - 03/08/2012

Join us this week as we talk about the AMD Radeon HD 7870 and 7850, Z77 Motherboard previews, and a Steam console?

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

This Podcast is brought to you by MSI Computer, and their all new Sandy Bridge Motherboards!

Program length: 54:59

Program Schedule: 

  1. 0:00:42 Introduction
  2. 1-888-38-PCPER or podcast@pcper.com
  3. http://pcper.com/podcast
  4. http://twitter.com/ryanshrout and http://twitter.com/pcper
  5. 0:02:00 Installing Windows 8 Consumer Preview In A VirtualBox Virtual Machine
  6. 0:05:00 AMD Radeon HD 7870 2GB and HD 7850 2GB Pitcairn Review
    1. Radeon HD 7870 2GB vs HD 7850 2GB vs HD 5870 1GB Quick Look
  7. 0:18:30 ASUS Z77 Chipset Motherboard Preview: Formula, Gene, mini-ITX
  8. 0:24:30 MSI X79A-GD65 (8D) LGA 2011 ATX Motherboard Review
  9. 0:26:00 Visual Computing Still Decades from Computational Apex
  10. 0:36:00 This Podcast is brought to you by MSI Computer, and their all new Sandy Bridge Motherboards!
  11. 0:37:00 GDC 12: The bigger big picture, Steam Box to be announced?
  12. 0:40:30 MSI Shows Off Next Generation Twin Frozr IV Cards at CeBIT
  13. 0:42:30 Peter Pan presents a stylish mouse at CeBIT; Thermaltake's Level 10 M
  14. 0:45:45 Apple Launching Quad Core Graphics A5X Powered iPad 3 With Retina Display
  15. 0:49:00 Hardware / Software Pick of the Week
    1. Ryan: How about a new laptop?  Time for shopping!!  Ultrabook?  IVB maybe?
    2. Jeremy: Ever heard of the brown note?
    3. Josh: They have arrived
    4. Allyn: sleep lulz
  16. 1-888-38-PCPER or podcast@pcper.com
  17. http://pcper.com/podcast   
  18. http://twitter.com/ryanshrout and http://twitter.com/pcper
  19. Closing


Maybe that is why TSMC stopped production on 28nm?

Subject: General Tech | March 8, 2012 - 03:09 PM |
Tagged: TSMC, 28nm

The news broke yesterday that TSMC had halted 28nm production in mid February for an undisclosed reason.  SemiAccurate thought this strange, as the only company admitting to problems with TSMC's 28nm process is NVIDIA, but at least production was scheduled to restart by the end of March.  Today, from DigiTimes, we hear a more positive story about not only the 28nm but also the 20nm process line, as TSMC predicts that customer demand for chips could run as high as 95% of their capacity.  Perhaps, with a well stocked channel easing the pressure, they shut down the line to make sure no preventable problems would keep them from meeting this high demand?

tsmc_wafer.jpg

"Taiwan Semiconductor Manufacturing Company (TSMC) chairman and CEO Morris Chang has commented that development of 20nm and 28nm processes is progressing well. The market believes TSMC's capacity utilization rate in second-quarter 2012 should be close to 95%.

Chang indicated that 28nm processes will likely contribute 10% of 2012 revenues.

Industry sources pointed out that TSMC's capacity utilization of its 28nm and 45nm processes at its 12-inch wafer plant has not decreased and delivery usually takes 8-10 weeks.

Equipment makers also indicated that TSMC has been ordering equipment and installations will be completed in first-half 2012.

Industry sources added that other Taiwan-based IC design houses have been experiencing increasing orders. The market was pessimistic before the Lunar New Year holidays causing inventory levels to be low. This has induced the recent wave of increased orders."

Here is some more Tech News from around the web:

Tech Talk

 

Source: DigiTimes

Intel in the Cloud with Ray-traces

Subject: General Tech, Graphics Cards, Processors, Mobile | March 8, 2012 - 04:02 AM |
Tagged: ray tracing, tablet, tablets, knight's ferry, Intel

Intel looks to bring ray-tracing from their Many Integrated Core (Intel MIC) architecture to your tablet… by remotely streaming from a server loaded with one or more Knight’s Ferry cards.

Ray-tracing has been anticipated for almost the entirety of 3D video gaming history. Proper support for ray-tracing is very seductive for games, as it enables easier access to effects such as global illumination, reflections, and so forth. Ray-tracing is well deserving of its status as a buzzword.

intel-ray.jpg

Render yourself in what Knight’s Ferry delivered… with linear scaling and ray-traced Wolfenstein

Screenshot from Intel Blogs.

Obviously Intel would love to make headway into the graphics market. In the past Intel has struggled to put forth an acceptable graphics offering. It is my personal belief that Intel did not take graphics seriously while they were content selling cheap GPUs to be packed in with PCs. While the short-term easy money flowed in, the industry slipped far enough ahead of them that they could not simply pounce back into contention with a single huge R&D check.

Intel obviously cares about graphics now, and has been relentless in their research into the field. Their CPUs are far ahead of any competition in terms of serial performance, and power consumption is getting plenty of attention as well.

Intel acknowledged the importance of massively parallel computing long ago, but was never quite able to field products like Larrabee against what the companies they once ignored could retaliate with. This brings us back to ray-tracing: what is its ultimate advantage?

 

Ray-tracing is a dead simple algorithm.

 

A ray-trace renderer can be programmed very simply and elegantly. Effects are often added directly, without much approximation necessary. There is no hacking around the numerous caveats of graphics APIs just to get a functional render on screen. If you can keep throwing enough coal on the fire, it will burn without much effort, so to speak. Intel just needs to put a fast enough processor behind it, and away they go.
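As a generic illustration of that simplicity (a toy sketch, not Intel's engine), the heart of a ray tracer is just one ray cast per pixel plus an intersection test:

```python
import math

def sphere_hit(origin, direction, center, radius):
    """Return the distance along a unit-length ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width, height):
    """Cast one ray per pixel at a single sphere; '#' marks a hit, '.' a miss."""
    center, radius = (0.0, 0.0, -3.0), 1.0
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1, then normalize
            px = 2 * (x + 0.5) / width - 1
            py = 1 - 2 * (y + 0.5) / height
            norm = math.sqrt(px * px + py * py + 1)
            direction = (px / norm, py / norm, -1 / norm)
            row += "#" if sphere_hit((0, 0, 0), direction, center, radius) else "."
        rows.append(row)
    return rows

if __name__ == "__main__":
    for line in render(24, 12):
        print(line)
```

Everything past this point, from shadows to reflections, is just more rays cast from the hit points, which is exactly why the algorithm scales so naturally with raw compute.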

Throughout the article, Daniel Pohl discusses numerous enhancements they have made to their ray-tracing engine to improve performance. One of the most interesting is their approach to antialiasing: if the rays from two neighboring pixels strike different meshes, or strike the same mesh at a point of sharp change in color between pixels, those pixels are flagged for supersampling. Intel will also explore combining that shortcut with MLAA at some point.
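The mesh-discontinuity half of that shortcut can be sketched generically; the function and data layout below are illustrative assumptions, not Intel's actual code. Each primary ray records the ID of the mesh it hit, and a pixel earns extra rays only when a neighbor hit something different:

```python
def flag_edge_pixels(object_ids):
    """object_ids[y][x] holds the ID of the mesh the primary ray hit.
    A pixel is flagged for supersampling when any 4-connected neighbor
    hit a different mesh, i.e. the ray crossed a silhouette edge."""
    h, w = len(object_ids), len(object_ids[0])
    flagged = set()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and object_ids[ny][nx] != object_ids[y][x]:
                    flagged.add((x, y))
                    break
    return flagged

# Example: a 4x4 frame where mesh 1 occupies the right half.
ids = [[0, 0, 1, 1]] * 4
edges = flag_edge_pixels(ids)
# Only the two columns straddling the boundary get extra rays.
assert edges == {(1, y) for y in range(4)} | {(2, y) for y in range(4)}
```

The appeal is that the expensive supersampling work is spent only along silhouettes, where aliasing is actually visible, while flat interior regions keep their single ray per pixel.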

intel-41914.png

A little behind-the-scenes trickery...

Screenshot from Intel Blogs.

Intel claims that they were able to achieve 20-30 FPS at 1024x600, streaming from a server with a single Knight’s Ferry card to an Intel Atom-based tablet. With 8 Knight’s Ferry cards installed, they were able to scale to within a couple percent of the theoretical 8x performance.

I very much dislike trusting my content to online streaming services, as I am an art nut. I value the preservation of content, which just is not possible if you can only access it through some remote third party (can you guess my stance on DRM?). That aside, I understand that Intel and others will regularly find ways to push content to where there just should not be enough computational horsepower to accept it.

Ray-tracing might be Intel’s attempt to circumvent all of the years of research that they ignored with conventional real-time rendering technologies. Either way, gaming engines are going the way of simpler rendering algorithms as GPUs become more generalized and less reliant on fixed-function hardware assigned to some arbitrary DirectX or OpenGL specification.

Intel just hopes that they can have a compelling product at that destination whenever the rest of the industry arrives.

Source: Intel Blog

Unreal Engine Samaritan Demo Running On Single NVIDIA Kepler GPU

Subject: General Tech | March 8, 2012 - 12:38 AM |
Tagged: unreal, udk, samaritan, nvidia, fxaa

Last year we saw Epic unveil their Samaritan demo, which showed off next generation gaming graphics using three NVIDIA GTX 580 graphics cards in SLI.  With Samaritan, Epic Games showed off realistic hair and cloth physics along with improved lighting, shadows, anti-aliasing, and more bokeh effects than gamers could shake a controller at, and I have to say it was pretty impressive stuff a year ago, and it still is today. What makes this round special is that hardware has advanced to the point where Samaritan-level graphics can be achieved in real time with a single graphics card, a big leap from last year's required three SLI'd NVIDIA GTX 580s!

The Samaritan demo was shown at this year's GDC 2012 (Game Developers Conference) running on a single NVIDIA "Kepler" graphics card in real time, which is pretty exciting. Epic did not state any further details on the upcoming NVIDIA graphics card; however, the knowledge that the single GPU was able to pull off what it took three Fermi cards to do certainly holds promise.

According to GeForce, however, it was not merely the NVIDIA Kepler GPU that made the Samaritan demo possible on a single GPU. The article states that it was also enabled by the inclusion of NVIDIA's anti-aliasing method known as FXAA, or Fast Approximate Anti-Aliasing. Unlike the popular MSAA option employed by many of today's games, FXAA uses much less memory, keeping a single graphics card from being bogged down by memory thrashing. They further state that MSAA is not ideal for the Samaritan demo because the demo uses deferred shading to provide the "complex, realistic lighting effects that would be otherwise impossible using forward rendering," the method employed by many game engines. The downside is that with MSAA this arguably better lighting requires four times as much memory: the GPU RAM needs to hold four samples per pixel, and the workload is magnified four times in areas of the game where multiple pieces of geometry intersect.

MSAAvsFXAA.png

FXAA vs MSAA

They go on to state that without AA, the lighting in the Samaritan demo uses approximately 120 MB of GPU RAM, while with 4x MSAA turned on it uses about 500 MB. That is 500 MB dedicated just to lighting, memory that could otherwise hold more of the level and physics data, and which forces the GPU to swap more data than it should have to. FXAA, on the other hand, is a shader-based AA method that requires no additional memory, making it "much more performance friendly for deferred renderers such as Samaritan."
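Those figures line up with back-of-the-envelope G-buffer arithmetic; the 56 bytes-per-pixel layout below is a hypothetical value chosen to land near the quoted numbers, not Epic's actual format. With 4x MSAA, every render target simply stores four samples per pixel:

```python
def gbuffer_bytes(width, height, bytes_per_pixel, msaa_samples=1):
    """Memory for deferred-shading render targets: every sample of every
    pixel stores the full set of G-buffer lighting attributes."""
    return width * height * bytes_per_pixel * msaa_samples

# Hypothetical layout: 56 bytes of lighting data per pixel at 1920x1080
no_aa = gbuffer_bytes(1920, 1080, 56)                   # ~116 million bytes, near the quoted 120 MB
msaa4 = gbuffer_bytes(1920, 1080, 56, msaa_samples=4)   # ~464 million bytes, near the quoted 500 MB
print(no_aa, msaa4)

# Storing four samples per pixel quadruples the cost across the board,
# which is exactly the 120 MB -> ~500 MB jump GeForce describes.
assert msaa4 == 4 * no_aa
```

FXAA sidesteps all of this because it runs as a post-process shader over the single-sample image, so the buffers never grow in the first place.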

Without anti-aliasing, the game world would look much more jagged and less realistic. AA smooths out those jagged edges, and FXAA is what enabled Epic to run the Samaritan demo on a single next generation NVIDIA graphics card. Pretty impressive if you ask me, and I'm excited to see game developers roll some of Samaritan's graphical effects into their games. Knowing that Epic Games' engine can run on a single graphics card means that future is that much closer. More information is available here, and if you have not already seen it, the Samaritan demo is shown in the video below.

Source: GeForce

The SSD market gets passed a TRIM command

Subject: General Tech, Storage | March 7, 2012 - 01:39 PM |
Tagged: western digital, ssd, hitachi, flash, EMC

Hitachi Global Storage Technologies, the result of the merger of Hitachi's and IBM's HDD businesses, is likely being purchased by Western Digital tomorrow for about $4.3 billion.  This makes sense, as WD has been using Hitachi GST as a sales partner when providing EMC with high end flash disks.  The deal comes on the heels of a major sale: the SSD400S flash disk, which uses Intel's 34nm SLC NAND, and the SSD400S-B, which uses Intel's newer 25nm NAND.  Check out the specifications of the flash drives as well as the new SSD company over at The Register.

intel-nand.jpg

"WD is buying Hitachi GST and the acquisition is expected to be formally announced tomorrow with a condition of two years of independence for Hitachi GST - imposed by a Chinese anti-competition regulator. EMC has certified Hitachi GST's SSD400S flash disks for use in its VNX mid-range unified storage arrays, including the all-flash VNX5500-F, so WD will effectively fulfil this deal once the acquisition is announced."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Register