Big Blue likes RIM's infrastructure ... the phones not so much

Subject: General Tech | August 13, 2012 - 03:53 PM |
Tagged: RIM, blackberry, IBM

One of the scariest things about RIM's failure to recover from its push into the consumer market is the damage being done to the services it supplies to businesses.  RIM's Enterprise Services Division handles the servers that ensure secure delivery of messages over the cellular network, and it is one of the main reasons that RIM devices and the BlackBerry Enterprise Server are the preferred choice of many institutions.  If RIM goes down, that ability to guarantee security and to remotely administer devices goes down with it.  That is why this story on The Register will make many sysadmins very happy: not only is someone interested in purchasing that business unit, the interested company is IBM.  IBM has no interest in the actual BlackBerry phones themselves, so BES-style management could be expanded to more devices, and the death of RIM may not mean the death of secure delivery of business email.  Pity about the CPP though.

IBM.jpg

"IBM is reportedly interested in snapping up the enterprise services division of troubled BlackBerry-maker Research in Motion.

Well-placed sources whispered to Bloomberg that Big Blue could help Canadian mobile biz RIM by taking the unit off its hands, and has already made an informal approach about it."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

AMD's new Embedded Solutions Group aims at a new market

Subject: General Tech | June 12, 2012 - 01:01 PM |
Tagged: amd, arm, IBM, Freescale, AFDS, ESG

According to the VDC Research Group's findings, the embedded market will hit $6bn in sales in 2012 and keep growing at a rate of 12%-15% per year.  AMD seems poised to move into this market with the formation of its Embedded Solutions Group (ESG) and the changes we have been seeing to its processor lines.  Current Opteron HE and EE chips consume between 35W and 65W depending on the number of cores, and that amount might be trimmed down as new models come out.  AMD also has lines of embedded Athlon, Turion, Sempron, and Geode LX based chips and has hired an FPGA veteran, Arun Iyengar, to manage the ESG, though The Register expresses doubt that AMD is thinking of developing its own FPGA business.  More likely, AMD hopes to provide powerful alternatives for those in the market who now need a little more from their embedded products.  Read the full story here and keep your eyes peeled for more news coming out of the AMD Fusion Developer Summit.

DPX-S430_Front_B_PLCCsocket.jpg

"The new management team at Advanced Micro Devices is looking everywhere, including under the couch cushions, to find some money so it can afford to explore the embedded systems market again. The chip biz hopes rivals Intel and the ARM collective are too distracted to notice the foray as they fight over each others' territories in PCs, servers and mobile devices."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Good things come in interesting packages; IBM's new chip substrate

Subject: General Tech | March 21, 2012 - 01:14 PM |
Tagged: IBM, power 7+, interposer, packaging

In the typically bass-ackwards way of technology, an interposer actually acts as an interface for electrical signals to be routed or spread, as opposed to something which acts as a barrier between two objects.  Today SemiAccurate's camera caught a picture of an engineering sample of IBM's Power 7+ chip which, according to them, represents a huge step forward in a direction only IBM is going.  That interposer allows a huge amount of bandwidth between the four cores on the larger chip below; without specifications it is hard to say how much, but it may well be more effective than either Intel's or AMD's current solutions.  As SemiAccurate points out, the interposer is just begging to be filled with cache memory.

SA_IBM_Power_7_plus.jpg

"Every once in a while, a company will do something really unexpected, like IBM’s laying down the law in packaging last week. Yes, they showed off a chip, two actually, that does things no one else is even talking about doing."

Here is some more Tech News from around the web:

Tech Talk

Source: SemiAccurate

AMD and IBM inside the Xbox Next?

Subject: Processors | January 18, 2012 - 04:10 PM |
Tagged: xbox next, IBM, amd, Power PC, southern islands, xbox 720, oban

SemiAccurate has been doing some digging into the hardware that will power the next Xbox, perhaps a bit more successfully than Microsoft would like.  This builds on the rumours they collected in December of 2011 and confirms that the next generation console is only a partial win for AMD.  Oban is the code name for the CPU, which is being fabbed by GLOBALFOUNDRIES for the most part and will be a variant of IBM's PowerPC architecture, not an x86 based chip.  AMD will supply a Graphics Core Next Southern Islands GPU for the graphical power, which is terrible news for NVIDIA's bottom line over the next several years as they lose out on at least one platform of the coming generation.  This will continue to sting because, unlike PCs, consoles are not refreshed several times a year, and the current hardware will likely be powering the Xbox Next for years to come.

From what SemiAccurate has gathered, Microsoft has ordered a huge run of the chips which will power the console, which should guarantee availability in the Spring of 2013, the current predicted release date.  Considering the low yields from GLOBALFOUNDRIES lately, this seems like a move to ensure that even a large amount of bad silicon will not have a major impact on Microsoft's ability to provide deep supplies of the Xbox Next for retailers.

Xbox.jpg

"If you crave more info about the upcoming XBox 720/Next, there is finally some concrete info. The one nice thing about this job is that proud parents like to talk, and that is exactly where this story begins."

Here are some more Processor articles from around the web:

Processors

Source: SemiAccurate

IBM Developing 120 Petabyte Water-Cooled Storage Array

Subject: Storage | August 26, 2011 - 01:04 PM |
Tagged: storage, Hard Drive, IBM, array

IBM knows how to go big or go home, and its Almaden, California research lab’s current storage project exemplifies that quite nicely. With a data repository that dwarfs anything we have today, IBM is designing a 120 Petabyte storage container. Comprised of 200,000 hard drives, the new storage device is expected to house approximately 1 trillion files, or 24 billion 5MB MP3 files. To put that in perspective, Apple had sold 10 billion songs as of February 24, 2010; therefore, you could store every song sold since the iTunes Store’s inception twice and still have room for more!
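
For the curious, that 24 billion figure checks out with some quick back-of-the-envelope arithmetic (assuming decimal units, so 1 PB = 10^15 bytes):

```python
# Back-of-the-envelope check on IBM's numbers (decimal units assumed)
capacity_bytes = 120 * 10**15       # 120 petabytes
mp3_bytes = 5 * 10**6               # one 5 MB MP3
print(capacity_bytes // mp3_bytes)  # 24,000,000,000 -> 24 billion files
```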

harddrive.jpg

More specifically, the Almaden engineers have designed new hardware and software techniques to combine all 200,000 hard drives into horizontal drawers that are then placed into racks. To cram as many disks as possible into each vertical rack, IBM had to make the drawers “significantly wider than usual,” and the disks are cooled with circulating water. On the software side of things, IBM has refined its disk parity and mirroring algorithms such that a computer can continue working at near-full speed in the event a drive fails. If a single disk fails, the system begins to pull data from the other drives that hold copies of that data and writes it to the replacement disk, allowing the supercomputer to keep processing data. The algorithms control the speed of data rebuilding and are able to adapt in the event multiple drives begin failing.
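
IBM has not published those algorithms, but the general idea of adaptive rebuild throttling can be sketched in a few lines. Everything below is a hypothetical illustration (the function name and thresholds are made up, not IBM's):

```python
# Hypothetical sketch of adaptive rebuild throttling; IBM's actual
# algorithms are not public, so names and thresholds are illustrative.
def rebuild_rate(failed_disks: int, surviving_copies: int) -> float:
    """Return the fraction of array bandwidth devoted to rebuilding.

    With one failure and plenty of redundancy, rebuild slowly in the
    background so foreground I/O runs at near-full speed; as more disks
    fail and redundancy thins out, spend more bandwidth on rebuilding
    to stay ahead of data loss.
    """
    if failed_disks == 0:
        return 0.0
    urgency = failed_disks / max(surviving_copies, 1)
    return min(0.05 + 0.45 * urgency, 0.9)  # cap so the system stays usable

# One failure with three surviving copies barely dents throughput:
print(rebuild_rate(1, 3))   # 0.2 of bandwidth
# Several failures with little redundancy left -> rebuild aggressively:
print(rebuild_rate(4, 1))   # 0.9 of bandwidth (capped)
```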

In addition to physically spreading data across the drives, IBM is also using a new file system to keep track of all the files across the array. Known as the General Parallel File System (GPFS), it stripes files across multiple disks so that many parts of a file can be written to and read from simultaneously, resulting in massive speed increases when reading. In addition, the file system uses a new method of indexing that enables it to keep track of billions of files without needing to scan through every one. GPFS has already blown past the previous indexing record of one billion files in three hours with an impressive indexing of 10 billion files in 43 minutes.
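
GPFS itself is far more sophisticated, but the core striping idea (split a file into fixed-size chunks, spread them round-robin across disks, then read them back in parallel) is simple enough to sketch. This is an illustrative toy, not actual GPFS code:

```python
# Illustrative round-robin striping with parallel reads; not GPFS code.
from concurrent.futures import ThreadPoolExecutor

NUM_DISKS = 8
STRIPE_SIZE = 4096  # bytes per chunk

def stripe(data: bytes):
    """Split data into chunks and assign them round-robin to disks."""
    chunks = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    return [(i % NUM_DISKS, chunk) for i, chunk in enumerate(chunks)]

def read_chunk(placement):
    disk, chunk = placement
    # A real system would issue an I/O request to `disk` here; because
    # each disk services only its own chunks, the reads proceed in parallel.
    return chunk

def parallel_read(placements) -> bytes:
    with ThreadPoolExecutor(max_workers=NUM_DISKS) as pool:
        return b"".join(pool.map(read_chunk, placements))

data = b"x" * 100_000
assert parallel_read(stripe(data)) == data  # round-trips intact
```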

Bruce Hillsberg, the director of storage research for IBM, told Technology Review that their algorithms enable a storage system that should not lose any data for a million years without compromising performance. Hillsberg further indicated that while this 120 Petabyte storage array is on the “lunatic fringe” today, storage is becoming more and more important for cloud computing, and just keeping track of the file names, types, and attributes will use approximately 2 Terabytes of storage.

The array is currently being built for a yet-to-be-announced client and will likely be used for High Performance Computing (HPC) projects to store massive amounts of modeling and simulation data. Projects that could benefit from the increased storage include global weather modeling, seismic mapping, Large Hadron Collider (LHC) experiments, and molecular simulations.

Storage research moves at an amazing pace, and seems to constantly advance despite pesky details like heat, fault tolerance, areal density walls, and storage mediums. While this 120 Petabyte array comprised of 200,000 hard drives is out of reach for just about everyone without federal funding or a Fortune 500 company's expense account, the technology itself is definitely interesting and its advancements will trickle down to consumer drives.

Image Copyright comedy_nose via Flickr Creative Commons