Subject: General Tech | May 4, 2011 - 05:28 PM | Scott Michaud
Tagged: mse, Malware, antivirus
One of the major drawbacks of general purpose computing devices is malware. Your computer is designed to manipulate and store instructions and information, and it does that amazingly well. It cannot, however, tell who gave it which instruction; it follows one set of instructions until it links to another, which it follows in turn, ad infinitum. When someone who wants to use your computer for their own ends can get their series of instructions executed by it, that is when you have a problem.
Subject: General Tech, Mobile | May 4, 2011 - 02:32 PM | Scott Michaud
Tagged: tablet, kindle, amazon
Amazon certainly has a knack for causing a ruckus in just about any industry they step into. Their inception placed them in stiff competition with bookstores and mail-order catalogs; since then they have branched out even as far as rental computing and storage, content production and publishing, and consumer electronics.
A recently rumored OEM order to Quanta Computer, already an OEM partner of RIM and Sony, suggests that Amazon is looking to beef up their portfolio to include tablet PCs.
Could Amazon be Kindling for a much bigger fire?
Subject: General Tech, Processors | May 4, 2011 - 01:55 PM | Tim Verry
Tagged: transistor, Intel
"After a decade of research, Intel has unveiled the world's first three dimensional transistor," states Mark Bohr, a Senior Fellow for Intel. Silicon-based transistors in computers, mobile devices, vehicles, and embedded equipment have only existed in a planar, or two-dimensional, form until today.
The new three-dimensional transistor, dubbed "Tri-Gate," is now ready for high-volume production and will be included in Intel's new 22nm Ivy Bridge processors. Tri-Gate is a huge deal for Intel, as it will enable the company to maintain the pace of chip evolution outlined by Moore's Law. If you are not familiar with Moore's Law, it states that approximately every 18 months transistor density will double, bringing with it increases in performance and yield while decreasing the cost of production. As Intel puts it, "It has become the basic business model for the semiconductor industry for more than 40 years."
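As a rough illustration of the exponential scaling Moore's Law describes (using the 18-month doubling period cited above; the starting density is an arbitrary placeholder), density projection is a simple power-of-two calculation:

```python
# Moore's Law sketch: density doubles roughly every 18 months (1.5 years).
# The initial density and doubling period here are illustrative assumptions,
# not measured figures.

def projected_density(initial_density: float, years: float,
                      doubling_years: float = 1.5) -> float:
    """Project transistor density after `years`, doubling every `doubling_years`."""
    return initial_density * 2 ** (years / doubling_years)

# After 3 years (two doubling periods), density quadruples.
print(projected_density(1.0, 3.0))  # -> 4.0
```

The point of the curve is why leakage became a crisis: each halving of feature size has to deliver that doubling, or the business model quoted above breaks down.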
As processors become smaller and smaller, electric current becomes more and more difficult to contain. There are hundreds of millions of minute connections and switches inside today's processors, and as manufacturing processes shrink, the amount of current leakage increases. Starting with its 45nm Core 2 processors, Intel introduced a new "high-k" (high dielectric constant, a property of matter relating to the amount of charge it can hold) metal gate transistor using a material called hafnium. The new material replaced the silicon dioxide gate dielectric of the transistor to combat the current leakage problem, and carried forward to 32nm. This allowed the chip process to shrink while producing less current leakage and heat. To be more specific, Intel states that "because high-k gate dielectrics can be several times thicker, they reduce gate leakage by over 100 times. As a result, these devices run cooler."
Unfortunately, at the much smaller 22nm process, Intel was not achieving results congruent with Moore's Law even with their high-k gate transistors. To maintain the scaling predicted by Moore's Law, Intel had to once again re-invent the transistor. To shrink the manufacturing process while overcoming current leakage, Intel had to develop a way to use more of what little space was available, and it is here that they entered the third dimension. By designing a transistor that controls the electrical current on three sides instead of on a single plane, they can shrink the transistor while ending up with more surface area to "control the stream," as Mark Bohr puts it.
The proposed benefits of Tri-Gate lie in its ability to operate at lower voltages with higher energy efficiency, all while running cooler and faster than before: specifically, up to a 37 percent performance increase at low voltage versus Intel's current line of 32nm processors. Intel further states that "the new transistors consume less than half the power when at the same performance as 2-D planar transistors on 32nm chips." In other words, at the same performance level as the current crop of Intel CPUs, Ivy Bridge should be able to do the same calculations using roughly half the power of Sandy Bridge, or run nearly twice as fast at the same power consumption (it is unlikely to scale perfectly, as there is overhead and other elements of the chip will not be as radically revamped). If this sort of scaling holds for the majority of Ivy Bridge chips, the overclocking headroom and resulting performance could be unprecedented.
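To put the two quoted figures side by side (these are Intel's claims as reported above, normalized against a 32nm planar baseline, not independent measurements):

```python
# Toy comparison of the Tri-Gate figures quoted above, normalized to a
# 32nm planar baseline of 1.0 performance and 1.0 power.
planar_perf = 1.0
planar_power = 1.0

# Claim 1: up to 37% more performance at low voltage, same power budget.
trigate_perf_same_power = planar_perf * 1.37

# Claim 2: "less than half the power" at the same performance.
trigate_power_same_perf = planar_power * 0.5

# Performance-per-watt improvement implied by claim 2.
perf_per_watt_gain = planar_perf / trigate_power_same_perf
print(trigate_perf_same_power, trigate_power_same_perf, perf_per_watt_gain)
```

Note that the two claims are measured at different operating points, which is why they do not multiply together; the caveat above about imperfect scaling applies.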
The use of Tri-Gate transistors is also mentioned as being beneficial for mobile and handheld devices as the power efficiency should allow increases in battery life. This is due to the chip running at decreased voltages while maintaining (at least) the same level of performance as current mobile chips. While Intel did not demo any mobile CPUs, they did state that Tri-Gate transistors may be integrated into future Atom chips.
Subject: Editorial, General Tech | May 4, 2011 - 01:36 PM | Tim Verry
Tagged: Law, Copyright, Bit Torrent
This past year has seen a surge of copyright infringement cases in which rights-holders have brought suit against not one but hundreds or even thousands of defendants. These wide-sweeping cases are highly controversial and, according to TorrentFreak, opponents have gone so far as to call them "extortion."
The main reason for the controversy is that rights-holders are acquiring lists of IP addresses that connect to, download, and/or share files they hold the copyright to. They then bring lawsuits against the so-called John Does behind those IP addresses, and use subpoenas to force ISPs to release the personal information of the account holder(s) connected to each IP address at the times it was logged downloading and/or sharing their files. While many may not realize the flaw in this logic, a District Court judge by the name of Harold Baker has questioned the legality and implications of assuming an IP address is grounds enough to obtain further personal information.
The issue with connecting an IP address to a person is that while a log entry with an IP address, date, and time can be connected to an ISP's subscriber and their Internet connection, there is no way to know that it was that particular person using the connection at that moment. It could just as easily have been another person living in the household, a friend or visitor who used the wireless connection, or a malicious individual piggybacking on that subscriber's Internet connection (and thus their IP address).
TorrentFreak reports that "Judge Baker cited a recent child porn case where the U.S. authorities raided the wrong people, because the real offenders were piggybacking on their Wi-Fi connections." They also report that, in Judge Baker's view, assuming an IP address is enough material to subpoena further personally identifiable information could obstruct a "'fair' legal process" in these cases, particularly when adult entertainment is involved. That is because bringing a suit against someone based solely on an IP address, especially when it involves adult entertainment, could irreparably defame an innocent person's character.
Judge Baker goes on to say that rights-holders could potentially use the delicate nature of an accusation of sharing adult material to pressure even innocent people into settling out of court. TorrentFreak reports that "Baker conlcudes [sic] by saying that his Court is not supporting a 'fishing expedition' for subscribers' details if there is no evidence that it has jurisdiction over the defendants."
There is no question that Judge Baker's ruling could potentially change the landscape of BitTorrent-related lawsuits throughout the United States. Rights-holders will no doubt aggressively combat the ruling; civil rights groups and countless innocent people, however, are rejoicing at the knowledge that it may very well be the beginning of the end for John Doe BitTorrent lawsuits in the United States.
Image courtesy MikeBlogs via Flickr (creative commons 2.0 w/attribution).
Subject: General Tech | May 4, 2011 - 11:56 AM | Jeremy Hellstrom
Tagged: lag, buffer, bloat, input lag, gaming, online
Packet loss, network latency and input lag are often blamed for the reason your character is now a corpse and your opponent is doing a happy dance on your naughty bits, but there is another culprit behind your lousy online gaming skills: buffer bloat. It seems that larger storage space is not always a good thing, as TCP/IP needs dropped packets to tell it to slow down; when a network sports a buffer that can hold 10 seconds or so of data, no packet gets dropped, and no connection is informed that there is a problem, until that buffer fills. If you've ever played a game which slows down and then does a quick speed-up for a few seconds, you have probably met buffer bloat. Slashdot doesn't have a solution but they do have more information for you.
"Gamers often find 'input lag' annoying, but over the years, delay has crept into many other gadgets with equally painful results. Something as simple as mobile communication or changing TV channels can suffer. Software too is far from innocent (Java or Visual Studio 2010 anyone?), and even the desktop itself is riddled with 'invisible' latencies which can frustrate users (take the new Launcher bar in Ubuntu 11 for example). More worryingly, Bufferbloat is a problem that plagues the internet, but has only recently hit the news. Half of the problem is that it's often difficult to pin down unless you look out for it. As Mick West pointed out: 'Players, and sometimes even designers, cannot always put into words what they feel is wrong with a particular game's controls ... Or they might not be able to tell you anything, and simply say the game sucked, without really understanding why it sucked.'"
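The delay at the heart of bufferbloat is easy to estimate: a full buffer adds a queuing delay of its size divided by the link's drain rate. A quick sketch, with hypothetical buffer and link numbers chosen for illustration:

```python
# Worst-case queuing delay of a full network buffer: size / link rate.
# The buffer size and link speed below are hypothetical examples.

def queuing_delay_seconds(buffer_bytes: int, link_bits_per_sec: float) -> float:
    """Time for a full buffer to drain onto the link."""
    return (buffer_bytes * 8) / link_bits_per_sec

# A 1 MB buffer ahead of a 1 Mbps uplink holds 8 seconds of data --
# TCP won't see a drop (and slow down) until that whole queue overflows,
# and every packet behind it waits the full drain time.
delay = queuing_delay_seconds(1_000_000, 1_000_000)
print(delay)  # -> 8.0
```

This is why bigger buffers can make latency worse rather than better: the loss signal TCP's congestion control depends on arrives seconds too late.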
Here is some more Tech News from around the web:
- Taking Up Space: Mass Effect 3 Screens @ Rock, Paper, SHOTGUN
- Portal 2 DLC coming soon @ HEXUS
- Total War: Shogun 2 Performance w/ AMD Radeon HD 6850 @ Legit Reviews
- Portal 2 Review @ Techgage
- Artistic trickery: Ars looks at indie mech game Hawken
- Thinking on rails: why Portal 2 isn't as good as the original @ Ars Technica
- Can you really learn to race by playing racing games? Ars takes to the track
- Section 8 Prejudice Launch, Unlockable Mode @ Rock, Paper, SHOTGUN
- Portal 2 Trickshots @ Rock, Paper, SHOTGUN
- Pilotwings Resort Nintendo 3DS @ Tweaktown
- Gears of War 3 beta: senior gameplay designer offers tips @ Ars Technica
- Rocksmith Preview @ Computing on Demand
- Operation Flashpoint Red River (XBOX 360) Review @ GamingHeaven
Subject: General Tech | May 4, 2011 - 11:19 AM | Jeremy Hellstrom
Tagged: peddie, nvidia, market share, Intel, gpu, amd
SemiAccurate got hold of Jon Peddie's most recent look at the GPU market and how it is divvied up between the major competitors; the list no longer includes SIS, who hit 0% this year. The two current discrete GPU makers swapped positions last quarter, with AMD in the lead, and that remains true this quarter as AMD has grown to 24.8% while NVIDIA fell to 20%. Last year at this time NVIDIA had a comfortable 8% more of the market than AMD, but with a Fermi launch that just didn't go as well as hoped and AMD coming out strong and generally less expensive, that lead has evaporated thanks not only to discrete GPUs but also to Brazos.
Speaking of APUs, the more mathematically inclined readers may notice that a large chunk of the graphics market is missing from those figures. Most of it, 54.4% of the total market, belongs to Intel, who have seen their share jump by almost 10% since Q1 2010. The vast majority of Intel's share comes from the integrated GPUs present in many Intel systems, but at least some of that growth is thanks to the new Sandy Bridge platform, which many enthusiasts are purchasing and which counts toward market share even if its GPU is only being used for transcoding in a system with a discrete GPU.
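For the mathematically inclined, the figures quoted above account for nearly the whole market; the leftover sliver covers everyone else:

```python
# GPU market share figures as quoted above (Q1 2011, per Jon Peddie).
shares = {"AMD": 24.8, "NVIDIA": 20.0, "Intel": 54.4}

# Whatever isn't AMD, NVIDIA, or Intel belongs to the remaining players
# (SIS, Matrox, VIA, etc.), now effectively rounding error.
others = 100.0 - sum(shares.values())
print(round(others, 1))  # -> 0.8
```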
"The latest GPU marketshare numbers from Jon Peddie are out, and it looks like we have a new leader in GPUs, AMD. According to the numbers released today, Q1 saw AMD overtake Nvidia in year over year GPU marketshare, and the turn-around promised last February fizzle."
Here is some more Tech News from around the web:
- The HTML5 future of the web starts to take shape @ The Inquirer
- HP engineering veep spills cloud plans onto LinkedIn @ The Register
- How-To: Portal Sentry Turret Egg Cup @ Make:Blog
- Seagate to control 40% of HDD market with Samsung acquisition, says IHS iSuppli @ DigiTimes
- Canon PowerShot SX230 HS Review @ TechReviewSource
- Level One Wireless 300Mbps N_Max Ceiling PoE Access and 4 GE PoE + 1 GE Switch Review @ OverclockersHQ
- DemoCamp April 2011 Coverage @ t-break
- Win a MSI N550GTX-Ti graphics card @ t-break
- Win a Linksys E3000 wireless router @ t-break
- Win a ECS Black GTX460 graphics card @ t-break
Subject: General Tech | May 4, 2011 - 08:52 AM | Tim Verry
Tagged: PC, gaming, First Person Shooter
Brink is a new first person shooter developed by Splash Damage, powered by a revamped id Tech 4 engine and carrying a strong multiplayer focus. It is set to release on May 10, 2011 for the PC as well as the Xbox 360 and Sony PlayStation 3.
A web video series dubbed "Get SMART" is running up to the game's release date to get gamers excited about the game, show them how to navigate the environment of The Ark, and give them that extra bit of edge in the first days of battle. The full series can be found on the game's website here, and shows off everything from HUD design to story and plot mechanics. The following video, however, details a new movement system that the developers hope will cause players to rethink the way they play a first person shooter.
In an age where multiplayer shooters are flooding the market, Brink may appear to be "just another multiplayer shooter." With Brink, however, the developers are attempting to differentiate themselves by implementing a new movement system and making combat even more customizable with deployable items, character buffs, wall hopping (of all things), and four different character classes.
With what they dub the "SMART" (Smooth Movement Across Random Terrain) system, you can point your reticle at an area and, using the sprint key, have your character move there, whether that means vaulting, sliding, or wall hopping. The added dimension of movement should encourage new play styles beyond traditional team multiplayer FPS gameplay. For example, characters are no longer stopped dead in their tracks by a waist-high wall, or prevented from flanking their enemies because a hole in a bombed-out fence is too low to the ground.
After watching the movement system demonstration, do you think SMART will shake up the multiplayer genre or is it just a gimmick?
Subject: General Tech, Graphics Cards | May 3, 2011 - 11:54 AM | Jeremy Hellstrom
Tagged: e6760, embedded, gpu, amd, eyefinity
Usually, reading off the list of abilities of an embedded GPU is fairly quick ... determine if it can handle YouTube in high definition and maybe play WoW, and move on. APUs offer a bit more interest for enthusiasts, with interesting load-sharing applications alongside a discrete GPU, and the rise of Sandy Bridge and Bobcat seems to spell the end of the GPU embedded on a motherboard. However, there are still a few tricks left before the end of the line. The new Radeon E6760 isn't going to win many speed races, but it can support up to 6 monitors, a nice trick when you consider that many of these chips will be driving displays in casinos, airports and medical imaging. The E4690 is finally retiring; meet the new E6760 at AnandTech.
"Kicking off our coverage of embedded GPUs is AMD’s Radeon E6760, which is launching today. The E6760 is the latest and greatest AMD embedded video card, utilizing the Turks GPU (6600/6700M) from AMD’s value lineup. The E6760 isn’t a product most of us will be buying directly, but if AMD has it their way it’s a product a lot of us will be seeing in action in the years to come in embedded devices."
Here is some more Tech News from around the web:
- SandForce SF-2141 Controller & Intel Z68 Chipset: Destined to be Together @ Tweaktown
- It's Official: AT&T Broadband Subscribers Wake Up Today with Data Caps @ Techgage
- The day before BlackBerry World 2011 kicks in @ t-break
- Nvidia offers low-end laptop as replacement for Bumpgate victims @ The Inquirer
- Android Ice Cream Sandwich OS draws closer @ The Inquirer
- Win a GIGIABYTE 2GB Radeon HD6950 @ Bjorn3D
Subject: General Tech, Graphics Cards, Chipsets | May 3, 2011 - 11:54 AM | John Davis
Tagged: ubuntu, rhel, Red Hat, opensuse, linux, driver, catalyst, ati, amd
In a previous article we stated:
"Highlights of the Linux AMD Catalyst™ 11.4 release include: This release of AMD Catalyst™ Linux introduces support for the following new operating systems:
- Ubuntu 11.04 support (early look)
- SLED/SLES 10 SP4 support (early look)
- RHEL 5.6 support (production)"
AMD introduced a new feature into Linux with Catalyst™ 11.4, PowerXpress.
- PowerXpress: enables certain mainstream mobile chipsets to seamlessly switch from integrated graphics to dedicated graphics. *Note: this only applies to Intel processors with on-chip graphics paired with AMD dedicated graphics, and must be switched on by invoking switchlibGL and switchlibglx and restarting the Xorg server.
If you are running RHEL 5.6 or SLED/SLES 10 SP4 and need the driver you can get it here.
If you are running Ubuntu 11.04, install the driver under the "Additional Drivers" program.
If you are running a BSD variant you must still use the Open-Source driver "Radeon" and "RadeonHD" as AMD has yet to release a BSD driver.
Be sure to check back to PCPer for my complete review of the 11.4 driver and PowerXpress.
Subject: Editorial, General Tech | May 3, 2011 - 09:40 AM | Tim Verry
Tagged: Internet, Information, Filtering
TED talks are very similar to the motivational speeches that kids everywhere have had to endure throughout their junior high and high school years. The only real difference is that the talks are made available online to millions of people instead of a few thousand at a time. That said, if you are at all interested in the technology world, TED talks are usually both enlightening and relevant to present issues in the industry.
If that preface has not already scared you away from this article, I encourage you to watch this particular TED talk (embedded below), in which Eli Pariser demonstrates just what a "filter bubble" is, and what repercussions the once ever-interconnected Internet faces as more and more websites make personalization take priority over discovery.
Eli uses a Google search for "Egypt" to show that the results two people get can be drastically different. In an even more "close to home" example, by being part of a social network like Facebook, you may already be inside a filter bubble and not even know it: the filter bubble that is Facebook's news feed. If you have not talked to, say, your best friends from college or high school in a few months, their lack of posts in your news feed may make it appear that they have dropped off the face of the planet and have not updated their Facebook status since the last time you talked. More likely, however, you are part of a filter bubble and simply were not aware of it.
Facebook has somewhat recently modified its news feed to show only the statuses of friends with whom you have a certain number of interactions. This may seem like a good thing at first, as it leaves more room for the people you talk with most often. Consider, however, missing your little brother or only nephew's first winning football game, the score status and the photos of him during the winning play, because you haven't talked to them in a few weeks. While that may be something you would consider big news and would want to know about, Facebook's computer algorithms may just decide the exact opposite for you.
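The kind of interaction-count filtering described above can be sketched as a simple threshold. To be clear, this is a toy model invented for illustration: the names, the data, and the cutoff are all made up, and Facebook's real ranking is far more complex and not public.

```python
# Toy model of an interaction-count news feed filter. The threshold and
# friend data are invented; this is NOT Facebook's actual algorithm.

def filtered_feed(posts, interactions, min_interactions=5):
    """Keep only posts from friends you've interacted with enough."""
    return [p for p in posts
            if interactions.get(p["friend"], 0) >= min_interactions]

posts = [
    {"friend": "college_roommate", "text": "Got a new job!"},
    {"friend": "little_brother", "text": "We won the game!"},
]
interactions = {"college_roommate": 12, "little_brother": 1}

# Your brother's big news silently disappears from your feed.
feed = filtered_feed(posts, interactions)
print([p["friend"] for p in feed])  # -> ['college_roommate']
```

The uncomfortable part is exactly what the talk argues: the filtered-out post never announces its absence, so you cannot know what you missed.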
In practice, filter bubbles and personalization on the web are likely to be more subtle. Eli Pariser's talk raises the question of whether filter bubbles are right for the Internet and its users in any capacity. Is individual personalization worth giving up the freedom to stumble upon new information and the opportunity to get the same exposure to the world as everyone else? Do you see the personalized web as a positive or a negative for the world? What are your thoughts on users being led into a "web of one," as Eli cautions?