Your wireless signal getting tangled? WiFox might be the cure

Subject: General Tech, Networking | November 15, 2012 - 12:49 PM |
Tagged: WiFox, wireless

The short blurb at Slashdot makes WiFox sound more impressive than the actual implementation will be, but anyone who has attempted to connect to WiFi at a tech show would love to see this new software appear on wireless access points.  Once a channel becomes crowded with multiple users, especially if they use devices which like to have a constant connection, the network will hit a point of saturation and the performance of every device on the network will suffer, not just that of the most recently connected devices.  WiFox is a piece of software which monitors channel saturation and, when it is reached, immediately assigns all available bandwidth to the WAP so it can transmit all of its backlogged data before allocating bandwidth back to the devices ... clearing the tubes, as it were.  It sounds like this will be easily added to existing WAPs instead of only being available on the next generation of devices, so while your WAN will not suddenly become 700 times faster, the WiFi at the next conference you go to might just be usable for a change.
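The coverage linked here is light on implementation detail, but the idea reads like a priority-flipping scheduler: watch the access point's transmit backlog and, once it crosses a saturation threshold, let the AP hog the channel until that backlog drains. Below is a minimal, hypothetical Python sketch of that concept; the threshold, the queue model, and every name in it are my own illustration, not the researchers' actual code.

```python
from collections import deque

class AccessPoint:
    """Toy model of a WAP that flips into high-priority mode when its
    downlink backlog builds up (a rough paraphrase of the WiFox idea)."""

    def __init__(self, saturation_threshold=50):
        self.backlog = deque()                    # frames waiting to go out
        self.saturation_threshold = saturation_threshold
        self.high_priority = False                # normal channel contention by default

    def enqueue(self, frame):
        self.backlog.append(frame)

    def tick(self):
        # Monitor saturation: once the backlog crosses the threshold, grab the
        # channel (conceptually, by raising the AP's transmit priority) and
        # flush everything before handing airtime back to the clients.
        if len(self.backlog) >= self.saturation_threshold:
            self.high_priority = True
        if self.high_priority:
            while self.backlog:
                self.transmit(self.backlog.popleft())
            self.high_priority = False

    def transmit(self, frame):
        pass  # stand-in for the actual radio send

ap = AccessPoint(saturation_threshold=3)
for i in range(5):
    ap.enqueue(f"frame-{i}")
ap.tick()                 # backlog exceeded the threshold, so it gets flushed
print(len(ap.backlog))    # 0
```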

SeriesOfTubes.png

"Engineers at NC State University (NCSU) have discovered a way of boosting the throughput of busy WiFi networks by up to 700%. Perhaps most importantly, the breakthrough is purely software-based, meaning it could be rolled out to existing WiFi networks relatively easily — instantly improving the throughput and latency of the network. As wireless networking becomes ever more prevalent, you may have noticed that your home network is much faster than the WiFi network at the airport or a busy conference center. The primary reason for this is that a WiFi access point, along with every device connected to it, operates on the same wireless channel."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot

Intel's Crystal Forest, coming to a network appliance near you

Subject: General Tech | November 1, 2012 - 01:50 PM |
Tagged: Intel, crystal forest, networking

Intel's new Crystal Forest chipset includes an undisclosed embedded CPU, an 89XX-series chipset and their Data Plane Development Kit, the SDK they've created for designing fast-path network processing.  This is not a competitor to AMD's Freedom Fabric, which is designed for communication within a large array of processing nodes; instead you will see Crystal Forest powering high-end routers and web appliances.  Intel has designed this new chipset to increase the performance of cryptography and compression on network packets and claims it will increase speed as well as security, along with the benefit of support coming directly from Intel.  The Inquirer reports a long list of vendors who have signed on, including Dell, Wind River Systems and Emerson Network Power.
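Intel has not published sample code with this announcement, so take the following only as a conceptual caricature of what "fast path" packet processing means: poll the NIC in batches rather than taking an interrupt per packet, and push heavy work like compression or crypto off to an accelerator. Every name below is invented for illustration (with zlib standing in for the hardware engine); it is not the Data Plane Development Kit API.

```python
import zlib
from dataclasses import dataclass

BURST_SIZE = 32

@dataclass
class Packet:
    payload: bytes
    needs_compression: bool = False

def poll_rx_burst(rx_queue, max_pkts=BURST_SIZE):
    """Hypothetical stand-in for a polled, batched receive call."""
    burst, rx_queue[:] = rx_queue[:max_pkts], rx_queue[max_pkts:]
    return burst

def fast_path_loop(rx_queue, tx_queue):
    # Fast path: no interrupts, no per-packet system calls; just poll a burst,
    # offload the expensive transform, and forward.
    while rx_queue:
        for pkt in poll_rx_burst(rx_queue):
            if pkt.needs_compression:
                pkt.payload = zlib.compress(pkt.payload)   # software stand-in for the accelerator
            tx_queue.append(pkt)

rx = [Packet(b"example payload " * 64, needs_compression=True) for _ in range(100)]
tx = []
fast_path_loop(rx, tx)
print(len(tx), "packets forwarded")
```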

Intel_CrystalForest_1.jpg

"CHIPMAKER Intel has launched its Crystal Forest chipset for network infrastructure.

Intel might be best known for its X86 desktop, laptop and server chips but the firm does a pretty good trade in embedded chips and has announced that a number of large vendors have pledged their support for its new Crystal Forest chipset. The firm said its Crystal Forest chipset will enable networking equipment vendors to shift and control more data using its chips."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

Netgear Announces New 802.11ac Gear, Launches New Router

Subject: Networking | May 16, 2012 - 09:57 PM |
Tagged: wifi, router, networking, netgear, 802.11ac

Following up on the announcement by Buffalo Technology, Netgear has released their own 802.11ac wireless router, the R6300. (PC Perspective recently ran a giveaway for the R6300, which you can read about here.) In addition to the flagship 802.11ac router, Netgear announced a slimmed-down version, the R6200, and the A6200 WiFi USB dongle.

R6300-Product-Image18-51162.png

The Netgear R6300 is their highest-end wireless router supporting the 802.11ac WiFi standard. It supports 802.11ac speeds up to 1300 Mbps (450 Mbps over wireless N) and is backwards compatible with the 802.11 a/b/g/n standards. It also has two USB 2.0 ports that can be used to share hard drives and printers across the network. Further, the “5G WiFi” router is powered by a Broadcom chipset, which should open the door to third-party firmware.

r6200-pg-img18-52080.jpg

In addition to the above router, Netgear has announced the R6200 wireless router. It is compatible with the upcoming 802.11ac standard, but at reduced speeds. It features approximately 900 Mbps transfer rates over the “ac” standard and up to 300 Mbps over the 802.11n standard. The router is backwards compatible with all the older consumer standards (a/b/g/n), and it features a single USB 2.0 port to share a printer or hard drive to computers on the LAN.

A6200.jpg

Last up in the announcement is the Netgear A6200. This device is a USB WiFi dongle that supports the 802.11ac standard as well as existing a/b/g/n networks. It claims to deliver enough speed for HD streaming of videos, though Netgear has not stated if it will be able to take advantage of the full 1300 Mbps theoretical maximum connection. The WiFi adapter features a swiveling antenna and a docking station for use with desktop systems.

The other neat feature that the new routers support is the Netgear Genie application, which allows users to monitor and control the network using an application on their computer or smartphone (iOS and Android). They also feature Netgear MyMedia, printer sharing, guest network access, a DLNA server, parental controls, and automatic WiFi security.

The Netgear R6300 router is available for purchase now with an MSRP of $199.99. The R6200 router and A6200 WiFi dongle will be available for purchase in Q3 2012 with suggested retail prices of $179.99 and $69.99 respectively.

Source: Netgear

Buffalo First To Market With 802.11ac Gigabit Wi-Fi Router

Subject: Networking | May 15, 2012 - 05:38 PM |
Tagged: wireless, router, networking, ethernet bridge, buffalo, 802.11ac

Netgear and Buffalo have been working hard to build and bring to market new wireless routers based on the 802.11ac (pending ratification) standard. PC Perspective recently ran a giveaway for the Netgear 802.11ac router, but it seems that Buffalo has managed to beat Netgear to market. In fact, Buffalo yesterday released two 802.11ac devices: the AirStation™ WZR-D1800H wireless router and the WLI-H4-D1300 wireless Ethernet bridge. Both devices are powered by Broadcom’s 5G WiFi chips (Broadcom's name for 802.11ac, the fifth generation of consumer WiFi) and are based on the draft IEEE standard that is set to become official early next year.

Buffalo_Router.jpg

The Buffalo 802.11ac Router (left: front, right: rear view)

The router and Ethernet bridge both support the upcoming 802.11ac standard as well as the current 802.11 b, g, and n standards, so they are backwards compatible with all your devices. They also support all the normal functions of any other router or bridge device; the draft 802.11ac support is what differentiates these products. The router stands vertically and has reset and USB eject buttons, one USB 2.0 port, four Gigabit Ethernet LAN ports, and one Gigabit Ethernet WAN port. Below the WAN port is a power button and DC-in jack. The Buffalo Ethernet bridge allows users to connect Ethernet devices to a network over WiFi. It looks very similar to the router but does not have a WAN port or USB port on the back. It also does not act as a router, only as a bridge to a larger network. The largest downside to the Ethernet bridge is pricing: Newegg has the bridge listed (although out of stock now) at the same price as the full-fledged router. At that price it does not offer much value; users would be better off buying two routers and disabling the router features on one (and because the Broadcom chipset should enable custom firmware, this should be possible soon).

Ethernet_bridge.jpg

The Buffalo 802.11ac Ethernet Bridge (left: front, right: rear view)

What makes these two devices interesting, though, is the support for the “5G WiFi” 802.11ac wireless technology. This is the first time that the wireless connection has a (granted, theoretical) higher transfer speed than the wired connections, which is quite the feat. 802.11ac is essentially 802.11n with several improvements, operating only on channels in the 5 GHz spectrum. The pending standard also uses wider 80 MHz or 160 MHz channels, 256-QAM modulation, and up to eight antennas (much like 802.11n’s MIMO technology) to deliver much faster wireless transfer rates than consumers have had available previously. The other big technology in the upcoming WiFi standard is beamforming. This allows wireless devices to communicate with their access point(s) to determine relative spatial position. That data is then used to adjust the transmitted signals so that they are sent in the direction of the access point at the optimum power levels. This approach differs from traditional WiFi devices, which broadcast omni-directionally (think big circular waves coming out of your router), because the signals are more focused. By focusing the signals, users get better range and can avoid WiFi dead spots.
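For anyone wondering where the 1300 Mbps peak rate advertised on these first 802.11ac routers comes from, it falls straight out of those PHY changes. The quick calculation below uses parameters from the 802.11ac draft (234 data subcarriers in an 80 MHz channel, 256-QAM with rate-5/6 coding, a 3.6 µs symbol with the short guard interval, three spatial streams) rather than anything in Buffalo's announcement, so treat it as an illustrative sketch.

```python
# Back-of-the-envelope data rate for draft 802.11ac at 80 MHz (highest MCS, 3 streams)
data_subcarriers = 234       # usable OFDM subcarriers in an 80 MHz channel
bits_per_symbol  = 8         # 256-QAM carries 8 bits per subcarrier
coding_rate      = 5 / 6     # forward error correction overhead
symbol_time_s    = 3.6e-6    # OFDM symbol duration with the short guard interval
spatial_streams  = 3         # a 3x3 router like these first 802.11ac models

per_stream_bps = data_subcarriers * bits_per_symbol * coding_rate / symbol_time_s
print(f"{per_stream_bps * spatial_streams / 1e6:.0f} Mbps")   # ~1300 Mbps
```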

Hajime Nakai, Chief Executive Officer at Buffalo Technology, stated that “along with Broadcom, we continue to demonstrate our commitment to innovation by providing a no-compromise, future proofed wireless infrastructure for consumers’ digital worlds.”

The Buffalo AirStation™ WZR-D1800H router and WLI-H4-D1300 Ethernet bridge are available for purchase now for around $179.99 USD. The Ethernet bridge is listed as out of stock on Newegg; however, the router is still available (and the better value).

Qualcomm Atheros Launches Two New Killer Networking Cards

Subject: Networking | April 18, 2012 - 10:33 PM |
Tagged: wi-fi, qualcomm, networking, killer, Ethernet

Qualcomm Atheros today launched two new networking cards for the desktop and laptop markets. Built on technology from the former Killer Networks (now part of the Qualcomm subsidiary), the Wireless-N 1202 and E2200 provide Wi-Fi and Ethernet connectivity respectively.

Qualcomm_Atheros_blue_big.jpg

The Wireless-N 1202 is an 802.11n Wi-Fi and Bluetooth module with 2x2 MIMO antennas, which should provide plenty of Wireless N range. On the wired side of things, the E2200 is a Gigabit Ethernet network card for desktop computers. Both modules are powered by a Killer Networks chip and the Killer Network Manager software. The software allows users to prioritize gaming, audio, and video packets over other network traffic to deliver the best performance. Director of Business Development Mike Cubbage had the following to say.

“These products create an unprecedented entertainment and real-time communications experience for the end user by ensuring that critical online applications get the bandwidth and priority they need, when they need it.”
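Qualcomm Atheros has not said how the Killer Network Manager classifies traffic under the hood, but the general idea of letting latency-sensitive packets jump the queue can be sketched with a simple priority scheduler. The class-to-priority mapping and names below are invented for illustration and are not the actual Killer software.

```python
import heapq

# Hypothetical mapping: lower number drains first
PRIORITY_BY_CLASS = {"game": 0, "voice": 1, "video": 2, "bulk": 3}

class PriorityShaper:
    """Toy egress scheduler: latency-sensitive traffic goes out before bulk data."""

    def __init__(self):
        self._heap = []
        self._seq = 0                       # tie-breaker keeps equal priorities FIFO

    def enqueue(self, traffic_class, packet):
        prio = PRIORITY_BY_CLASS.get(traffic_class, 3)
        heapq.heappush(self._heap, (prio, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

shaper = PriorityShaper()
shaper.enqueue("bulk", "torrent chunk")
shaper.enqueue("game", "player position update")
print(shaper.dequeue())    # the game packet is sent first
```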

The E2200 Gigabit Ethernet NIC is available for purchase now, and the Wireless-N 1202 module will go on sale in May. More specific information on the products will be available after the official launch date (today) so stay tuned to PC Perspective.

ZTE Shows Off 1.7 Tbps Fiber Optic Network

Subject: Networking | March 16, 2012 - 05:58 AM |
Tagged: zte, wdm, networking, fiber optics, 1.7tbps

Chinese telecommunications equipment maker ZTE showed off a new fiber optic network capable of 1.7 Tbps over a single fiber cable. Computer World reports that the ZTE network trial utilizes Wavelength Division Multiplexing (WDM) technology, which packs more information into a single cable by employing multiple wavelengths that each form a separate channel.

fiber optic.jpg

The ZTE fiber network runs 1,750 kilometers (just over 1,087 miles) and uses eight channels, each capable of 216.4 Gbps, to send data at speeds up to 1.7312 Tbps. The company has no immediate plans to implement such a network. Rather, they wanted to prove that an upgrade to 200 Gbps per channel speeds is possible. To put their achievement in perspective, Comcast currently has fiber networks running at 10 Gbps, 40 Gbps, and 100 Gbps channel speeds, according to an article on Viodi.
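The headline number is just the per-channel rate multiplied across the eight wavelengths, which a two-line sanity check confirms (both figures taken from the article):

```python
channels = 8
per_channel_gbps = 216.4
print(f"{channels * per_channel_gbps / 1000:.4f} Tbps")   # 1.7312 Tbps, matching ZTE's claim
```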

And to think that I only recently upgraded to a Gigabit router! I can't wait to see this technology trickle down towards a time when home networks are running through fiber optic cables and doing so at terabit per second speeds!

Image courtesy kainet via Flickr Creative Commons.

Counter-terrorism Expert States Cyberthreats Should Never Be Taken Lightly

Subject: Networking | August 4, 2011 - 02:01 AM |
Tagged: security, networking, cyber warfare

Computer World posted a short news piece quoting the former director of the CIA’s Counter-terrorism Center, Cofer Black, as he explained why cyberthreats need to be taken more seriously by the nation. Black played a key role during the first term of the George W. Bush administration and was one of the counter-terrorism experts made aware of a likely attack on American soil prior to the September 11th attacks.

Black noted that the people with the power to act on these warnings were unwilling to do so without some measure of validation. He went on to say that while the general public was blindsided by the September 11th attacks, “I can tell that neither myself nor my people in counter-terrorism were surprised at all.”

binary numbers

With cyber warfare increasingly being used as an attack vector against foreign adversaries, the need for quick responses to threats will only increase. Further, the demand on security professionals to find and validate threats so that those in power can enact a response will be a major issue in the coming years. “The escalatory nature of such threats is often not understood or appreciated until they are validated,” Black offered in regards to the challenges decision makers face. He believes that decision makers do listen to the threats; however, they do not believe them. This behavior, he believes, will hinder the US’ ability to properly respond to likely threats.

With the recent announcement by the Department of Defense that physical retaliation to Internet-based attacks (in addition to counter-attacks) may be necessary, the need to respond quickly and proactively to likely threats is all the more imperative. Do you believe tomorrow's battles will encompass the digital plane as much as real life?

18,592 Academic Papers Released To Public Via Torrent

Subject: General Tech | July 21, 2011 - 07:29 PM |
Tagged: torrent, tech, networking, jstor

Aaron Swartz recently ran into legal trouble when charges were brought against him for downloading academic papers from JSTOR, an online pay-walled database, using MIT’s computer network. In response, a BitTorrent user named Greg Maxwell has decided to fight back against publishers who charge for access to academic papers by releasing 18,592 of them to the public in a 32.48 gigabyte torrent uploaded to The Pirate Bay.

library.png

Maxwell claims that the torrent consists of documents from the Philosophical Transactions of the Royal Society journal. According to GigaOm, the copyrights on these academic papers expired some time ago; however, the only way to access the documents has been through the pay-walled JSTOR database, where individual articles can cost as much as $19. While Maxwell claims to have gained access to the papers many years prior through legal means (likely through a college or library’s database access), he had been fearful of releasing the documents due to possible legal repercussions from the journal’s publishers. He claims that the legal troubles Swartz is facing for (allegedly) downloading the JSTOR library have fueled his passion and changed his mind about not releasing them.

Maxwell justifies the release by stating that the authors and universities do not benefit from their work, and that the move to digital distribution has yet to coincide with a reduction in prices. In the past, the high cost (sometimes paid by the authors) served to cover the mechanical process of binding and printing the journals. Maxwell further states that, to his knowledge, the money paid by those wishing to verify their facts and learn more from these academic works “serves little significant purpose except to perpetuate dead business models.” The pressure and expectation that authors must publish or face irrelevancy further entrenches the publishers’ business models.

Further, GigaOm quoted Maxwell in stating:

“If I can remove even one dollar of ill-gained income from a poisonous industry which acts to suppress scientific and historic understanding, then whatever personal cost I suffer will be justified . . . it will be one less dollar spent in the war against knowledge. One less dollar spent lobbying for laws that make downloading too many scientific papers a crime.”

Personally, I’m torn on the ethics of the issue. On one hand, these academic papers should be made available for free (or at least at the cost of production) to anyone who wants them, as they are written for the betterment of humanity and the pursuit of knowledge (or at least as a thought-provoking final paper). On the other hand, releasing the database via a torrent has its own issues. As far as non-violent protests go, this is certainly interesting and likely to get the attention of the publishers and academics. Whether it will cause them to reevaluate their business models, however, is rather doubtful (and unfortunate).

Image courtesy Isabelle Palatin.

Source: GigaOm

Gmail Now Supports Multiple Calls and Placing Calls On Hold

Subject: General Tech | July 21, 2011 - 04:27 PM |
Tagged: networking, voip, google

The Gmail blog recently showed off a new feature that allows you to put one call on hold while accepting another, a feature that standard phones have had for a long time now. Inside Gmail, you are able to start a call to another computer or a physical phone and then place that call on hold by hitting the “Hold” button. When you wish to return to the call, you simply hit the “Resume” button, just like on a normal phone. When a second person calls you, you will be asked to accept or reject the call, and if you accept it the first call will automatically be placed on hold.

multiplecalls.png

According to Google, the call hold feature “works across all call types (voice, video, and phone)” and the only caveat is that no more than two outgoing calls to physical phones can be active at a time. The only feature I see missing is integration with Google Music, which would allow me to set up custom hold music to the chagrin of telemarketers and customer support everywhere. After all, it is almost Friday and everyone would just love to hear some Rebecca Black, right!?

Source: Gmail Blog

US Pentagon To Test Cyber Warfare Tactics Using Internet Simulator

Subject: Editorial, General Tech | June 20, 2011 - 03:24 AM |
Tagged: simulator, networking, Internet, cyber warfare

Our world hosts numerous physical acts of aggression every day, and until a few years ago those acts remained in the (relatively) easily comprehensible physical world. However, the millions of connected servers and clients that overlay the nations of the world have rapidly become host to what is known as “cyber warfare,” which amounts to subversion of and attacks against another people or nation through electronic means, by targeting its people or its electronic and Internet-based infrastructure.

While physical acts of aggression are easier to examine (and gather evidence on) and attribute to the responsible parties, attacks on the Internet are generally the exact opposite. Thanks to the anonymity of the Internet, it is much more difficult to determine the originator of an attack. Further, whether a physical response in the form of military action is an appropriate answer to online attacks remains an open ethical question.

It seems as though the Pentagon is seeking answers to the issues of attack attribution and appropriate retaliation methods through the use of an Internet simulator dubbed the National Cyber Range. According to Computer World, two designs for the simulator are being constructed: one by Lockheed Martin with a $30.8 million USD grant and one by the Johns Hopkins University Applied Physics Laboratory with a $24.7 million USD grant, both provided by DARPA.

The National Cyber Range is designed to mimic human behavior in response to various DefCon and InfoCon (Information Operations Condition) levels. It will allow the Pentagon and authorized parties to study the effectiveness of war plan execution as it simulates offensive and defensive actions on the scale of nation-backed cyber warfare. Once the final National Cyber Range design has been chosen by DARPA from the two competing projects (by Johns Hopkins and Lockheed Martin), the government will be able to construct a toolkit that allows it to easily transfer and conduct cyber warfare testing from any facility.

 

Image courtesy Kurtis Scaletta via Flickr Creative Commons.

 

Demand For IT Workers Remains High In US Despite Economy

Subject: General Tech | June 12, 2011 - 01:12 PM |
Tagged: US, technology, networking, IT

The US has seen a rather rapid rise in unemployment in the last few years as companies cut back on staff and computing costs. According to Computer World, Tom Silver has been quoted as saying “several years ago companies cut back pretty far, particularly in infrastructure and technology development.” Silver further believes that the tech unemployment rate is half the national unemployment rate because companies need to replace aging hardware and software and deal with increased security threats. In a recent biannual hiring survey of 900 hiring managers and headhunters conducted by Dice.com, 65% of respondents said they plan to bring even more new workers into their businesses in the second half of 2011 than in the first half.

Workers with mobile operating system, hardware, and ecosystem expertise and Java development skills are the most desirable technology workers, according to Computer World, although anyone with an IT background and recent programming skills has a fairly good chance of landing a job in a market that is demanding now-rare talent. Employers are starting to be more confident in the economy and thus are more willing to invest in new workers. In an era where Internet security is more important than ever, skilled enterprise IT workers are becoming a valuable asset to employers, who are increasingly fighting for rare talent and incentivizing new hires with increased salaries.

Even though businesses remain cautious in their new hiring endeavors, it is definitely a good sign for people with tech backgrounds who are looking for work, as the market is ever so slowly starting to bounce back. For further information on the study, Computer World has the full scoop here.

Are you in or studying to enter into the IT profession? Do you feel confident in the US employers' valuation of their IT workers?

Dell Survey Suggests CEOs Believe Cloud Computing Is A Fad

Subject: General Tech | June 12, 2011 - 10:36 AM |
Tagged: networking, dell, cloud computing

A recent survey, conducted during the first two days of the Cloud Expo by Marketing Solutions and sponsored by Dell, suggests that IT professionals believe their less technical CEOs see cloud computing as a "fad" that will soon pass. IT departments, on the other hand, see the opportunities and potential of the technology. This gap between the two groups, according to Dell, lies in "the tendency of some enthusiasts to overhype the cloud and its capacity for radical change." Especially with a complex and still-evolving technology like cloud computing, CEOs are more likely to see the obstacles and cost of adoption than the potential benefits.

The study surveyed 223 respondents from various industries (excluding technology providers) and found that the attitudes of IT professionals differed considerably from what they felt their respective CEOs' attitudes were regarding "the cloud." The pie graphs in figure 1 below illustrate the gap between the two groups mentioned earlier. Where 47% of those in IT see cloud computing as a natural evolution of the trend towards remote networks and virtualization, only 26% believed that their CEOs agreed. Also, while 37% of IT professionals stated that cloud computing is a new way to think about their function in IT, "37 percent deemed their business leaders most likely to describe the cloud as having 'immense potential,' contrasted with only 22 percent of the IT pros who said that was their own top descriptor."

Further, the survey examined what both IT professionals and CEOs believed to be obstacles in the way of adopting cloud computing. On the IT professionals' front, 57% believed data security to be the biggest issue, 32% stated industry compliance and governance as the largest obstacle, and 27% thought disaster recovery options to be the most important barrier, contrasted with 51%, 30%, and 22% of CEOs. This comparison can be seen in figure 2 below.

While the survey has handily indicated that enterprises' IT departments are the most comfortable with the idea of adopting cloud computing, other areas of the business that could greatly benefit from the technology are much more opposed to it. As seen in figure 3, while 66% of IT departments are willing to advocate for cloud computing, only 13% of Research and Development, 13% of Strategy and Business Development, and a mere 5% of Supply Chain Management departments feel that they would move to cloud computing and benefit from it.

Dell stated that IT may be able to help many more functions and departments by advocating for and implementing cloud computing strategies in information-gathering and data-analysis departments. In doing so, IT could likely benefit the entire company and further educate CEOs on cloud computing's usefulness, closing the gap between the IT professionals' and CEOs' beliefs.

You can read more about the Dell study here. How do you feel about cloud computing?

Source: Dell

Cisco Faces Second Lawsuit For Allegedly Supplying Technology To Track Chinese Dissidents

Subject: Networking | June 12, 2011 - 04:24 AM |
Tagged: networking, Lawsuit, Internet, Cisco

Cisco, the worldwide networking heavyweight, is now facing a lawsuit from three Chinese website authors and Harry Wu. Mr. Wu is a Chinese political activist who spent 19 years in Chinese forced-labor prison camps, according to Network World. The charges allege that Cisco optimized its networking equipment and worked with the Chinese government to train officials to identify and track individuals on the Internet who speak out against the Chinese government with pro-democratic speech.

In a similar vein, the networking company was presented with an additional lawsuit last month by members of the Falun Gong religious group. This previous lawsuit claims that Cisco supplied networking technology to the Chinese government with the knowledge that the technology would be used to oppress the religious movement. Falun Gong is a religious and spiritual movement that emphasizes morality and the theoretical nature of life. It was banned in July 1999 by the Communist Party of China for being a “heretical organization”. Practitioners have been the victims of numerous human rights violations throughout the years.

Cisco has stated on the company’s blog that they strongly support free expression on the Internet. Further, they have responded to the allegations by stating that “Our company has been accused in a pair of lawsuits of contributing to the mistreatment of dissidents in China, based on the assertion that we customize our equipment to participate in tracking of dissidents. The lawsuits are inaccurate and entirely without foundation,” as well as “We have never customized our equipment to help the Chinese government—or any government—censor content, track Internet use by individuals or intercept Internet communications.”

It remains to be seen whether the allegations hold any truth; however, Cisco has been here before and is likely to see further lawsuits in the future. How do you feel about the Cisco allegations?

World IPv6 Day Goes Off Without A Hitch

Subject: Networking | June 8, 2011 - 10:19 PM |
Tagged: networking, ipv6, Internet

As has been said numerous times throughout the Internet, the pool of available IPv4 (32-bit) addresses is running out. This is due to an ever-increasing number of Internet users all over the world. In response, the standards for IPv4’s successor were developed by the Internet Engineering Task Force. The new IPv6 standard uses 128-bit addresses, which support 2^128 (approximately 340 undecillion) individual addresses. Compared to IPv4’s 32-bit addresses, which can support a little under 4.3 billion addresses (4,294,967,296 to be exact), the new Internet protocol will easily be able to assign everyone a unique IP address and will support multiple devices per user without the specific need for network address translation (NAT).
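If you want to verify the size of that jump yourself, the arithmetic is trivial (a quick snippet; the "undecillion" wording above follows the short-scale naming):

```python
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4 addresses: {ipv4_addresses:,}")      # 4,294,967,296
print(f"IPv6 addresses: {ipv6_addresses:.3e}")    # ~3.403e+38, roughly 340 undecillion
print(f"IPv6 space per IPv4 address: {ipv6_addresses // ipv4_addresses:.3e}")
```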

Today is World IPv6 Day, the first widespread (live) trial of IPv6. Over a 24-hour period, numerous large and small websites plan to switch on IPv6 to test consumer readiness for the protocol. Big companies such as Google, Facebook, Yahoo, YouTube, and Bing are participating by enabling IPv6 alongside IPv4 to test how many users are able to connect over IPv6 versus IPv4. Led by the Internet Society, consumers and businesses are encouraged to help test the new protocol by visiting the participants' sites and running readiness tests.

According to Network World, the trial has been going very well. They state that, according to Arbor Networks, overall IPv6 traffic volume “doubled during the first 12 hours.” Further, Akamai experienced a tenfold increase in IPv6 traffic just before the event. Fortunately, this increase did not result in an increase in DDoS attacks, which Akamai states were minimal. The June 8, 2011 event was used as a deadline of sorts by many businesses and resulted in many large corporations getting their IPv6 support up and running.

While the event only lasts 24 hours, some large websites will likely continue to run IPv6 alongside IPv4 addressing. Network World quotes Champagne as hoping more businesses will move to IPv6 after seeing the successes of the World IPv6 Day participants now that “everybody went into the water today and found out that the water is fine.”

It will certainly be interesting to see if the success continues and if consumers still on IPv4 can be made ready before the 32-bit address well runs dry; much like the move to digital TV broadcasting in the US, the transition may well see many deadline push-backs.  Are you ready for IPv6?

It's World IPv6 Day

Subject: General Tech, Networking | June 8, 2011 - 11:38 AM |
Tagged: what could go wrong, networking, ipv6, 404

On February 3 of this year, the last block of IPv4 addresses was allocated, which brought IPv6 to the forefront of the minds of many network heads.  NAT and internal LANs can extend the usage of IPv4 for quite a while, and many of the allocated addresses are not actually in use, which is a good thing since not many OSes support IPv6 natively, nor do many network appliances.

That brings us to today, when many major websites are culminating all of the internal testing they have been performing with a 24-hour dry run of IPv6.  Companies like Juniper and Cisco have been working to ensure their portion of the Internet's backbone will be able to handle the new addressing scheme so that clients can connect to the sites that are testing IPv6.  Google, Facebook, Yahoo and Bing have all turned on IPv6, as have several ISPs including Comcast, AT&T and Verizon; Verizon's LTE mobile network is also testing IPv6.  You can see a full list of the participants here.

This will of course involve a little pain, as new technology does tend to have sharp edges.  You may well see a few 404s or have other problems when surfing the net today, but overall it should not be too bad; Google predicts about a 1% failure rate.  The hackers will also be out to play today, likely using the larger packets for DDoS attacks.  Since an IPv6 packet is four times larger than an IPv4 packet, a flood of the new protocol will be super effective at DDoS attacks.  As well, most of the IPv6 packets will bypass companies' current deep packet inspection hardware and software: IPv6 is not backwards compatible with IPv4, so appliances configured only for IPv4 scanning will simply not inspect the IPv6 packets.  That is not to say that these devices are incapable of inspecting IPv6, simply that for a one-day test major providers are reluctant to completely reprogram them.  In the case of an attack, most of the participants have a plan in place to revert immediately back to IPv4.

So, if possible on your machine, fire up IPv6 and give it a whirl.  There is a simple test here to see if you are IPv6 compliant and if it is enabled, or a more comprehensive test here.
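If you would rather poke at it from a terminal than a browser, a few lines of Python can run a crude version of the same check. This simply tries to open an IPv6-only TCP connection to a dual-stacked host; the hostname is my choice for illustration and is not part of the official test sites.

```python
import socket

def has_ipv6_connectivity(host="ipv6.google.com", port=80, timeout=3):
    """Return True if an IPv6-only TCP connection to the host succeeds."""
    if not socket.has_ipv6:
        return False
    try:
        family, socktype, proto, _, sockaddr = socket.getaddrinfo(
            host, port, socket.AF_INET6, socket.SOCK_STREAM)[0]
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(timeout)
            s.connect(sockaddr)
        return True
    except OSError:
        return False

print("IPv6 works!" if has_ipv6_connectivity() else "IPv4 only for now.")
```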

ipv6.png

"Sponsored by the Internet Society, World IPv6 Day runs from 8 p.m. EST Tuesday until 7:59 p.m. EST Wednesday. The IT departments in the participating organizations have spent the last five months preparing their websites for an anticipated rise in IPv6-based traffic, more tech support calls and possible hacking attacks prompted by this largest-ever trial of IPv6."

Here is some more Tech News from around the web:

Tech Talk

Scientists Set New Download Speed Record Using Fiber And Single Laser

Subject: Networking | May 24, 2011 - 06:52 PM |
Tagged: networking, Internet, fiber

Using a single laser, scientists were able to encode data and transmit it over 50 km of single-mode fiber using “325 optical frequencies within a narrow spectral band of laser wavelengths.” The single laser was capable of handling 26 terabits of information per second in an energy-efficient manner, equivalent to the amount of data used by 400 million phone calls.

The technique used to encode and decode the optical data is called orthogonal frequency-division multiplexing (OFDM). It is a modulation technology that can be applied to both optical and electrical transmission methods. The data is broken down into numerous parallel streams (mathematically, via Fourier transforms), which greatly increases the transmission speed and the amount of bandwidth available. While electrical/copper-based systems are not able to transmit 26 terabits of information using OFDM, optical systems were able to encode that amount of data in these experiments without speed restrictions and while using “negligible energy.” Dr. Leuthold stated, “we had to come up with a technique that can process data about one million times faster than what is common in the mobile communications world.” Further, he stated that his experiment shows that optical technology still has room for transmission speed improvement, and that increases in bit-rate do not necessarily result in higher energy usage.
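The core OFDM trick, splitting one fast serial stream into many slow parallel subcarrier streams with an inverse Fourier transform and recovering them with a forward transform, can be shown in a few lines of numpy. This is a bare-bones software illustration of the modulation round trip, not the optical FFT hardware the researchers actually built.

```python
import numpy as np

num_subcarriers = 64

# Random QPSK symbols, one per subcarrier: every subcarrier carries 2 bits in parallel
bits = np.random.randint(0, 2, size=2 * num_subcarriers)
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# Transmitter: the inverse FFT turns the parallel symbols into one time-domain
# waveform; a cyclic prefix guards against inter-symbol interference.
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-16:], time_signal])

# Receiver: strip the prefix and run a forward FFT to recover every subcarrier.
rx_symbols = np.fft.fft(tx[16:])

print(np.allclose(rx_symbols, symbols))   # True over an ideal, noiseless channel
```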

The important aspect of Dr. Leuthold’s research lies in the energy efficiency inherent in reducing the number of lasers and fiber nodes required to transmit 26 terabits per second of data. Using simple optical technologies, his team was able to greatly increase the amount of bandwidth in a single fiber line. Japanese researchers have been able to achieve 109 terabits per second download speeds; however, they had to use multiple lasers to do so. Dr. Leuthold stressed that “it’s the fact that it’s one laser” that is the important result of his research.

Image courtesy Kainet via Flickr Creative Commons

Source: ABC News