Some computer components get all the glory. The usual lineup of FPS-crushing GPUs, Handbrake-dominating CPUs, and super-fast memory ends up with most of the headlines. Yet behind the scenes are components that are pivotal to our use and enjoyment of computers and receive very little fanfare. Without networking we wouldn't have file sharing, LAN parties, or even the Internet itself. Without routers and network adapters, we wouldn't have networking.
ASUS recently sent a whole slew of networking components our way, and we've decided to take them for a spin to see if they're worth your hard-earned dollars. Our box of ASUS goodies included:
- ASUS RT-N66U Gigabit Router – Dual Band Wireless-N900
- ASUS PCE-N10 - Wireless-N PCI-E Adapter
- ASUS PCE-N15 - Wireless-N PCI-E Adapter
- ASUS USB-N53 - Dual Band Wireless N Adapter
- ASUS USB-N66 - Dual Band Wireless-N900 Adapter
Without further ado, let’s jump in and tackle each one.
ASUS RT-N66U Gigabit Router – Dual Band Wireless-N900
Routers are one of those components that most of us don’t really think about unless something goes horribly wrong. Most people will buy one they find on a big box store shelf (or even worse, just use their ISP’s router), pull it out of the box, plug a few cables into it and then forget about it in a closet for a few years.
Subject: Networking | May 16, 2012 - 09:57 PM | Tim Verry
Tagged: wifi, router, networking, netgear, 802.11ac
Following up on the announcement by Buffalo Technology, Netgear has released its own 802.11ac wireless router, the R6300. (PC Perspective recently ran a giveaway for the R6300, which you can read about here.) In addition to the flagship 802.11ac router, Netgear announced a slimmed-down version, the R6200, and the A6200 WiFi USB dongle.
The Netgear R6300 is the company's highest-end wireless router supporting the 802.11ac WiFi standard. It supports 802.11ac speeds up to 1300 Mbps (450 Mbps over wireless-n) and is backwards compatible with the 802.11 a/b/g/n standards. It also has two USB 2.0 ports that can be used to share hard drives and printers across the network. Further, the "5G WiFi" router is powered by a Broadcom chipset, which should open the door to third-party firmware.
In addition to the above router, Netgear has announced the R6200 wireless router. It is compatible with the upcoming 802.11ac standard, but at reduced speeds. It features approximately 900 Mbps transfer rates over the “ac” standard and up to 300 Mbps over the 802.11n standard. The router is backwards compatible with all the older consumer standards (a/b/g/n), and it features a single USB 2.0 port to share a printer or hard drive to computers on the LAN.
Last up in the announcement is the Netgear A6200. This device is a USB WiFi dongle that supports the 802.11ac standard as well as existing a/b/g/n networks. It claims to deliver enough speed for HD streaming of videos, though Netgear has not stated if it will be able to take advantage of the full 1300 Mbps theoretical maximum connection. The WiFi adapter features a swiveling antenna and a docking station for use with desktop systems.
The other neat feature that the new routers support is the Netgear Genie application, which allows users to monitor and control the network using an application on their computer or smartphone (iOS and Android). They also feature Netgear MyMedia, printer sharing, guest network access, a DLNA server, parental controls, and automatic WiFi security.
The Netgear R6300 router is available for purchase now with an MSRP of $199.99. The R6200 router and A6200 WiFi dongle will be available for purchase in Q3 2012 with suggested retail prices of $179.99 and $69.99 respectively.
Subject: Networking | May 15, 2012 - 05:38 PM | Tim Verry
Tagged: wireless, router, networking, ethernet bridge, buffalo, 802.11ac
Netgear and Buffalo have been working hard to build and bring to market new wireless routers based on the (pending ratification) 802.11ac standard. PC Perspective recently ran a giveaway for the Netgear 802.11ac router, but it seems that Buffalo has managed to beat Netgear to market. In fact, Buffalo yesterday released two 802.11ac devices: the AirStation™ WZR-D1800H wireless router and the WLI-H4-D1300 wireless Ethernet bridge. Both devices are powered by Broadcom's 5G WiFi chips (Broadcom's name for 802.11ac, the fifth generation of consumer WiFi) and are based on the IEEE draft that is set to become an official standard early next year.
The Buffalo 802.11ac Router (left: front, right: rear view)
The router and Ethernet bridge both support the upcoming 802.11ac standard as well as the current 802.11 b, g, and n standards, so they are backwards compatible with all your devices. They also support all the normal functions of any other router or bridge; the draft 802.11ac support is what differentiates these products. The router stands vertically and has reset and USB-eject buttons, one USB 2.0 port, four Gigabit Ethernet LAN ports, and one Gigabit Ethernet WAN port. Below the WAN port is a power button and DC-in jack. The Buffalo Ethernet bridge allows users to connect Ethernet devices to a network over WiFi. It looks very similar to the router but does not have a WAN port or USB port on the back, and it acts only as a bridge to a larger network, not as a router. The largest downside to the Ethernet bridge is pricing: (although out of stock now) Newegg has the bridge listed for the same price as the full-fledged router. At that price, it does not have much value; users would be better off buying two routers and disabling the router features on one (and because the Broadcom chipset should enable custom firmware, this should be possible soon).
The Buffalo 802.11ac Ethernet Bridge (left: front, right: rear view)
What makes these two devices interesting, though, is the support for the "5G WiFi" 802.11ac wireless technology. This is the first time that wireless connections have a (granted, theoretical) higher transfer speed than the wired Gigabit connections, which is quite the feat. 802.11ac is essentially 802.11n with several improvements, operating only on channels in the 5 GHz spectrum. The pending standard also uses wider 80 MHz or 160 MHz channels, 256-QAM modulation, and up to eight antennas (much like 802.11n's MIMO technology) to deliver much faster wireless transfer rates than consumers have had available previously. The other big technology in the upcoming WiFi standard is beamforming, which allows wireless devices to communicate with their access point(s) to determine relative spatial position. That data is then used to adjust the transmitted signals so that they are sent in the direction of the access point at the optimum power levels. This approach differs from traditional WiFi devices, which broadcast omnidirectionally (think big circular waves coming out of your router), because the signals are more focused. By focusing the signals, users get better range and can avoid WiFi dead spots.
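Those headline speeds fall out of the standard OFDM rate formula. The sketch below is a back-of-the-envelope Python calculation using commonly cited 802.11ac parameters (234 data subcarriers in an 80 MHz channel, 256-QAM with 5/6 coding, and a 3.6 µs short-guard-interval symbol); the constants are illustrative assumptions, not figures from the announcement.

```python
# Back-of-the-envelope 802.11ac PHY rate:
#   subcarriers * bits-per-symbol * coding rate / symbol duration, per spatial stream.
DATA_SUBCARRIERS_80MHZ = 234   # data-carrying OFDM subcarriers in an 80 MHz channel
BITS_PER_SYMBOL_256QAM = 8     # 256-QAM packs 8 bits per subcarrier per symbol
CODING_RATE = 5 / 6            # highest 802.11ac convolutional coding rate
SYMBOL_TIME_US = 3.6           # OFDM symbol duration with the 0.4 us short guard interval

def phy_rate_mbps(spatial_streams: int) -> float:
    """Theoretical 802.11ac data rate in Mbps for a given number of spatial streams."""
    per_stream = (DATA_SUBCARRIERS_80MHZ * BITS_PER_SYMBOL_256QAM * CODING_RATE
                  / SYMBOL_TIME_US)
    return spatial_streams * per_stream

print(phy_rate_mbps(3))  # three streams -> the routers' advertised 1300 Mbps
```

Three spatial streams at roughly 433 Mbps each is where the 1300 Mbps figure comes from; 802.11n's 450 Mbps works out the same way with narrower 40 MHz channels and 64-QAM.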
Hajime Nakai, Chief Executive Officer at Buffalo Technology stated that “along with Broadcom, we continue to demonstrate our commitment to innovation by providing a no-compromise, future proofed wireless infrastructure for consumers’ digital worlds.”
The Buffalo AirStation™ WZR-D1800H router and WLI-H4-D1300 Ethernet bridge are available for purchase now for around $179.99 USD. The Ethernet bridge is listed as out of stock on Newegg; however, the router is still available (and the better value).
Subject: Networking | April 18, 2012 - 10:33 PM | Tim Verry
Tagged: wi-fi, qualcomm, networking, killer, Ethernet
Qualcomm Atheros today launched two new networking cards for the desktop and laptop markets. Developed by the Qualcomm subsidiary formerly known as Killer Networks, the Wireless-N 1202 and E2200 provide Wi-Fi and Ethernet connectivity based on Killer Networks' technology.
The Wireless-N 1202 is an 802.11n Wi-Fi and Bluetooth module with 2x2 MIMO antennas, which should provide plenty of Wireless-N range. On the wired side of things, the E2200 is a Gigabit Ethernet network card for desktop computers. Both products are powered by Killer Networks' chips and the Killer Network Manager software, which allows users to prioritize gaming, audio, and video packets over other network traffic to deliver the best performance. Director of Business Development Mike Cubbage had the following to say.
“These products create an unprecedented entertainment and real-time communications experience for the end user by ensuring that critical online applications get the bandwidth and priority they need, when they need it.”
The E2200 Gigabit Ethernet NIC is available for purchase now, and the Wireless-N 1202 module will go on sale in May. More specific information on the products will be available after the official launch date (today), so stay tuned to PC Perspective.
Subject: Networking | March 16, 2012 - 05:58 AM | Tim Verry
Tagged: zte, wdm, networking, fiber optics, 1.7tbps
Chinese telecommunications provider ZTE showed off a new fiber optic network capable of 1.7 Tbps over a single fiber cable. Computerworld reports that the ZTE network trial utilizes Wavelength Division Multiplexing (WDM) technology to pack more information through a single cable by employing multiple wavelengths that act as separate channels.
The ZTE fiber network runs 1,750 kilometers (just over 1,087 miles) and uses eight channels, each capable of 216.4 Gbps, to send data at speeds up to 1.7312 Tbps. The company has no immediate plans to implement such a network; rather, it wanted to prove that an upgrade to 200 Gbps per-channel speeds is possible. To put the achievement in perspective, Comcast currently has fiber networks running at 10 Gbps, 40 Gbps, and 100 Gbps channel speeds, according to an article on Viodi.
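The headline number is simply the per-channel rate multiplied across the WDM wavelengths; a quick sanity check using the figures from the article:

```python
# Aggregate throughput of a WDM link: number of wavelength channels * per-channel rate.
channels = 8               # wavelengths multiplexed onto the single fiber
gbps_per_channel = 216.4   # per-channel rate from ZTE's trial

total_gbps = channels * gbps_per_channel
print(total_gbps / 1000)   # -> 1.7312 Tbps, matching the reported figure
```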
And to think that I only recently upgraded to a Gigabit router! I can't wait to see this technology trickle down towards a time when home networks are running through fiber optic cables and doing so at terabit per second speeds!
Image courtesy kainet via Flickr Creative Commons.
Subject: Networking | August 4, 2011 - 02:01 AM | Tim Verry
Tagged: security, networking, cyber warfare
Computerworld posted a short news piece quoting the former director of the CIA's Counterterrorism Center, Cofer Black, as he explained why cyber threats need to be taken more seriously by the nation. Cofer Black played a key role during the first term of the George W. Bush administration and was one of the counter-terrorism experts made aware of a likely attack on American soil prior to the September 11th attacks.
Black noted that the people in a position with the power to act on these warnings were unwilling to act without some measure of validation. He goes on to say that while the general public was blindsided by the September 11th attacks, “I can tell that neither myself nor my people in counter-terrorism were surprised at all.”
With cyber warfare increasingly utilized as an attack vector by foreign adversaries, the need for quick responses to threats will only increase. Further, the demand on security professionals to search for and validate threats so that those in power can enact a response will be a major issue in the coming years. "The escalatory nature of such threats is often not understood or appreciated until they are validated," Black offered regarding the challenges decision makers face. He believes that decision makers do listen to the threats; however, they do not believe them. This behavior, he believes, will hinder the US' ability to properly respond to likely threats.
With the recent announcement by the Department of Defense that physical retaliation to Internet-based attacks (in addition to counter attacks) may be necessary, the need to respond quickly and proactively to likely threats is all the more imperative. Do you believe tomorrow's battles will be fought in the digital domain as much as the physical one?
Subject: Networking | June 12, 2011 - 04:24 AM | Tim Verry
Tagged: networking, Lawsuit, Internet, Cisco
Cisco, the worldwide networking heavyweight, is now facing a lawsuit from three Chinese authors who wrote for Harry Wu's websites. Mr. Wu is a Chinese political activist who spent 19 years in Chinese forced-labor prison camps, according to Network World. The charges allege that Cisco optimized its networking equipment for the Chinese government and trained officials to identify and track individuals on the Internet who speak out against the Chinese government with pro-democratic speech.
In a similar vein, the networking company was hit with an additional lawsuit last month by members of the Falun Gong religious group. That lawsuit claims that Cisco supplied networking technology to the Chinese government with the knowledge that it would be used to oppress the religious movement. Falun Gong is a religious and spiritual movement that emphasizes morality and the spiritual nature of life. It was banned in July 1999 by the Communist Party of China as a "heretical organization", and practitioners have been the victims of numerous human rights violations throughout the years.
Cisco has stated on the company’s blog that they strongly support free expression on the Internet. Further, they have responded to the allegations by stating that “Our company has been accused in a pair of lawsuits of contributing to the mistreatment of dissidents in China, based on the assertion that we customize our equipment to participate in tracking of dissidents. The lawsuits are inaccurate and entirely without foundation,” as well as “We have never customized our equipment to help the Chinese government—or any government—censor content, track Internet use by individuals or intercept Internet communications.”
It remains to be seen whether the allegations hold any truth; however, Cisco has been here before and is likely to see further lawsuits in the future. How do you feel about the Cisco allegations?
Subject: Networking | June 8, 2011 - 10:19 PM | Tim Verry
Tagged: networking, ipv6, Internet
As has been said numerous times throughout the Internet, the pool of available IPv4 (32-bit) addresses is running out, due to an ever-increasing number of Internet users all over the world. In response, the Internet Engineering Task Force developed the standards for IPv4's successor. The new IPv6 standard uses 128-bit addresses, which support 2^128 (approximately 340 undecillion) individual addresses. Compared to IPv4's 32-bit addresses, which can support a little under 4.30 billion addresses (4,294,967,296 to be exact), the new Internet protocol will easily be able to assign everyone a unique IP address and will support multiple devices per user without the specific need for network address translation (NAT).
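The scale of the jump is easy to quantify; both figures in the paragraph above fall straight out of the address widths:

```python
# Address space of IPv4 (32-bit) vs IPv6 (128-bit).
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)   # 4294967296 (~4.3 billion)
print(ipv6_addresses)   # ~3.4 x 10^38 (~340 undecillion)

# Every single IPv4 address could be expanded into 2**96 IPv6 addresses.
print(ipv6_addresses // ipv4_addresses == 2 ** 96)
```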
Today is World IPv6 Day, the first widespread (live) trial of IPv6. Over a 24-hour period, numerous large and small websites plan to switch on IPv6 to test consumer readiness for the protocol. Big companies such as Google, Facebook, Yahoo, YouTube, and Bing are participating by enabling IPv6 alongside IPv4 to measure how many users are able to connect over IPv6 versus IPv4. The event is led by the Internet Society, and consumers and businesses are encouraged to help test the new protocol by visiting the participants' sites and running readiness tests.
According to Network World, the trial has been going very well. They state that, according to Arbor Networks, overall IPv6 traffic volume "doubled during the first 12 hours." Further, Akamai experienced a tenfold increase in IPv6 traffic just before the event. Fortunately, this increase did not bring an increase in DDoS attacks, which Akamai states were minimal. The June 8, 2011 event was used as a deadline of sorts by many businesses and resulted in many large corporations getting IPv6 up and running.
While the event only lasts 24 hours, some large websites will likely continue to run IPv6 alongside IPv4. Network World quotes Champagne, who hopes more businesses will move to IPv6 after seeing the successes of the World IPv6 Day participants, now that "everybody went into the water today and found out that the water is fine."
It will certainly be interesting to see if the success continues and if consumers still on IPv4 can be made ready before the 32-bit address well runs dry; by comparison, the move to digital TV broadcasting in the US saw many deadline push-backs. Are you ready for IPv6?
Subject: General Tech, Networking | June 8, 2011 - 11:38 AM | Jeremy Hellstrom
Tagged: what could go wrong, networking, ipv6, 404
On February 3 of this year, the last block of IPv4 addresses was allocated, which brought IPv6 to the forefront of the minds of many network heads. NAT and internal LANs can extend the usage of IPv4 for quite a while, and many of the allocated addresses are not actually in use, which is a good thing, as not many OSes support IPv6 natively, nor do many network appliances.
That brings us to today, when many major websites are culminating all of the internal testing they have been performing with a 24-hour dry run of IPv6. Companies like Juniper and Cisco have been working to ensure their portion of the Internet's backbone will be able to handle the new addressing scheme so that clients can connect to the sites testing IPv6. Google, Facebook, Yahoo and Bing have all turned on IPv6, as have several ISPs including Comcast, AT&T and Verizon; Verizon's LTE mobile network is also testing IPv6. You can see a full list of the participants here.
This will of course involve a little pain, as new technology does tend to have sharp edges. You may well see a few 404s or have other problems when surfing the net today, but overall it should not be too bad; Google predicts about a 1% failure rate. The hackers will also be out to play today, likely using the larger packets for DDoS attacks: since an IPv6 address is four times larger than an IPv4 address, IPv6 packets carry more overhead, making a flood of the new protocol that much more effective as a DDoS attack. As well, most IPv6 packets will bypass companies' current deep packet inspection hardware and software. IPv6 is not backwards compatible with IPv4, so the network appliances used for that type of scan will not, as currently configured, inspect IPv6 packets. That is not to say that these devices cannot inspect IPv6 packets, simply that for a one-day test, major providers are reluctant to completely reprogram them. In the case of an attack, most of the participants have a plan in place to revert immediately back to IPv4.
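The size difference is in the addresses themselves (and the headers that carry them), which Python's standard `ipaddress` module shows directly; the example addresses below are from the reserved documentation ranges, chosen for illustration:

```python
import ipaddress

# An IPv4 address is 32 bits (4 bytes); an IPv6 address is 128 bits (16 bytes).
v4 = ipaddress.ip_address("192.0.2.1")     # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")   # documentation-range IPv6 address

print(len(v4.packed))   # 4 bytes
print(len(v6.packed))   # 16 bytes: four times larger
```

Those larger source and destination fields are a big part of why the fixed IPv6 header is 40 bytes versus IPv4's minimum 20.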
"Sponsored by the Internet Society, World IPv6 Day runs from 8 p.m. EST Tuesday until 7:59 p.m. EST Wednesday. The IT departments in the participating organizations have spent the last five months preparing their websites for an anticipated rise in IPv6-based traffic, more tech support calls and possible hacking attacks prompted by this largest-ever trial of IPv6."
Here is some more Tech News from around the web:
- Skype hangs up on users yet again @ The Register
- Microsoft reportedly considers launching own-brand tablet @ DigiTimes
- New AMD CEO imminent @ SemiAccurate
- Chrome 12 adds a raft of new features @ The Inquirer
- TomTom GO 2535 M LIVE Review @ TechReviewSource
- The Post PC era begins @ t-break
- Win 2 Logitech diNovo Keyboards for Notebooks @ t-break
Subject: Networking | May 24, 2011 - 06:52 PM | Tim Verry
Tagged: networking, Internet, fiber
Using a single laser, scientists were able to encode data and transmit it over 50 km of single-mode fiber using "325 optical frequencies within a narrow spectral band of laser wavelengths." The single laser was capable of handling 26 terabits of information per second in an energy-efficient manner, which is equivalent to the amount of data used by 400 million phone calls.
The technique used to encode and decode the optical data is called orthogonal frequency-division multiplexing (OFDM). It is a modulation technology that can be applied to both optical and electrical transmission methods. The data is broken down into numerous parallel streams (using Fourier-transform mathematics), which greatly increases the transmission speed and the amount of bandwidth available. While electrical/copper-based systems are not able to transmit 26 terabits of information using OFDM, optical systems were able to encode that amount of data in the experiments without speed restrictions and while using "negligible energy." Dr. Leuthold stated, "we had to come up with a technique that can process data about one million times faster than what is common in the mobile communications world." Further, he stated that his experiment shows optical technology still has room for transmission speed improvement, and that increases in bit rate do not necessarily result in higher energy usage.
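The parallel-streams idea can be sketched in a few lines of NumPy: an inverse FFT maps many independent symbol streams onto orthogonal subcarriers for transmission, and an FFT separates them cleanly at the receiver. This is a toy digital illustration of OFDM, not the researchers' optical implementation; the subcarrier count and QPSK mapping are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64

# One QPSK symbol per subcarrier: each subcarrier carries its own parallel data stream.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Transmit: the inverse FFT combines all subcarrier streams into one time-domain signal.
time_signal = np.fft.ifft(symbols)

# Receive: the FFT separates the orthogonal subcarriers back into the parallel streams.
recovered = np.fft.fft(time_signal)

print(np.allclose(recovered, symbols))  # streams recovered without inter-carrier interference
```

The orthogonality of the subcarriers is what lets all the streams share one channel without interfering, which is the property the optical experiment exploits at vastly higher rates.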
The important aspect of Dr. Leuthold's research lies in the energy efficiency inherent in reducing the number of lasers and fiber nodes required to transmit 26 terabits of data per second. Using simple optical technologies, the researchers were able to greatly increase the amount of bandwidth in a single fiber line. Japanese researchers have achieved 109 terabits per second transfer speeds; however, they had to use multiple lasers to do so. Dr. Leuthold pointed to "the fact that it's one laser" as the important result of his research.
Image courtesy Kainet via Flickr Creative Commons