Back in February we took a quick first look at the eero Home Wi-Fi System, one of several new entrants in the burgeoning mesh networking market. Like its competitors, eero aims to increase home Wi-Fi performance and coverage by switching from a system built around a single powerful standalone router to one which utilizes multiple lower-power wireless base stations positioned throughout a home.
The idea is that these multiple wireless access points, which are configured to communicate with each other automatically via proprietary software, can not only increase the range of your home Wi-Fi network, but also reduce the burden that our ever-increasing number of wireless devices places on any single access point.
There are a number of mesh Wi-Fi systems already available from established networking companies and industry newcomers alike, with more set for release this year. We don't have every system ready to test just yet, but join us as we take a look at three popular options to see if mesh networking performance lives up to the hype.
Subject: General Tech | March 6, 2017 - 01:42 PM | Jeremy Hellstrom
Tagged: wifi, networking
Our own Sebastian Peak has delved into the nightmare world of testing Wi-Fi, specifically MU-MIMO, and explained some of the difficulties you encounter when testing wireless networks. It is now Ars Technica's turn to try to explain why your 2.4GHz router never delivers the advertised 1,000 Mbps, as well as how to test your actual performance. As with many products, the marketing team has little interest in what the engineers are saying; they simply want phrases they can stick on their packaging and PR materials. While the engineers are still pointing out that even the best-case scenario, involving a single user less than 10 feet away with clear line of sight, will not reach the theoretical performance peak, the PR with that high number has already been emailed and the packaging is being printed.
Drop by Ars Technica for a look at how the current state of WiFi has evolved into this mess, as well as a dive into how the new technologies work and what performance you can actually expect from them.
"802.11n was introduced to the consumer public around 2010, promising six hundred Mbps. Wow! Okay, so it's not as fast as the gigabit wired Ethernet that just started getting affordable around the same time, but six times faster than wired Fast Ethernet, right? Once again, a reasonable real-life expectation was around a tenth of that. Maybe. On a good day. To a single device."
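For context on where that 600 Mbps figure comes from: it is a PHY-rate ceiling derived directly from the 802.11n modulation parameters, not a throughput anyone measures. A quick sketch of the arithmetic, using the spec's best-case values (40 MHz channel, 64-QAM, rate-5/6 coding, short guard interval):

```python
# Peak 802.11n PHY rate from the spec's best-case parameters.
# 40 MHz channel -> 108 data subcarriers; 64-QAM -> 6 coded bits per
# subcarrier; rate-5/6 coding; 3.6 us symbol with the 400 ns short GI.

def ht_phy_rate_mbps(spatial_streams):
    data_subcarriers = 108
    coded_bits_per_subcarrier = 6
    coding_rate = 5 / 6
    symbol_time_us = 3.6
    bits_per_symbol = data_subcarriers * coded_bits_per_subcarrier * coding_rate
    return spatial_streams * bits_per_symbol / symbol_time_us

print(ht_phy_rate_mbps(1))  # one stream (MCS 7): ~150 Mbps
print(ht_phy_rate_mbps(4))  # four streams (MCS 31): ~600 Mbps, the box number
```

Real-world throughput then loses MAC overhead, retransmissions, contention, and distance, which is how the 600 on the box becomes roughly a tenth of that in practice.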
Here is some more Tech News from around the web:
- AMD Ryzen with VMWare ESXi: A Pink Screen of Death @ [H]ard|OCP
- Windows 10 Build 15048 Has a Windows Mixed Reality Demo You Can Try @ Slashdot
- User rats out IT team for playing games at work, gets them all fired @ The Register
- If we must have an IoT bog roll holder, can we at least make it secure? @ The Register
- Microsoft wants you to plan a new generation of legacy systems @ The Register
- Nintendo tells Switch users that dead pixels are not its problem @ The Inquirer
- One million decrypted Gmail and Yahoo accounts for sale on the bloody dark web @ The Inquirer
- The iflix HD Streaming Q&A With Ash Crick @ TechARP
Subject: General Tech | January 8, 2017 - 11:58 AM | Tim Verry
Tagged: networking, netgear, CES 2017, CES
Netgear introduced a new semi-managed switch under its Nighthawk brand called the Nighthawk S8000. The new gigabit switch offers eight ports and a GUI web management interface.
The Nighthawk S8000 keeps the stealth bomber design aesthetic of its larger router brethren with clean lines, sharp angles, and a dark zinc alloy housing. The one downside to this design is that these switches are not stackable, but if you need that many ports you are probably looking at a bigger single switch anyway.
Exact specifications are not yet available, but the Layer 2 GS808E switch reportedly offers per-port prioritization and QoS (Quality of Service), DoS (Denial of Service) protection, and IGMP snooping (Netgear does not list which version, so I can't say whether it would play well with AT&T U-verse running TV and PCs on the same network). There are reportedly three preset modes (gaming, media streaming, and standard LAN) plus two user-customizable profiles that can be set for each port depending on usage. Further, there are four levels of prioritization (Netgear's site lists three in some places).
The gigabit switch does support link aggregation (port trunking) of up to four ports, which can be configured as a single 4Gbps connection to devices that also support link aggregation, or for redundancy in case one port or cable fails. The use case for something like this would be multiple PCs sending and receiving large amounts of data from a NAS at the same time, where the wider connection back to the switch can be meaningfully utilized.
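One subtlety worth noting: 802.3ad-style aggregation distributes traffic by hashing each flow's header fields onto one member link, so any single transfer still tops out at 1Gbps; only multiple simultaneous flows see the wider pipe. A rough illustration of the idea (the hash and field choices here are simplified stand-ins, not Netgear's actual implementation):

```python
import zlib

# Toy flow-to-link hashing: every packet of a given flow lands on the
# same member link, so one flow never exceeds a single port's 1 Gbps.
def choose_link(src_mac, dst_mac, num_links=4):
    key = f"{src_mac}->{dst_mac}".encode()
    return zlib.crc32(key) % num_links

# One big NAS transfer: every packet hashes to the same link.
assert choose_link("pc1", "nas") == choose_link("pc1", "nas")

# Several clients hitting the NAS at once can spread across the links.
links = {choose_link(pc, "nas") for pc in ("pc1", "pc2", "pc3", "pc4")}
print(links)  # some subset of {0, 1, 2, 3}
```

This is exactly the multi-PC-to-NAS scenario above: the aggregate 4Gbps pipe pays off when several flows are in flight at once.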
The Nighthawk S8000 comes with a 3 year warranty and will be available in March for $99.99.
There may be better options, especially at $99.99, but fans of Netgear's Nighthawk wireless routers might be interested. It is hard to say whether it is worth the price, as independent reviews are not out yet. For those interested, PC Gamer has more photos of the switch.
Follow all of our coverage of the show at https://pcper.com/ces!
Subject: Networking | September 15, 2016 - 04:42 PM | Sebastian Peak
Tagged: Rivet Networks, NiC, networking, Killer Networking, Killer E2500, Ethernet, controller
Rivet Networks has announced the new Killer E2500 Gigabit Ethernet controller, and it is partnering with MSI and GIGABYTE to bring the new controller to consumer gaming motherboards.
“The Killer E2500 delivers powerful networking technology to gamers and performance users, including significant new enhancements to its Advanced Stream Detect 2.0 Technology and the all new Killer Control Center. In addition to detecting and optimally prioritizing your games, video, and voice applications with Advanced Stream Detect 2.0 Technology, the Killer E2500 also detects and manages 500+ of the top global websites.”
The networking performance is said to be improved considerably with the new controller and software, with "Lag and Latency Reduction Technology":
“Through its patented technology, Killer is able to get network packets to your applications and web browsers up to 25% faster than the competition during single application usage, and potentially by more than 10x faster when multitasking.”
As I quickly realized when reviewing the Killer Wireless-AC 1535 last year, the software is just as important as the hardware with a Killer adapter. For the new E2500, the Killer Control Center has been redesigned to provide “users full control of all aspects of their system’s networking performance”.
Rivet Networks describes the functionality of this Killer Control Center software, which allows users to control:
- The priority of each application and popular website
- The bandwidth used by each application and popular website
- The Killer interface that each application is going over
- The total bandwidth being used by the system
I found that enabling the Killer software's bandwidth management significantly affected latency when gaming (which you can see here, again revisiting the AC 1535 review), and Rivet Networks is confident that this new system will offer even better performance. We’ll know exactly how the new controller and software perform once we have one of the new motherboards featuring this E2500 controller onboard.
Actiontec MoCA WCB6200Q and ECB6200 Review
Occasionally we’ll get some gear rolling through the PCPer offices that is a bit off the beaten path. The pair of devices on tap today is something you may not come across often, and could very well be something you have never even heard of. They are niche products serving a niche need, and that niche is “MoCA.” Today we’re looking at the Actiontec WCB6200Q 802.11ac MoCA 2.0 Wireless Network Extender and its partner in crime, the Actiontec ECB6200 Bonded MoCA 2.0 Network Adapter.
Even before the term "Internet of Things" was coined, Steve Gibson proposed home networking topology changes designed to deal with this new looming security threat. Unfortunately, little or no thought is given to the security of the devices in this rapidly growing market.
One of Steve's proposed network topology adjustments involved daisy-chaining two routers together: the WAN port of an IoT-purposed router would be attached to a LAN port of the border/root router.
In this arrangement, only IoT/smart devices are connected to the internal (IoT-purposed) router. The idea was to isolate insecure or poorly implemented devices from the more valuable personal data devices, such as a NAS holding important files and/or backups. Unfortunately, this clever arrangement leaves any device directly connected to the “border” router open to attack by infected devices running on the internal/IoT router. Such a device could perform a simple traceroute and identify that an intermediate network exists between it and the public Internet. Any device running under the border router with known (or worse, unknown!) vulnerabilities can then be immediately exploited.
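The traceroute trick is straightforward: any hop inside the RFC 1918 private ranges before traffic reaches the public Internet betrays an extra NAT layer. A minimal sketch of that check, with a made-up hop list standing in for real traceroute output:

```python
import ipaddress

# RFC 1918 private ranges; a hop in any of these is a router on a
# private network rather than on the public Internet.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

# Hypothetical hops as seen from a device on the inner (IoT) router:
hops = ["192.168.2.1",  # inner router's gateway
        "192.168.1.1",  # border router: a second private hop
        "203.0.113.1"]  # first public hop (documentation address)

private_hops = sum(is_rfc1918(h) for h in hops)
print(private_hops)  # 2 -> the device can infer a double-NAT layout
```

Two private hops before the public Internet is exactly the signature an infected IoT device would look for to learn it sits behind the inner router.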
Gibson's alternative formula reversed the positions of the IoT and border routers. Unfortunately, this solution also came with a nasty side effect: the border router (now used as the "secure" internal router) became subject to all manner of man-in-the-middle attacks. Since a local Ethernet network basically trusts all traffic within its domain, an infected device on the IoT router (now sitting between the internal router and the public Internet) can manipulate or eavesdrop on any traffic emerging from the internal router. The potential consequences of this flaw are obvious.
The third time really is the charm for Steve! On February 2nd of this year (Episode #545 of Security Now!), Gibson presented us with his third (and hopefully final) foray into the magical land of theory-crafting as it relates to securing our home networks against the Internet of Things.
Subject: General Tech | May 5, 2016 - 12:19 PM | Jeremy Hellstrom
Tagged: openwrt, LEDE, networking
The Rebel scum known as the LEDE Project have broken away from the OpenWRT project in an unannounced move meant to increase transparency. Jokes aside, The Register named seven of the developers who are part of this fork, a not uncommon practice in open source projects. LEDE aims to bring fresh enthusiasm to a Linux project which has been losing the interest of programmers, perhaps due to the lack of transparency they cite, or possibly just due to waning interest in a long-running project. Pop on over to their page to see their mission statement, rules, and processes if you are interested in how they compare to OpenWRT.
"The LEDE Project – Linux Embedded Development Environment – describes itself as a breakaway project that wants to overcome what it sees as faults in OpenWRT."
Here is some more Tech News from around the web:
- HTC sets up new company; may spin off VR business unit @ DigiTimes
- Cisco: Whoops, hackers can commandeer your TelePresence boxes with an evil HTTP poke @ The Register
- Apple patches Xcode dirty git implementation @ The Inquirer
- 'Apple Stole My Music. No, Seriously' @ Slashdot
- Acer to release more ultra-thin notebooks in September @ DigiTimes
- Medical Equipment Crashes During Heart Procedure Because Of Antivirus Scan @ Slashdot
Subject: General Tech | December 1, 2015 - 07:30 PM | Tim Verry
Tagged: networking, cable tv, cable isp
A bit before the week of Thanksgiving and Black Friday, I came across a pair of interesting articles (linked below) over at DSL Reports with some telling figures on the state of broadband and cable TV. While cable companies continue to rule the roost on the ISP subscriber side of things, they are also steadily bleeding cable TV subscribers. According to the numbers (which come from Leichtman Research), the third quarter of 2015 was simultaneously the worst quarter ever for the telcos, which lost both internet and cable TV subscribers, and the best quarter for the cable companies (i.e. the smallest cable TV losses) since 2006.
On the broadband side of things, of the top seventeen providers Leichtman Research provided numbers for, cable companies brought in 787,629 new subscribers while the telephone companies lost 143,338 of their subscribers (likely customers on older forlorn CO-fed DSL tech). Cable companies are maintaining a healthy lead in total subscribers as well at approximately 54 million versus 25 million telco subscribers.
(Table: broadband subscribers year to date and net subscriber gains/losses in Q3, by provider.)
Not too bad considering all the bad press the cable companies have brought upon themselves with, for example, Comcast rolling out 300GB caps across the US and their notorious (or should I say infamous) customer support departments. Somehow only CableOne and WOW lost subscribers in Q3.
At the end of Q3'15 there were 94 million cable television subscribers shared among the 12 top providers (eight cable, two satellite, and two telco). Collectively, the companies lost 190,693 TV subscribers this quarter, an increased loss year over year as well (155,000 lost in Q3'14). It should be noted that if Dish's Sling TV subscriber numbers are not taken into account, the decrease in pay TV subscribers comes to 345,000.
(Table: pay TV subscribers and net subscriber gains/losses in Q3, by provider.)
The cable companies lost 144,693 TV subscribers in Q3, an improvement in that it is the fewest subscribers lost since 2006; in the same quarter last year, for example, the cable companies lost 440,000. Comparatively, the telephone companies lost only 49,000 TV subscribers, but it was their worst quarter yet for TV losses. Charter, DirecTV, and Verizon were the only three of the listed companies to actually pick up subscribers this quarter while everyone else lost them.
What do you think about the numbers? Will the cable behemoths continue to be the dominant source of internet access in the US? Will traditional cable/pay TV ever make a comeback, and if not, just how many subscribers will these providers have to lose before they embrace new models that support à la carte viewing and even cord cutting/streaming-only customers?
Subject: General Tech, Networking | January 6, 2015 - 07:30 AM | Scott Michaud
Tagged: tp-link, powerline networking, networking, ces 2015, CES
Powerline networks are not the most popular, especially with advancements in wireless technology, but they are still being actively developed. TP-LINK specifically mentions a few use cases: going through cement or certain soundproof walls, going across metal beams and studs, and going further than is practical under FCC broadcast power limits.
Today at CES, TP-LINK has announced the TL-PA8030 AV1200 Gigabit Powerline networking adapter. This product differentiates itself from previous offerings with “HomePlug AV2 MIMO”; MIMO is an acronym normally applied to wireless technology with multiple antennas. It means basically the same thing in this case, because the adapter uses all three prongs.
Basically, an electrical socket has two main prongs, one of which carries an alternating voltage that works out to roughly 115V RMS over a cycle (relative to the other prong). When that wire is connected to the second prong, held at whatever is considered “neutral” voltage, the drop (or rise) in voltage creates an electrical current. A third prong, held at the ground's voltage, carries away any excess buildup from static, wires shorted to the case, and so forth.
For this product, this means that one connection shares a circuit with a high-voltage, 60Hz signal, and the other is mixed with ground noise. Keep in mind, the alternative to powerline networking is broadcasting on unlicensed wireless spectrum, so humanity is not afraid to push a signal through some nasty noise. Still, it is worth stopping to think about what these engineers have accomplished: sending two signals down two really nasty (in different ways) circuits and combining them for increased performance with multiple devices.
That out of the way, the specifications themselves are brief: three Gigabit Ethernet ports sharing a 1.2 Gbps (total) powerline link over the AC wiring. It is backwards compatible with older TP-LINK HomePlug AV adapters (AV1000, AV600, AV500, AV200, and of course other AV1200 units).
No pricing information, but TP-LINK is targeting Q3 2015 for this AV1200.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Networking | October 11, 2014 - 01:42 AM | Tim Verry
Tagged: sdn, nfv, networking, Hierofalcon, arm, amd
AMD, in cooperation with Aricent and Mentor Graphics, recently demonstrated the first ARM-based Network Functions Virtualization (NFV) solution at ARM TechCon. The demonstration employed AMD's Embedded R-Series "Hierofalcon" SoC virtualizing a Mobile Packet Core running subscriber calls. The 64-bit ARM chip is now sampling to customers and will be generally available in the first half of next year (1H 2015). The AMD NFV Reference Solution is aimed at telecoms for use in communications network backbones where AMD believes an ARM solution will offer reduced costs (both initial and operational) and increased network bandwidth.
The NFV demonstration of the Mobile Packet Core entailed virtualizing Packet Data Network Gateway, Serving Gateway, Mobility Management Entity, and virtualized Evolved Packet Core (vEPC) applications. AMD further demonstrated live traffic migration between the ARM-based Embedded R-Series and x86-based second generation R-Series APU solutions. NFV is related to, but independent of, software defined networking (SDN). Network Functions Virtualization is essentially the virtualization of network appliances: taking appliances built for specific functions and performing those functions in software on generic servers. For example, NFV can virtualize firewalls, gateways, load balancers, intrusion detection, DNS, NAT, and caching functions. NFV virtualizes the upper networking layers (layers 4-7) and can allow virtual tunnels through a network that are then assigned functions (such as those listed above) on a per-VM or per-flow basis. NFV eliminates the need for specialized hardware appliances by running these functions on generic servers, which have traditionally been exclusively x86 based. AMD is hoping to push ARM (and its own ARM-based SoCs) into this market by touting further reduced capital expenditure and operational costs versus x86 (and, in turn, versus specialized hardware that serves the entire network, whereas NFV can be more exactly provisioned).
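To make the idea concrete, here is a toy sketch of network functions as software, with each flow assigned its own chain of functions running as plain code on a generic server. The function bodies and flow names are purely illustrative, not anything from AMD's reference stack:

```python
# Minimal service-chaining sketch: packets are dicts, network functions
# are plain Python functions, and each flow gets its own chain of them.

def firewall(pkt):
    # Drop anything destined for telnet (port 23), pass the rest.
    return None if pkt["dport"] == 23 else pkt

def nat(pkt):
    # Rewrite the private source address to a public one (illustrative).
    return dict(pkt, src="198.51.100.7")

# Per-flow function assignment, as described above.
CHAINS = {
    "tenant-a": [firewall, nat],   # full chain
    "tenant-b": [firewall],        # firewall only
}

def process(flow_id, pkt):
    for fn in CHAINS[flow_id]:
        pkt = fn(pkt)
        if pkt is None:
            return None  # dropped by a function in the chain
    return pkt

print(process("tenant-a", {"src": "10.0.0.5", "dport": 80}))
# web traffic comes back NATed; telnet traffic would return None
```

The point of the real thing is the same shape at scale: swapping a fixed-function appliance for software that can be provisioned per flow on whatever servers (x86 or, as AMD hopes, ARM) are available.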
It is an interesting take on a lucrative networking market, one dealing with 1.4 zettabytes of global IP traffic per year. I'm interested to see if the telecoms and other enterprise network customers will bite and give AMD a slice of this pie on the low-end and low-power fronts.
AMD "Hierofalcon" Embedded R Series SoC
Hierofalcon is the code name for AMD's 64-bit SoC with ARM CPU cores, intended for the embedded market. The SoC is a 15W to 30W chip featuring up to eight ARM Cortex-A57 CPU cores capable of hitting 2GHz, two 64-bit ECC-capable DDR3 or DDR4 memory channels, 10Gb Ethernet, PCI-E 3.0, ARM TrustZone, and a cryptographic security co-processor. The TechCon demonstration was also used to launch the AMD NFV Reference Solution, which is compliant with the OpenDataPlane platform. The reference platform includes a networking software stack from Aricent plus an embedded Linux OS and software tools (Sourcery CodeBench) from Mentor Graphics. The OpenDataPlane demonstration featured the above-mentioned Evolved Packet Core application on the Hierofalcon 64-bit ARM SoC. Additionally, the x86-based R-Series APU, OpenStack, and the Data Plane Development Kit all make up part of the company's NFV reference solution.