Subject: General Tech | September 21, 2012 - 04:24 AM | Tim Verry
Tagged: isp, data cap, Comcast, bandwidth
Comcast’s 250GB-per-month data cap proved an unpopular but, at least in Comcast’s view, necessary evil. The company suspended enforcement of the cap earlier this year to reevaluate its data caps and how they affect users. That temporary freedom is not slated to last, however, as Comcast will be reinstating caps in the future (as soon as next year, by many reports). Currently, Comcast is testing a single 300GB cap across all tiers in Nashville, Tennessee, and on October 1st it will begin trialing another data cap strategy in Tucson, Arizona.
In Tucson, Comcast will be mixing things up a bit by pairing the higher (faster) tiers of service with larger data caps. For example, the Blast tier (25/4 Mbps in many markets) will get an additional 50GB for a total cap of 350GB per month. The next-highest tier – Extreme 50 – will get a 450GB cap, and so on. This is a good thing because it allows the caps to scale with speed. Otherwise, the faster the speed tier, the worse a value it becomes, as it simply lets you burn through your data cap faster; scaling the caps with speed eliminates that problem. Interestingly, this seems to be the method Comcast is leaning towards, as a source speaking to DSL Reports stated that when Comcast reinstates caps nationwide, customers on higher tiers will have a 500GB cap while Performance tier users will receive only a 300GB cap. Specifically, the source stated that “faster speed tiers will see higher caps.”
Personally, if I have to endure caps, I would much rather have this scalable cap system than a one-size-fits-all number for every tier like the company is implementing in Nashville (and did in the past with its 250GB-per-month cap). It will be interesting to see exactly how Comcast scales the caps once they are official, and how often it will consider raising them as more and more services move "to the cloud."
[Table: Comcast Internet tiers with their data caps in Nashville, TN and Tucson, AZ, and the overage charge in both cities ($ per 50GB)]
Another bit of (surprising) news is that Comcast is being rather reasonable in its overage policy for customers who exceed the cap in the test markets. Customers who go over the cap in a month will be notified via both email and a webpage notification. Should the customer wish to continue using the Internet service, Comcast will provide additional 50GB blocks for $10 each, which works out to 20 cents per gigabyte. That is not bad at all, especially compared to the wireless data overage fees that customers have begrudgingly become accustomed to.
Even better, Comcast will give each customer up to three warnings before charging for additional data: the first three months in one year in which customers go over their caps incur no charge for additional data usage. After those three “courtesy notices,” it’s back to $10/50GB. The overage charge and the three-warning system apply to both test markets, which suggests they have a good chance of being implemented nationwide.
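To make the arithmetic concrete, here is a minimal sketch of how a monthly bill might be computed under this policy as I read it. The function name and structure are my own invention, not anything Comcast has published; it assumes overages are billed in whole 50GB blocks after the three free courtesy months.

```python
import math

def overage_charge(usage_gb, cap_gb, courtesy_used, block_gb=50, block_price=10):
    """Estimate a month's overage bill under the tested policy (a sketch,
    not Comcast's actual billing code).

    Assumes the first three over-cap months per year are free "courtesy
    notices," and that further overages bill in whole 50GB blocks at $10.
    Returns (charge_in_dollars, courtesy_notices_used_so_far).
    """
    over = usage_gb - cap_gb
    if over <= 0:
        return 0, courtesy_used          # under the cap: nothing owed
    if courtesy_used < 3:
        return 0, courtesy_used + 1      # courtesy notice, no charge
    blocks = math.ceil(over / block_gb)  # partial blocks round up
    return blocks * block_price, courtesy_used

# 420GB used against a 300GB cap, courtesy notices exhausted:
# 120GB over -> three 50GB blocks -> $30
print(overage_charge(420, 300, 3))
```

At $10 per 50GB block, the effective rate works out to the 20 cents per gigabyte mentioned above, though rounding up to whole blocks means light overages cost a bit more per gigabyte.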
At least in the test markets, Comcast is being much more generous than it has been in the past. I’m interested to see what the cable giant actually ends up doing once it puts caps in place around the nation; specifically, how standardized the caps and overage charges are across all of the markets, and whether it will be more aggressive in areas where it has a monopoly and customers cannot fall back on AT&T’s U-Verse or Verizon’s FiOS service (among other options, though I don’t count satellite or dial-up, as those are not really competitive with wired broadband). Right now, though, I think Comcast is moving towards a system that is an acceptable compromise between customer freedom and its business interests. [Please don’t mess this up, Comcast.]
What are your thoughts? Do you think the proposed caps are fair? I concede that data caps suck, and it is definitely possible to exceed them with legitimate services, but it does look as though caps are here to stay. Here's hoping Comcast remains at least as reasonable with the entire US as it is with its test markets.
Image courtesy Chauncey Davis via Flickr Creative Commons. Thank you.
NCSU Researchers Tweak Core Prefetching And Bandwidth Allocation to Boost Multi-Core Performance By 40%
Subject: Processors | May 27, 2011 - 11:26 AM | Tim Verry
Tagged: processor, multi-core, efficiency, bandwidth, algorithm
With the clock speed arms race now behind us, the world has turned to increases in the number of processor cores to boost performance. As more applications become multi-threaded, CPU core counts have become even more important. In the consumer space, quad- and hexa-core chips are rather popular in the enthusiast segment. On the server side, eight-core chips provide extreme levels of performance.
In most multi-core processors, each CPU core has access to its own cache (Intel’s current-generation chips actually have three levels of cache, with the third level shared between all cores; that caching hierarchy, however, is beyond the scope of this article). This cache is extremely fast and keeps the processing core(s) fed with data, which the processor then runs through its assembly-line-esque instruction pipeline(s). The cache is populated through a method called “prefetching,” which pulls data belonging to running applications from RAM, using mathematical algorithms to predict what the processor is likely to need next. Unfortunately, while these predictive algorithms are usually correct, they sometimes make mistakes; the processor is then not fed with data from the cache and must look for it elsewhere. These instances, called stalls, can severely degrade core performance, as the processor must reach out past the cache into system memory (RAM) or, worse, the even slower hard drive to find the data it needs. When the processor reaches beyond its on-die cache, it must use the system bus to query the RAM, and this processor-to-RAM bus, while faster than reading from a disk drive, is much slower than the cache. Further, processors have a limited amount of bandwidth between the CPU and the RAM, and as the number of cores increases, the share of that bandwidth each core can use is greatly reduced.
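The effect of prefetching on stalls can be illustrated with a deliberately simplified toy model (this is my own sketch, not any vendor's actual prefetcher): a sequential "next-block" predictor that, on every access to block n, speculatively pulls block n+1 into a tiny cache. Accesses that miss the cache count as stalls.

```python
from collections import deque

def run(accesses, cache_size=4, prefetch=True):
    """Count stalls for a stream of block accesses against a tiny cache.

    Toy model only: the cache is a FIFO of recently seen blocks, and the
    "prefetcher" naively predicts that block n+1 follows block n.
    """
    cache = deque(maxlen=cache_size)  # oldest entries evicted first
    stalls = 0
    for block in accesses:
        if block not in cache:
            stalls += 1               # miss: must fetch from slower RAM
            cache.append(block)
        if prefetch and block + 1 not in cache:
            cache.append(block + 1)   # speculatively pull the next block
    return stalls

sequential = list(range(8))
print(run(sequential, prefetch=True))   # 1 stall: only the first access misses
print(run(sequential, prefetch=False))  # 8 stalls: every access misses
```

For a sequential workload the prediction is always right, so nearly every stall disappears; a random access pattern would instead waste bus bandwidth on useless prefetches, which is exactly the trade-off the NC State work targets.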
The layout of a current Sandy Bridge Intel processor. Note the Cache and Memory I/O.
A team of researchers at North Carolina State University has been studying the above-mentioned issues, which are inherent in multi-core processors. The team is part of NC State’s Department of Electrical and Computer Engineering, includes Fang Liu and Yan Solihin, and was funded in part by the National Science Foundation. In a paper concluding their research, to be presented June 9th, 2011 at the International Conference on Measurement and Modeling of Computer Systems, they detail two methods for improving upon current bandwidth allocation and cache prefetching implementations.
Dr. Yan Solihin, associate professor and co-author of the paper in question, stated that certain processor cores require more bandwidth than others; therefore, by dynamically monitoring the type and amount of data being requested by each core, the available bandwidth can be prioritized on a per-core basis. Solihin further stated that “by better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance.”
Further, the researchers analyzed data from the processors’ hardware counters and constructed a set of criteria that improve efficiency by dynamically turning prefetching on and off on a per-core basis; toggling prefetching this way frees additional bandwidth for the cores that need it. By implementing both methods, the research team was able to improve multi-core performance by as much as 40 percent versus chips that do not prefetch data, and by 10 percent versus multi-core processors whose cores do prefetch data.
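The general shape of such a scheme can be sketched as follows. To be clear, the paper's actual criteria are not public as of this writing, so the thresholds, counter names, and allocation rule below are all hypothetical placeholders: disable prefetching on cores whose prefetches rarely get used, and split bus bandwidth in proportion to each core's demand (miss count).

```python
def retune(cores, accuracy_floor=0.5, total_bw=100.0):
    """Hypothetical per-core controller run periodically by hardware/firmware.

    Each core dict carries counter readings from the last interval:
    prefetches_issued, prefetches_used, and misses. This is a sketch of
    the general idea, not the NC State team's published criteria.
    """
    for c in cores:
        issued = max(c["prefetches_issued"], 1)   # avoid divide-by-zero
        # Keep prefetching only where predictions mostly pan out.
        c["prefetch_on"] = c["prefetches_used"] / issued >= accuracy_floor
    # Partition bus bandwidth in proportion to each core's miss traffic.
    demand = sum(c["misses"] for c in cores) or 1
    for c in cores:
        c["bw_share"] = total_bw * c["misses"] / demand
    return cores

cores = [
    {"prefetches_issued": 100, "prefetches_used": 80, "misses": 30},  # accurate, busy
    {"prefetches_issued": 100, "prefetches_used": 10, "misses": 10},  # wasteful, idle
]
retune(cores)  # core 0 keeps prefetching and gets 75% of the bus; core 1 is throttled
```

The appeal of this style of control loop is that it needs only counters processors already expose, which may explain why the reported gains come without new cache hardware.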
The researchers detail their findings in the paper, titled “Studying the Impact of Hardware Prefetching and Bandwidth Partitioning In Chip-Multiprocessors,” which will be publicly available on June 9th. It will be extremely interesting to analyze the exact algorithms and criteria they determined will decrease the number of processor stalls and increase bandwidth efficiency, and to see whether any of these improvements make it into future Intel or AMD chips.