Subject: General Tech, Graphics Cards | March 11, 2014 - 09:06 PM | Ryan Shrout
Tagged: nvidia, gtx 750 ti, giveaway, geforce, contest
When NVIDIA launched the GeForce GTX 750 Ti last month, it convinced us to give this highly efficient graphics card a chance to upgrade some off-the-shelf, underpowered PCs. In a story we published just a week ago, we converted three pretty basic and pretty boring computers into impressive gaming PCs by adding the $150 Maxwell-based graphics card.
If you missed the video we did on the upgrade process and results, check it out here.
Now we are going to give our readers the chance to do the same thing to their PCs. Do you have a computer in your home that is just not up to the task of playing the latest PC games? Then this contest is right up your alley.
Prizes: 1 of 5 GeForce GTX 750 Ti Graphics Cards
Your Task: You are going to have to do a couple of things to win one of these cards in our "Upgrade Story Giveaway." We want to make sure these cards go to those of you who can really use them, so here is what we are asking for (you can find the form to fill out right here):
- Show us your PC that is in need of an upgrade! Take a picture of your machine with this contest page on the screen or something similar and share it with us. You can use Imgur.com to upload your photo if you need some place to put it. An inside shot would be good as well. Place the URL for your image in the appropriate field in the form below.
- Show us your processor and integrated graphics that need some help! That means you can use a program like CPU-Z to view the processor in your system and then GPU-Z to show us the graphics setup. Take a screenshot of both of these programs so we can see what hardware you have that needs more power for PC gaming! Place the URL for that image in the correct field below.
- Give us your name and email address so we can contact you for more information if you win!
- Leave a comment below to let us know why you think you should win!!
- Subscribing to our PCPer Live! mailing list or even our PCPer YouTube channel wouldn't hurt either...
That's pretty much it! We'll run this promotion for two weeks, which should give you plenty of time to get your entry in.
Join us on March 11th at 9pm ET / 6pm PT for a LIVE Titanfall Game Stream! You can find us at http://www.pcper.com/live. You can subscribe to our mailing list to be alerted whenever we have a live event!!
We canceled the event due to the instability of Titanfall servers. We'll reschedule soon!!
With the release of Respawn's Titanfall upon us, many potential PC gamers will be looking for suggestions on a parts list targeted at a perfect Titanfall experience. The good news is that, even with a fairly low investment in PC hardware, gamers will find the PC version of this title is definitely the premier way to play, as the compute power of the Xbox One just can't compete.
In this story we'll present three different build suggestions, each addressing a different target resolution at better image quality settings than the Xbox One can offer. We have options for 1080p (the best the Xbox One can manage), 2560x1440, and even 3840x2160, better known as 4K. In truth, the graphics horsepower required by Titanfall isn't overly extreme, so an entire PC build coming in under $800, including a full copy of Windows 8.1, is easy to accomplish.
Target 1: 1920x1080
First up is old reliable, the 1920x1080 resolution that most gamers still run on their primary gaming display. That could be a home theater PC hooked up to a TV, or a desktop with a monitor at sizes up to 27-in. Here is our build suggestion, followed by our explanations.
| Component | Titanfall 1080p Build |
|---|---|
| Processor | Intel Core i3-4330 - $137 |
| Motherboard | MSI H87-G43 - $96 |
| Memory | Corsair Vengeance LP 8GB 1600 MHz (2 x 4GB) - $89 |
| Graphics Card | EVGA GeForce GTX 750 Ti - $179 |
| Storage | Western Digital Blue 1TB - $59 |
| Case | Corsair 200R ATX Mid Tower Case - $72 |
| Power Supply | Corsair CX 500 watt - $49 |
| OS | Windows 8.1 OEM - $96 |
| Total Price | $781 - Amazon Full Cart |
Our first build comes in at $781 and includes some incredibly competent gaming hardware for that price. The Intel Core i3-4330 is a dual-core, HyperThreaded processor that provides more than enough capability to push Titanfall and every other major PC game on the market. The MSI H87 motherboard lacks some of the advanced features of the Z87 platform but does the job at a lower cost. 8GB of Corsair memory, though not running at a high clock speed, provides more than enough capacity for all the programs and applications you could want to run.
Subject: Graphics Cards | March 13, 2014 - 10:52 AM | Ryan Shrout
Tagged: radeon, amd
This morning I had an interesting delivery on my doorstep.
The only thing inside was an envelope stamped TOP SECRET containing this photo. Courtesy of AMD's PR department, the hashtag #2betterthan1 adorned the back of the picture.
This original photo is from, like... 2004. Nice, very nice, AMD.
With all the rumors circling around the release of a new dual-GPU graphics card based on Hawaii, code-named 'Vesuvius', it seems that AMD is stepping up the viral marketing campaign a bit early. A single card with two R9 290X GPUs sounds crazy given their high power consumption, but maybe AMD has been holding back its best, most power-efficient GPUs for just such a release.
What do you think? Can AMD make a dual-GPU Hawaii card happen? How will this affect or be affected by the GPU shortages and price hikes still plaguing the R9 290 and R9 290X? How much would you be willing to PAY for something like this?
When the Radeon R9 290 and R9 290X first launched last year, they were plagued by overheating and variable clock speeds. We looked at the situation several times over the course of a couple of months, and AMD tried to address the problem with newer drivers. Those drivers did help stabilize clock speeds (and thus performance) of the reference-built R9 290 and R9 290X cards, but they caused noise levels to increase as well.
The real solution was the release of custom-cooled versions of the R9 290 and R9 290X from AMD partners like ASUS, MSI, and others. The ASUS R9 290X DirectCU II model, for example, ran cooler, quieter, and more consistently than any of the numerous reference models we had our hands on.
But what about all those buyers who are still purchasing, or have already purchased, reference-style R9 290 and 290X cards? Replacing the cooler on the card is the best option, and thanks to our friends at NZXT we have a unique solution that pairs a standard self-contained water cooler meant for CPUs with a custom-built GPU bracket.
Our quick test will utilize one of the reference R9 290 cards AMD sent along at launch and two specific NZXT products. The Kraken X40 is a standard self-contained CPU water cooler that sells for $100 on Amazon.com. For our purposes, though, we are going to team it up with the Kraken G10, a $30 GPU-specific bracket that allows you to use the X40 (and other water coolers) on the Radeon R9 290.
Inside the box of the G10 you'll find an 80mm fan, a back plate, the bracket that attaches the cooler to the GPU, and all necessary installation hardware. The G10 supports a wide range of GPUs, though it is targeted toward the reference design of each:
NVIDIA: GTX 780 Ti, 780, 770, 760, Titan, 680, 670, 660 Ti, 660, 580, 570, 560 Ti, 560, 560 SE
AMD: R9 290X, 290, 280X*, 280*, 270X, 270, HD 7970*, 7950*, 7870, 7850, 6970, 6950, 6870, 6850, 6790, 6770, 5870, 5850, 5830
That is a pretty impressive list, but NZXT cautions that custom board designs may interfere with mounting.
The installation process begins with removing the original cooler, which in this case just means a lot of small screws. Be careful when removing the screws on the heatsink retention bracket itself: alternate between screws so the bracket comes off evenly.
Maxwell and Kepler and...Fermi?
Covering the landscape of mobile GPUs can be a harrowing experience. Brands, specifications, performance, features and architectures can all vary from product to product, even inside the same family. Rebranding is rampant from both AMD and NVIDIA and, in general, we are met with one of the most confusing segments of the PC hardware market.
Today, with the release of the GeForce GTX 800M series from NVIDIA, we are getting all of the above in one form or another. We will also see performance improvements and the introduction of the new Maxwell architecture (in a few parts at least). Along with the GeForce GTX 800M parts, you will also find the GeForce 840M, 830M and 820M offerings at lower performance, wattage and price levels.
With some new hardware comes a collection of new software for mobile users, including the innovative Battery Boost that can increase unplugged gaming time by using frame rate limiting and other "magic" bits that NVIDIA isn't talking about yet. ShadowPlay and GameStream also find their way to mobile GeForce users as well.
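NVIDIA isn't saying exactly how Battery Boost works under the hood, but the frame rate limiting piece is a well-understood technique. As a minimal, purely illustrative sketch (my own C++, not NVIDIA's code), a frame limiter caps the render loop at a target frame rate so the GPU and CPU can idle between frames instead of racing ahead:

```cpp
// Sketch of a simple frame-rate limiter: render, then sleep until the
// next frame deadline so the hardware can drop to low-power states.
#include <chrono>
#include <thread>

void RunCappedLoop(int targetFps, bool& keepRunning) {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(1'000'000 / targetFps);
    auto nextDeadline = clock::now() + frameBudget;
    while (keepRunning) {
        // RenderFrame();  // hypothetical game render call
        std::this_thread::sleep_until(nextDeadline);  // idle time = saved power
        nextDeadline += frameBudget;
    }
}
```

Capping at, say, 30 FPS instead of letting the GPU push 60+ roughly halves the rendering work per second, which is presumably where much of the battery savings comes from.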
Let's take a quick look at the new hardware specifications.
| | GTX 880M | GTX 780M | GTX 870M | GTX 770M |
|---|---|---|---|---|
| GPU Code name | Kepler | Kepler | Kepler | Kepler |
| Rated Clock | 954 MHz | 823 MHz | 941 MHz | 811 MHz |
| Memory | Up to 4GB | Up to 4GB | Up to 3GB | Up to 3GB |
| Memory Clock | 5000 MHz | 5000 MHz | 5000 MHz | 4000 MHz |
Both the GTX 880M and the GTX 870M are based on Kepler, keeping the same basic feature set and hardware capabilities as their brethren in the GTX 700M line. However, while the GTX 880M has the same CUDA core count as the 780M, the same cannot be said of the GTX 870M: moving from the GTX 770M to the 870M brings a significant 40% increase in core count as well as a jump in clock speed from 811 MHz (plus Boost) to 941 MHz.
Subject: General Tech, Processors, Chipsets | March 13, 2014 - 03:35 AM | Scott Michaud
Tagged: Intel, Haswell-E, X99
Though Ivy Bridge-E is not too distant a memory, Haswell-E is already on the horizon. The enthusiast version of Intel's architecture will come with a new motherboard chipset, the X99. (As an aside: what do you think its eventual successor will be called?) WCCFTech got their hands on details outlining the platform, albeit some that have been kicking around for a few months.
Image Credit: WCCFTech
First and foremost, Haswell-E (and X99) will support DDR4 memory. DDR4's main benefits are increased bandwidth and a decreased operating voltage (1.2 V nominal, down from DDR3's 1.5 V), which at the same current means lower wattage. The platform will support four memory channels.
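To put a rough number on that power claim (a back-of-the-envelope calculation using the JEDEC nominal voltages, and assuming equal current draw):

```latex
P = VI \quad\Rightarrow\quad
\frac{P_{\mathrm{DDR4}}}{P_{\mathrm{DDR3}}}
  = \frac{1.2\,\mathrm{V} \cdot I}{1.5\,\mathrm{V} \cdot I}
  = 0.8
```

That works out to roughly 20% less power for the memory subsystem, before any frequency or bandwidth differences are considered.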
Haswell-E will continue to have 40 PCIe lanes (the user's choice of five x8 slots, or two x16 slots plus one x8). This is the same total lane count as Sandy Bridge-E and Ivy Bridge-E; while LGA 2011-3 is not compatible with LGA 2011, it does share that aspect.
X99 does significantly increase the number of SATA ports, to ten SATA 6Gbps (up from two SATA 6Gbps and four SATA 3Gbps). Intel Rapid Storage Technology (RST), Smart Response Technology, and Rapid Recover Technology are also present and accounted for. The chipset also supports six native USB 3.0 ports and an additional eight USB 2.0 ports.
Intel's Haswell-E and X99 are expected to launch sometime in Q3 2014.
Subject: Graphics Cards | March 14, 2014 - 10:17 PM | Ryan Shrout
Tagged: amd, radeon, R9 290X, r9 290, r9 280x, r9 280
While sitting on the couch watching some college basketball I decided to start browsing Amazon.com and Newegg.com for some Radeon R9 graphics cards. With all of the stock and availability issues AMD has had recently, this is a more frequent occurrence for me than I would like to admit. Somewhat surprisingly, things appear to be improving for AMD at the high end of the product stack. Take a look at what I found.
| Card | Amazon.com | Newegg.com |
|---|---|---|
| ASUS Radeon R9 290X DirectCU II | $599 | - |
| Visiontek R9 290X | $599 | - |
| XFX R9 290X Double D | $619 | - |
| ASUS R9 290 DirectCU II | $499 | - |
| XFX R9 290 Double D | $499 | - |
| MSI R9 290 Gaming | $555 | $469 |
| PowerColor TurboDuo AXR9 280X | - | $329 |
| Visiontek R9 280X | $370 | $349 |
| XFX R9 280 Double D | - | $289 |
| Sapphire Dual-X R9 280 | - | $299 |
| Sapphire R7 265 | $184 | $149 |
It's not perfect, but it's better. I was able to find two R9 290X cards at $599, just $50 over the expected selling price of $549. The XFX R9 290X Double D at $619 is pretty close as well. The least expensive R9 290 I found was $469, though others remain about $100 over the suggested price. In reality, having the R9 290 and R9 290X only $100 apart, as opposed to the $150 gap AMD's suggested pricing implies, is more realistic given how close the two SKUs are in performance.
Stepping a bit lower, the R9 280X (essentially the same GPU as the HD 7970 GHz Edition) can be found for $329 and $349 on Newegg. Those prices are just $30-50 over the suggested pricing! The brand new R9 280, similar in specs to the HD 7950, is starting to show up at $289 and $299, only $10-20 over what AMD told us to expect.
Finally, though not really a high-end card, I did see the R7 265 showing up at both Amazon.com and Newegg.com for only the second time since its announcement in February. For budget 1080p gamers, if you can find it, this could be the best card to pick up.
What deals are you finding online? If you have one worth adding here, let me know! Are the availability problems and high prices on AMD GPUs finally behind us??
So Many MHz, So Little Time...
If you've looked at memory for your system lately, you've likely noticed a couple of things. First, memory prices have held steady for the past few months but are still nearly double what they were a little over a year ago. Second, now that DDR3 has been a mature standard for years, there is a vast selection of RAM from many vendors, all with nearly identical specs. DDR3 has settled at a standard 1600MHz, and most desktop memory is programmed for this speed. Granted, many modules run at overclocked speeds, and some carry pretty outlandish numbers, too - and it's one of those kits that we take a look at today.
Hardly subtle, the Kingston HyperX 'Predator' dual-channel kit under review today is clocked a ridiculous 1066MHz over the 1600MHz standard. That's right: this is 2666MHz memory! It seems like such a big jump would have to provide increased system performance across the board, and that's exactly what we're going to find out.
We all want to get the most out of any component, and finding the best option at a given price is part of planning any new build or upgrade. While every core part is sold at a particular speed, and most can be overclocked, there are still qualifying factors that make selecting the fastest part for your budget a little more complicated. Speed isn't based on MHz alone - just as with processors, where performance often comes down to core count, how many instructions per clock a given CPU can churn out, and so on.
Subject: General Tech | March 13, 2014 - 02:58 PM | Ken Addison
Tagged: podcast, video, evga, gtx 780, 780 ACX, acx, titanfall, 750 ti, nvidia, 800m, 860m, razer
PC Perspective Podcast #291 - 03/13/2014
Join us this week as we discuss the EVGA GTX 780 ACX, Building a PC for Titanfall, NVIDIA's 800m GPU Lineup and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano, and Morry Teitelman
Week in Review:
0:41:35 This podcast is brought to you by Cooler Master and the CM Storm Pulse-R Gaming Headset
News items of interest:
Hardware/Software Picks of the Week:
Allyn: Leaving in the middle of podcasts
Subject: General Tech | March 16, 2014 - 05:48 PM | Ryan Shrout
Tagged: march madness, contest
Well, it was a heartbreaking game for my University of Kentucky Wildcats, but I'm looking forward to the NCAA Men's College Basketball Tournament all the more! In the past we have run PC Perspective bracket competitions, and I had a couple of requests to do the same for 2014.
If you are interested in joining the gang at PC Perspective for this year's March Madness bracket challenge, all you have to do is visit our CBSSports.com page and register an account. You can then make your selections and see how well you predict the college basketball landscape for the tournament.
- Visit http://www.cbssports.com/fantasy/universal
- Select "Mayhem" as the sport
- Input "pcper2014" as the league abbreviation
- Password: "gobigblue"
- Sign up for a spot!
- Pick your bracket after the selection show tonight!
- Win some stuff!
For the winners we are going to offer up some extra hardware we have here at the office, BUT I haven't put together the list just yet. I'm going to ask the other editors what we have that's not being used, and I'll update this post as soon as we come up with a final answer.
For now though, sign up, have fun and good luck!!
(Yes, you can enter from ANYWHERE in the world for this!)
Subject: Systems | March 12, 2014 - 10:36 AM | Ryan Shrout
Tagged: video, nuc, next unit of computing, Intel, d54250wykh
In September of 2013 we reviewed the updated Intel NUC device that implemented the latest Haswell architecture in the form of the Core i5-4250U processor. In the conclusion I wrote:
The Next Unit of Computing is meant to be a showcase for different form factors and implementations that Intel's architectures can reach and I think it accomplishes this goal quite well and should be a blueprint for other system integrators and embedded clients going forward. Enthusiasts and standard PC users will be able to adopt it too without feeling like they are leaving performance on the table which is impressive for this form factor.
At CES we first learned about the new D54250WYKH model and what it added: support for a 2.5-in HDD/SSD. While that isn't a drastic change, it does allow for more configuration options, including both mSATA and 2.5-in storage at once, with only a minimal increase in the size of the system.
Check out the video below for a quick overview of the H-variant of the Intel NUC!
Subject: Editorial, General Tech | March 11, 2014 - 10:15 PM | Scott Michaud
Tagged: valve, opengl, DirectX
Late last night, Valve released source code from their "ToGL" transition layer. This bundle of code sits between "[a] limited subset of Direct3D 9.0c" and OpenGL, translating engines designed for the former into calls to the latter. It was pulled out of the DOTA 2 source tree and published standalone... mostly. Basically, it is completely unsupported and probably will not even build without some other chunks of the Source engine.
Still, Valve did not need to release this code, but they did. A lot of open-source projects work like this: someone dumps a starting blob and, if it is sufficient, the community pokes and prods it into a self-sustaining entity. The real question is whether the code Valve provided is sufficient. As is often the case, time will tell. Either way, this is a good thing that other companies really should embrace: giving out your old code to further the collective. We are just not sure how good.
ToGL is available now at Valve's GitHub page under the permissive, non-copyleft MIT license.
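To make the idea of a transition layer concrete, here is a minimal sketch (my own illustrative C++, not Valve's actual code) of what such a layer does: the engine keeps calling D3D9-style entry points, and the layer maps each one onto the equivalent OpenGL state change.

```cpp
// Illustrative D3D9-to-OpenGL shim: the engine calls a familiar
// SetRenderState(), and the layer translates it to GL calls.
#include <GL/gl.h>

// Real values from d3d9types.h for the two states handled below.
enum D3DRENDERSTATETYPE {
    D3DRS_ZENABLE = 7,
    D3DRS_ALPHABLENDENABLE = 27,
};

class TranslatedDevice {  // hypothetical name, not from ToGL
public:
    void SetRenderState(D3DRENDERSTATETYPE state, unsigned long value) {
        switch (state) {
        case D3DRS_ZENABLE:           // depth testing on/off
            if (value) glEnable(GL_DEPTH_TEST);
            else       glDisable(GL_DEPTH_TEST);
            break;
        case D3DRS_ALPHABLENDENABLE:  // alpha blending on/off
            if (value) glEnable(GL_BLEND);
            else       glDisable(GL_BLEND);
            break;
        }
    }
};
```

The hard parts a layer like this has to tackle - shader translation, sampler and render-target semantics - are far messier than simple state toggles, which is part of why the release may not build without the rest of the Source engine around it.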
Subject: Editorial, General Tech | March 16, 2014 - 03:27 AM | Scott Michaud
Tagged: windows, mozilla, microsoft, Metro
If you use the Firefox browser on a PC, you are probably using its "Desktop" application. Mozilla also had a version for "Modern" Windows 8.x that could be used from the Start Screen. You probably did not use it, because fewer than 1,000 people per day did - more than four orders of magnitude fewer than the user count of Desktop's pre-release builds.
Yup, less than one-thousandth.
Jonathan Nightingale, VP of Firefox, stated that Mozilla would not be willing to release the product without committing to its future development and support. There was not enough interest to take on that burden, and it was not forecast to see a big uptick in adoption, either.
From what we can see, it's pretty flat.
Paul Thurrott of WinSupersite does not blame Mozilla for killing "Metro" Firefox. He acknowledges that they gave it a shot and did not see enough pre-release interest to warrant a product. He places some of the blame on Microsoft for the limitations it places on browsers (especially on Windows RT). In my opinion, this is just a symptom of the larger problem of Windows post-7. Hopefully, Microsoft can correct these problems and do so in a way that benefits their users (and society as a whole).
Subject: General Tech | March 12, 2014 - 09:23 PM | Tim Verry
Tagged: unreal engine 4, nvidia, gtx 700, GeForce 800M, game bundle, daylight
NVIDIA recently announced the launch of two new game bundles for purchasers of certain GeForce GTX desktop cards and GeForce 700M and 800M mobile series GPUs. The new bundles offer a redeemable code for the Unreal Engine 4-powered survival horror game DAYLIGHT to buyers of new desktop cards, or a total of $150 in in-game currency across three free-to-play titles to those buying a system with a new NVIDIA mobile GPU (the currency bundle is also available as an alternative to the DAYLIGHT bundle with desktop cards).
The DAYLIGHT game bundle is included with certain GeForce GTX 600 and 700-series desktop graphics cards. Users will get a redeemable code for a downloadable version of the game which can be activated on release day (April 8, 2014). Specifically, the eligible graphics cards for this bundle are as follows:
- GTX TITAN
- GTX 780 Ti
- GTX 780
- GTX 770
- GTX 760
- GTX 690
- GTX 680
- GTX 670
- GTX 660 Ti
- GTX 660
Alternatively, NVIDIA is offering the $150 (total) of in-game currency for three free-to-play games to users who purchase a notebook with a 700M or 800M mobile GPU, or as an alternative to the Daylight bundle when purchasing certain desktop GPUs. The bundle offers $50 of in-game currency each for Heroes of Newerth, Path of Exile, and Warface. Users who purchase a mobile GPU (700M or 800M series) or a GTX 750 Ti, GTX 750, GTX 650 Ti, or GTX 650 from a participating e-tailer or system builder will be able to get this bundle.
According to NVIDIA, both of the new game bundles are available now with cards and pre-built systems from Newegg, Amazon, Tiger Direct, NCIX, and other e-tailers, as well as nationwide system builders. NVIDIA has put together a full list of participating partners, along with further information, on its bundle information pages.
Subject: Systems | March 12, 2014 - 07:38 PM | Ryan Shrout
Tagged: video, SFF, projector, i3-4010u, gigabyte, bxpi3-4010, brix projector, brix
With more than a few NUC-sized SFF PCs floating around these days, the BRIX Projector, with the catchy model number BXPi3-4010, has something no other option can offer: an integrated mini projector. As the name implies, the BRIX Projector is part BRIX and part projector, and the combination is unique in the market as far as I can tell.
The guts of the BXPi3-4010 are split roughly in half between the computer components that make up the BRIX and the DLP LED projector that rests on top. The processor inside is a Core i3-4010U running at 1.7 GHz with integrated Intel HD 4400 graphics. With its dual-core, HyperThreaded design, the 4010U is competent, but nothing more, for standard application and productivity workloads. The HD 4400 graphics can run your most basic of games (think Peggle, FTL, Starbound) but isn't up to the task of more demanding 3D games like BioShock.
You'll get four USB 3.0 ports, a Gigabit Ethernet connection, a mini-DisplayPort, and an HDMI output. Combined with the projector, you can use any TWO displays at one time: projector plus HDMI, HDMI plus mDP, etc.
The mini-HDMI input is pretty interesting: it allows you to use the BRIX Projector as a standalone projector, hooking up a DVD player, game console, or anything else you want displayed. The power button on the projector is separate from the PC power, and you can run each without the other.
The unit comes as a barebones design, meaning you'll have to add mSATA storage and low-voltage DDR3 SO-DIMMs to get up and running. Once you have your OS installed, you'll be met with a rather small 854x480 projected resolution at a 75-lumen output. It's good, but not great.
That low resolution causes some issues with browsing the web and with applications like Steam, since we have all moved past the likes of 800x600 - thank goodness. Windows itself works fine, though, and Steam's Big Picture mode is an easy workaround.
You can see in the video review below that image quality was pretty good for such a small device, but the noise levels of the fan cooling the projector are quite high. I was even thinking of ripping it open and trying more creative ways of cooling the display components, until Gigabyte informed me they need it back in a...functional capacity. Oh well.
The Gigabyte BRIX Projector BXPi3-4010 is selling for about $550 on both Newegg.com and Amazon.com, which does NOT include the memory or storage you'll need (WiFi is included, though). That seems kind of steep, but considering that standalone pico and mini projectors can easily cost $250-350, this BRIX unit is a better deal than the price might first indicate.
Subject: General Tech | March 13, 2014 - 02:31 PM | Jeremy Hellstrom
Tagged: usb, charger, DIY
Over at Hack a Day is a guide on converting the old chargers from devices you no longer use into a useful charger with a USB plug. They will need to be of the 5V variety and provide at least 500 mA to be useful with today's gadgets, with 1A being preferable; don't go so high that you risk killing your device, though. Apple fanatics will have to do the usual modifications (specific resistances on the USB data lines) to convince their iThing to accept a charge, but most other devices won't care whether the charger is homemade; they just want the USB power. Do try not to set yourself or any of your possessions on fire: test your charger thoroughly before leaving it unattended.
"If you’re like us, you probably have a box (or more) of wall warts lurking in a closet or on a shelf somewhere. Depending on how long you’ve been collecting cell phones, that box is likely overflowing with 5V chargers: all with different connectors."
Here is some more Tech News from around the web:
- BB10's 'dated' crypto lets snoops squeeze the juice from your BlackBerry – researcher @ The Register
- Replicant OS Developers Find Backdoor In Samsung Galaxy Devices @ Slashdot
- Hackers can steal Whatsapp conversations due to Android security flaw @ The Inquirer
- How to Use the Super Fast i3 Tiling Window Manager on Linux @ Linux.com
- Seven Great Moments in World Wide Web History @ The Inquirer
- How to shop wisely for the IT department of the future @ The Tech Report
- Projector on a smartphone? There's a chip for that @ The Register
- Make an HD Projector for Next to Nothing! @ Hack a Day
- Win a Powerful ASUS R9 290 Graphics Card @ Kitguru
Introduction and Technical Specifications
Courtesy of ASUS
The ASUS Maximus VI Impact is ASUS' newest mini-ITX member of the Republic of Gamers (ROG) family. ASUS took design innovations from its Z77-based mITX board and added some ROG-specific touches to come up with a wholly unique entity. With an MSRP of $229, the Maximus VI Impact comes in at the higher end of the mITX price range, but with enough integrated features to more than justify the cost.
Courtesy of ASUS
Like the other ROG-branded Z87 releases, the Maximus VI Impact is designed with top-of-the-line power components. The board's digital power system centers on an 8+2 phase regulation design using 60 amp-rated BlackWing chokes, PowIRstage MOSFETs, and 10k-rated Black Metallic capacitors. To save space, the power components are mounted vertically on a PCB hard-attached to the right of the socket, while the sound components and wireless networking live on vertical, removable cards to the upper left of the CPU socket and integrated into the board's rear panel.
Subject: General Tech | March 14, 2014 - 03:01 PM | Jeremy Hellstrom
Tagged: microsoft, office, office 365, tablet
The newest member of Microsoft's cloudy version of the world's most common productivity software is called Office 365 Personal, and it will provide a single license that can be used on one PC or Mac and one tablet. The subscription will cost less than the current Office 365 Home Premium, which allows up to five devices access but only offers a version of Office dubbed Office Mobile for tablets and phones. This will not be the watered-down version of Office that ships with Windows RT on Surface, and while The Register was given some hints about what the new software will look like, we won't be seeing any demos until closer to the launch, which takes place this spring.
"Microsoft will soon debut a new formulation of its Office 365 subscription service aimed at individual consumers, the company said on Thursday, and in the process it hinted that new, touch-centric Office apps may be coming soon."
Here is some more Tech News from around the web:
- Google starts encrypting search data to protect users from NSA snooping @ The Inquirer
- Microsoft gives away Windows Phone 8 licences in India – report @ The Register
- HTC One 2 release date, specs, rumours and price @ The Inquirer
- VLC Player beta arrives for Windows 8 @ The Inquirer
- The real story behind Twin Galaxies @ Kitguru
Subject: General Tech | March 16, 2014 - 10:27 PM | Sebastian Peak
Tagged: supercomputer, solid state drive, NSF, flash memory
We know that SSDs help any system perform better by reducing the storage bottlenecks we all experienced with hard disk drives. But how far can flash storage go in increasing performance if money is no object?? Enter the multi-million dollar world of supercomputers. Historically, supercomputers have relied on adding more CPU cores to increase performance, but two new system projects funded by the National Science Foundation (NSF) will try a different approach: obscene amounts of high-speed flash storage!
The news comes as the NSF is requesting a cool $7 billion in research money for 2015, and construction has apparently already begun on two new storage-centered supercomputers. Memory and high-speed flash storage arrays will be loaded on the Wrangler supercomputer at Texas Advanced Computing Center (TACC), and the Comet supercomputer at the San Diego Supercomputer Center (SDSC).
Check out the crazy numbers from TACC's Wrangler: 120 servers, each with Haswell-based Xeon CPUs, and a total of 10 petabytes (10,000TB!) of high-performance flash data storage. The NSF says the supercomputer will have 3,000 processing cores dedicated to data analysis, with flash storage layers for analytics. Wrangler's bandwidth is said to be 1TB/s, with 275 million IOPS! By comparison, the Comet supercomputer will have "only" 1,024 Xeon CPU cores, with a 7 petabyte high-speed flash storage array. (Come on, guys... That's like, wayyy less bytes.)
Supercomputer under construction…probably (Image credit CBS/Paramount)
The supercomputers are part of the NSF's "Extreme Digital" (XD) research program, whose current priorities are "relevant to the problems faced in computing today". Hmm, kind of makes you want to run a big multi-SSD deathwish RAID, huh?
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM | Scott Michaud
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC
Today, on day one of Game Developers Conference 2014, the Khronos Group officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices in applications developed with OpenGL ES, especially when the developer does not want to pull in OpenCL.
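For a sense of what that unlocks, here is a minimal sketch (my own C++ against the standard GLES 3.1 headers, assuming an ES 3.1 context is already current) of a compute shader that doubles every value in a buffer - the kind of GPGPU task that previously required OpenCL or vendor extensions:

```cpp
// Minimal OpenGL ES 3.1 compute example: build a tiny compute program
// and dispatch it over a shader storage buffer (error checks omitted).
#include <GLES3/gl31.h>

static const char* kComputeSrc =
    "#version 310 es\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Data { float values[]; };\n"
    "void main() {\n"
    "    values[gl_GlobalInvocationID.x] *= 2.0;\n"
    "}\n";

GLuint BuildComputeProgram() {
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &kComputeSrc, nullptr);
    glCompileShader(shader);
    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glDeleteShader(shader);
    return program;
}

void DoubleBuffer(GLuint program, GLuint ssbo, GLuint count) {
    glUseProgram(program);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    glDispatchCompute(count / 64, 1, 1);             // one thread per element
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);  // make writes visible
}
```

Nothing here is exotic on the desktop - it is essentially the OpenGL 4.3 compute path - which is exactly the point of pulling the feature down into ES.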
The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add the new features to their existing apps as hardware allows. On the device side, support is expected to arrive as a driver update in the majority of cases.
OpenGL ES (which stands for OpenGL for Embedded Systems, though it is rarely branded as such) delivers what Khronos considers the most important features of the graphics library to the majority of devices. The Khronos Group has been working toward merging ES into the "full" graphics library over time. The previous release, OpenGL ES 3.0, focused on becoming a direct subset of OpenGL 4.3; this release expands the feature space it occupies.
OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0, although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, and this announcement puts us closer to that.
The new specification also adds other features, such as the ability to issue a draw without CPU intervention. Imagine a particle simulation, for instance, that wants to draw its results as soon as its compute shader finishes. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a single program object before they are used. I inquired about the possibility that compute work could be targeted at a second GPU (on devices with two) and perhaps load-balanced, in a similar vein to WebCL, but no confirmation or denial was provided (although my contact did mention it would be interesting for apps that fall somewhere between OpenGL ES and OpenCL).
The OpenGL ES 3.1 spec is available at the Khronos website.