Subject: Editorial, Mobile | September 28, 2015 - 09:57 AM | Ryan Shrout
Tagged: iphone 6s, iphone, ios, google, apple, Android
PC Perspective’s Android to iPhone series explores the opinions, views and experiences of the site’s Editor in Chief, Ryan Shrout, as he moves from the Android smartphone ecosystem to the world of the iPhone and iOS. Because he has been entrenched in the Android smartphone market for 7+ years, the editorial series is less a review of the new iPhone 6s than it is an exploration of how the current smartphone market compares to each side's expectations.
Full Story Listing:
- Day 0: What to Expect
- Day 3: Widgets and Live Photos
- Day 6: Battery Life and Home Screens
- Day 17: SoC Performance
- Day 31: Battery Life and Closing
Opening and setting up a new iPhone is still an impressive experience. The unboxing process makes it feel like you are taking part in the reveal of a product worth its cost, and the included accessories are organized and presented well. Having never used an iPhone 6 or iPhone 6 Plus beyond the cursory “let me hold that”, it was immediately obvious to me that the iPhone's build quality exceeded any of the recent Android-based smartphones I have used, including the new OnePlus 2, LG G4 and Droid Turbo. The rounded edges sparked some debate in terms of aesthetics, but they definitely make the phone FEEL slimmer than other smartphone options. The buttons were firm and responsive, though there is more noise in the click of the home button than I expected.
The setup process for the phone was pretty painless, but Ken, our production editor who has been an iPhone user every generation, did comment that the number of steps you have to go through to get to a working phone has increased quite a bit. Set up Siri, set up Touch ID, set up Wi-Fi, have you heard about iCloud? The list goes on. I did attempt to use the “Move to iOS” application from the Android Play Store on my Droid Turbo, but I was never able to get it to work – the devices kept complaining about a disconnection of some sort in their peer-to-peer network, and after about 8 tries, I gave up. I’m hoping to try it again with the incoming iPhone 6 Plus next week to see if it was a temporary issue.
After getting to the iPhone 6s home screen I spent the better part of the next hour doing something that I do every time I get a new phone: installing apps. The process is painful – go to the App Store, search for the program, download it, open it, log in (and try to remember login information), repeat. With the Android Play Store I do appreciate the ability to “push” application downloads to a phone from the desktop website, making it much faster to search for and acquire all the software you need. Apple would definitely benefit from some version of this that doesn’t require installing iTunes.
I am a LastPass user, and one of the first adjustments I had to make was to how that software works on iOS compared to Android. On my Droid Turbo I was able to give LastPass deeper system-level access than iOS allows, so when using a third-party app like Twitter, LastPass could insert itself into the process and automatically input the username and/or password for the website or service. With the iPhone you don’t have that ability, and there was a lot of password copying and pasting to get everything set up. This is an area where the openness of the Android platform can benefit users.
That being said, the benefits of Touch ID from Apple were immediately apparent. Once past the setup process, using my fingerprint in place of my 15+ digit Apple ID password is a huge benefit and time saver. Every time I download a new app from the App Store and simply place my thumb on the home button, I grin, knowing this is how it should be for all passwords, everywhere. I was even able to set up my primary LastPass password to utilize Touch ID, removing one of the biggest annoyances of using the password-keeping software on Android. Logging into the phone with your finger or thumb print rather than a pattern or PIN is great too. And though I know new phones like the OnePlus 2 use a fingerprint reader for this purpose, the implementation just isn’t as smooth.
My final step before leaving the office and heading for home was to download my favorite podcasts and get them set up on the phone for the drive. Rather than use the Apple Podcasts app, it was recommended that I try out Overcast, which has been solid so far. I set up the Giant Bombcast, My Brother, My Brother and Me and a couple of others, let them download on Wi-Fi and set out for home. Pairing the iPhone 6s with my Chevy Volt was as easy as any other phone, but I did notice that Bluetooth-based information being passed to the entertainment system (icons, current time stamps, etc.) was more accurate with the iPhone 6s than with my Droid Turbo (starting times and time remaining worked when they previously did not). That could be a result of the podcast application itself (I used doubleTwist on Android).
On Saturday, with a bit more free time to set up the phone and install the applications I had previously forgotten, I did start to miss a couple of Android features. First, the lack of widgets on the iPhone home screens means the mass of icons on the iPhone 6s is much less useful than the customized screens I had on my Droid Turbo. With my Droid I had a page dedicated to social media widgets I could scroll through without opening any specific applications. Another page included my current to-do list from Google Keep and the next 15 items from Google Calendar, all at a glance.
I know that the pull-down menu at the top of iOS, with its Today and Notifications tabs, is supposed to offer some of that functionality, but apps like Google Keep and Twitter don’t take advantage of it. And though it's cliché at this point, why in the hell doesn’t the Apple Weather application icon show the current temperature and weather status yet??
The second item I miss is the dedicated “back” button on Android devices, which is universal across the entire system. Always knowing you can move to the previous screen, or from the current app back to the home screen or the program you just switched from, is a great safety net that is missing in iOS. With only a single “always there” button on the phone, some software puts the back functionality in the top left-hand corner while other software has it in the form of an X or Close button somewhere else. I found myself constantly looking around each new app on the iPhone 6s to find out how to return to a previous screen, and sometimes I would hit the home button out of habit, which obviously isn’t going to have the intended function. Swiping from the left of the screen to the middle works with some applications, but not all.
Also, though my Droid Turbo was about the same size as the iPhone 6s, the size of the screen makes it hard to reach the top when using only one hand. Because the Android back button sits along the bottom of the phone, it was always within reach. Those iOS apps that put the return functionality in the top left of the screen make it much more difficult to use, often forcing you to reposition the phone in your hand and risk dropping it. And double tapping (not clicking) the home button and THEN reaching for the back button in any particular app just seems to take too long.
On Saturday I went camping with my family at an early Halloween event we attend annually. This made for a great chance to test out the iPhone 6s camera, and without a doubt, it is the best phone camera I have used. The images were clear, the shutter speed was fast, and the ability to take high frame rate video or 4K video is a nice touch. Enough people have demonstrated the advantages of the iPhone camera systems over almost anything else on the smartphone market, and as a user of the seemingly slow and laggy cameras on Android-based phones, the move to the iPhone 6s is a noticeable change. As the parent of a 3-month-old baby girl, these photos are becoming ever more important to me.
The new Live Photos feature, which essentially captures a few frames before and a few frames after the picture you actually took (with audio included), is pretty much a gimmick, but the effect is definitely eye-catching. When flipping through the camera roll you actually see a little bit of movement (someone’s face, for example), which caused me to raise an eyebrow at first. It’s an interesting idea, but I’m not sure what use they will have off the phone itself – will I be able to “play” these types of photos on my PC? Will I be able to share them with other phone users that don’t have the iPhone 6s?
Most of Sunday was spent watching football and using the iPhone 6s to monitor fantasy football, and to stream the games over our Wi-Fi network when I needed to leave the room for laundry. The phone was able to keep up with these mostly lightweight tasks without issue, as you would expect. Switching between applications was quick and responsive, and despite the iPhone 6s' deficit in system memory compared to many Android flagship phones, I never felt like the system was penalized for it.
Browsing the web through either Safari or Google Chrome did demonstrate a standard complaint about iOS – the reloading of webpages when coming back into the browser application, even if you didn’t navigate away from the page. With Android you are able to load up a webpage and then just…leave it there, for reference later. With the iPhone 6s, even with the added memory this model ships with, a page will reload after some amount of time away from the browser app, as the operating system decides it needs that memory for another purpose.
I haven’t had a battery life crisis with the iPhone yet, but I am worried about the lack of Quick Charging or Turbo Charging support on the iPhone 6s. This was a feature I definitely fell in love with on the Droid Turbo, especially when travelling for work or going on extended outings without access to power. I’ll have to monitor whether or not this issue rears its head.
Speaking of power and battery life – so far I have been impressed with how the iPhone 6s has performed. As I write this editorial at 9:30pm on Sunday night, the battery level sits at 22%. Considering I have been using the phone for frequent speed tests (6 of them today) and just general-purpose performance and usability testing, I consider this a good result. I only took one 5-minute phone call, but texting and picture taking were plentiful. Again, this is another area where this long-term test is going to tell the real story, but from my first impressions the thinness of the iPhone 6s hasn’t created an instant penalty for battery life.
The journey is still beginning – tomorrow is my first full work day with the iPhone 6s and I have the final installment of my summer evening golf league. Will the iPhone 6s act as my golf GPS like my Droid Turbo did? Will it make it through the full day without having to resort to car charging or using an external battery? What other features and capabilities will I love or hate in this transition? More soon!
Subject: Editorial | September 18, 2015 - 01:00 PM | Josh Walrath
Tagged: Zen, raja koduri, lisa su, Jim Keller, bulldozer, amd
2012 was a significant year for AMD. Many of the top executives left, and there were several new and exciting hires at the company. Lisa Su, who would eventually become President and CEO of AMD, was hired in January of that year. Rory Read seemed to be on a roll with many measures to turn around the company. He also convinced some big-name folks to come back to AMD from other lucrative positions. One of these rehires was Jim Keller.
Jim Keller, breakin' it down for AMD. Or doing "The Robot". Or both.
Today it was announced that Jim would be leaving AMD effective Sept. 18th. He was back at AMD for three years and in that time headed up the CPU group. He implemented massive changes that would result in the design of the upcoming Zen architecture. There was a full-scale ejection of the Bulldozer concept, which has powered AMD processors since the FX-8150 introduction in 2011; the current Excavator core design will carry the line through 2016, with the final product being "Bristol Ridge," expected next summer. Zen will not ship until late 2016, with the first full quarter of revenue in 2017.
Jim helped to develop the K7 and K8 processors at AMD. He was also extremely influential in the creation of the X86-64 ISA that not only powers AMD’s parts, but was also adopted by Intel after their disastrous EPIC/IA64 ISA failed to go anywhere. His past also includes work at DEC on the Alpha processors and, before returning to AMD, at Apple working on the A4 and A5 SoCs.
We do not know any of the details about his leaving, and perhaps never will. AMD has released an official statement that “Jim Keller is leaving AMD to pursue other opportunities, effective September 18”. Looking at Jim’s past employment, he seems to move around a bit. Perhaps he enjoys coming into a place, turning things around, and implementing some new thinking, but then becomes bored with the daily routine of management, budgets, and planning.
In the near future this change will not affect AMD’s roadmaps or product lineups. We will still see Bristol Ridge as the follow-up to Godavari in Summer 2016, and the late-2016 introduction of Zen. What can be said beyond that is hard to quantify. There are a lot of smart and talented people still working at AMD, and perhaps this allows someone there to step up and introduce the next generation of architectures and thinking at AMD. Everybody likes the idea of a rockstar designer coming in to shake things up, but time moves on and new people become those rockstars.
We wish Jim well on his new journey and hope that this is not a harbinger of things to come for AMD. Consumers need the competition that AMD brings to the table, and we certainly hope to see them continue to release new products and stay on a schedule that will benefit both them and consumers. Perhaps he will join fellow veteran Glenn Henry at VIA/Centaur and produce the next great X86-64 chip. Perhaps not.
Subject: Editorial | September 9, 2015 - 03:53 PM | Ryan Shrout
Tagged: raja koduri, amd
In a move of outstanding wisdom and forward thinking, AMD has made a personnel move that I can get behind and support. After forming the Radeon Technologies Group to help refocus the company on graphics, it has promoted Raja Koduri to the role of Senior Vice President and Chief Architect of that new group. While this might be a little bit of an "inside baseball" announcement to discuss, Raja is one of the few people in the industry that I have known since day one and he is an outstanding and important person in the graphics world as we know it today.
Koduri recently returned to AMD after a stint with Apple as the mobile SoC vendor's director of graphics architecture, and his return was met with immediate enthusiasm and hope for a company that continues to struggle financially.
In this new role, Koduri will no longer be responsible for just the IP of AMD graphics; he adds to his responsibilities the entirety of the hardware, software and business direction for Radeon products. From personal experience I can assure readers that Raja is a fantastic leader, has great instincts for what the industry needs and has seen some of AMD's most successful products through development.
This new role and new division structure at AMD will come with a lot of responsibility, as Koduri will be tasked with finding ways to grow the Radeon brand's shrinking market share, making a play in the mobile IP space, changing the dynamic between developers and AMD, and determining how working with console vendors like MS and Sony makes sense going forward. In many ways this is a return to the structure that made ATI so successful as a player in the GPU space, and AMD is definitely hoping this move can turn things around.
Good luck Raja!
To the Max?
Much of the PC enthusiast internet, including our comments section, has been abuzz with “Asynchronous Shader” discussion. Normally, I would explain what it is and then outline the issues that surround it, but I would like to swap that order this time. Basically, the Ashes of the Singularity benchmark utilizes Asynchronous Shaders in DirectX 12, but its developer, Oxide Games, disables them (by Vendor ID) for NVIDIA hardware. They say that this is because, while the driver reports compatibility, “attempting to use it was an unmitigated disaster in terms of performance and conformance”.
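For context, gating a feature by vendor ID is a simple check. The sketch below is a hedged illustration and not Oxide's actual code (which is not public): it reads the adapter's PCI vendor ID through DXGI, and `ShouldEnableAsyncShaders` is a hypothetical helper. The vendor ID constants are the well-known PCI values.

```cpp
#include <dxgi1_4.h>

// Well-known PCI vendor IDs.
const UINT kVendorNVIDIA = 0x10DE;
const UINT kVendorAMD    = 0x1002;

// 'adapter' is assumed to be the IDXGIAdapter1* used during device creation.
bool ShouldEnableAsyncShaders(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    // Disable the feature on NVIDIA hardware, as Oxide describes doing.
    return desc.VendorId != kVendorNVIDIA;
}
```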
AMD's Robert Hallock claims that NVIDIA GPUs, including Maxwell, cannot support the feature in hardware at all, while all AMD GCN graphics cards do. NVIDIA has yet to respond to our requests for an official statement, although we haven't poked every one of our contacts yet. We will certainly update and/or follow up if we hear from them. For now, though, we have no idea whether this is a hardware or software issue. Either way, it seems to be more than just politics.
So what is it?
Simply put, Asynchronous Shaders allow a graphics driver to cram workloads into portions of the GPU that would otherwise sit idle. For instance, if a graphics task is hammering the ROPs, the driver would be able to toss an independent physics or post-processing task into the shader units alongside it. Kollock from Oxide Games used the analogy of HyperThreading, which allows two CPU threads to be executed on the same core at the same time, as long as the core has capacity for both.
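In DirectX 12 terms, this maps to multiple command queues. The following is a minimal sketch under stated assumptions (an initialized `ID3D12Device* device`, pre-recorded command lists `drawList` and `physicsList`, and a `fence`/`fenceValue` pair), not a complete renderer: graphics work goes to a DIRECT queue, an independent compute task goes to a COMPUTE queue, and the two can overlap.

```cpp
// Create a graphics (DIRECT) queue and a separate COMPUTE queue.
D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
ID3D12CommandQueue* gfxQueue = nullptr;
device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
ID3D12CommandQueue* computeQueue = nullptr;
device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

// Submit rendering and an independent physics/post-processing dispatch.
// On capable hardware the compute work can fill shader units the graphics
// task leaves idle; a fence synchronizes only where the results actually meet.
ID3D12CommandList* gfxLists[]     = { drawList };     // assumed, pre-recorded
ID3D12CommandList* computeLists[] = { physicsList };  // assumed, pre-recorded
gfxQueue->ExecuteCommandLists(1, gfxLists);
computeQueue->ExecuteCommandLists(1, computeLists);

computeQueue->Signal(fence, ++fenceValue);  // mark compute completion
gfxQueue->Wait(fence, fenceValue);          // wait only if graphics consumes the result
```

Whether that overlap actually executes concurrently is up to the hardware and driver, which is exactly the point of contention here.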
Kollock also notes that compute is becoming more important in the graphics pipeline, to the point that it is possible to bypass the graphics pipeline altogether. The fixed-function bits may never go away, but it's possible that at least some engines will skip them entirely -- maybe even Oxide's own engine, several years down the road.
But, as always, you will not get an infinite amount of performance by reducing your waste. You are always bound by the theoretical limits of your components, and you cannot optimize past that (short of changing the workload itself, obviously). The interesting part is: you can measure that waste. You can absolutely observe how long a GPU is idle, and represent it as a percentage of a time-span (typically a frame).
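As a rough sketch of how such a measurement can work in DirectX 12 (assuming an initialized `device`, `queue`, `cmdList`, a readback resource `readback`, a mapped `UINT64 ticks[2]`, and a known `frameIntervalMs`; none of this is from the article itself), timestamp queries bracket the frame's GPU work, and comparing that span against the wall-clock frame interval gives an idle estimate:

```cpp
// Create a two-slot timestamp query heap.
D3D12_QUERY_HEAP_DESC qhDesc = {};
qhDesc.Type  = D3D12_QUERY_HEAP_TYPE_TIMESTAMP;
qhDesc.Count = 2;
ID3D12QueryHeap* queryHeap = nullptr;
device->CreateQueryHeap(&qhDesc, IID_PPV_ARGS(&queryHeap));

// While recording the frame: stamp before and after the GPU work, then
// resolve both ticks into the readback buffer.
cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0);
// ... record all draw/dispatch work for the frame ...
cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 1);
cmdList->ResolveQueryData(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0, 2, readback, 0);

// After the GPU finishes: convert ticks to milliseconds and estimate idle time.
UINT64 freq = 0;
queue->GetTimestampFrequency(&freq);                 // ticks per second
double gpuBusyMs = 1000.0 * double(ticks[1] - ticks[0]) / double(freq);
double idlePct   = 100.0 * (1.0 - gpuBusyMs / frameIntervalMs);
```

Strictly speaking, the two stamps measure the elapsed span between them rather than true occupancy, so real profilers slice much finer than this, but it illustrates that the idle time is observable.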
And, of course, game developers profile GPUs from time to time...
According to Kollock, some console developers have seen up to 30% increases in performance using Asynchronous Shaders. Again, this is on console hardware, so the gain may be larger or smaller on the PC. In an informal chat with a developer at Epic Games (so a massive grain of salt is required), his late-night, ballpark, “totally speculative” guesstimate was that, on the Xbox One, the GPU could theoretically accept a maximum of ~10-25% more work in Unreal Engine 4, depending on the scene. He also said that memory bandwidth gets in the way, and that Asynchronous Shaders would be fighting against it. It is something they are interested in and investigating, though.
This is where I speculate on drivers. When Mantle was announced, I looked at its features and said “wow, this is everything that a high-end game developer wants, and a graphics developer absolutely does not”. From the OpenCL-like multiple-GPU model that takes much of the QA out of SLI and CrossFire, to the memory and resource binding management, this should make graphics drivers so much easier to write.
It might not be free, though. Graphics drivers might still have a bunch of games to play to make sure that work is stuffed through the GPU as tightly packed as possible. We might continue to see “Game Ready” drivers in the coming years, even though much of that burden has been shifted to the game developers. On the other hand, maybe these APIs will level the whole playing field and let all players focus on chip design and efficient ingestion of shader code. As always, painfully always, time will tell.
Subject: Editorial | August 21, 2015 - 02:28 PM | Ryan Shrout
Tagged: video, Skylake, master system, Intel, 6700k
Sometimes you get weird boxes in the mail and you just know they are going to be up to no good. This time, Intel just launched the Intel Box Master System gaming system...with COLOR!
You really need to watch the video, but if you MUST sneak a peek at what we're talking about, check out the images below!
Visit Intel at http://inte.ly/unbox
It's Basically a Function Call for GPUs
Mantle, Vulkan, and DirectX 12 all claim to reduce overhead and provide a staggering increase in “draw calls”. As mentioned in the previous editorial, loading a graphics card with tasks takes a drastic change in these new APIs. With DirectX 10 and earlier, applications would assign attributes to (what they are told is) the global state of the graphics card. After everything is configured and bound, one of a few “draw” functions is called, which queues the task in the graphics driver as a “draw call”.
While this suggests that just a single graphics device is to be defined, which we also mentioned in the previous article, it also implies that one thread needs to be the authority. This limitation has been known for a while, and it contributed to the meme that consoles can squeeze out all the performance they have, but PCs are “too high level” for that. Microsoft tried to combat this with “Deferred Contexts” in DirectX 11. This feature allows virtual, shadow states to be loaded from secondary threads, which can then be appended, whole, to the global state. It was a compromise between each thread being able to create its own commands, and the legacy decision to have a single, global state for the GPU.
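A minimal sketch of how Deferred Contexts look in code, assuming an existing `ID3D11Device* device` and `ID3D11DeviceContext* immediateContext` (the shader and geometry objects here are placeholders): a worker thread records into a shadow state, then the main thread appends the whole recording to the global state.

```cpp
// Worker thread: record commands against a deferred (shadow) context.
ID3D11DeviceContext* deferred = nullptr;
device->CreateDeferredContext(0, &deferred);

deferred->VSSetShader(vertexShader, nullptr, 0);   // placeholder state setup
deferred->PSSetShader(pixelShader, nullptr, 0);
deferred->Draw(vertexCount, 0);                    // queued, not executed yet

ID3D11CommandList* recording = nullptr;
deferred->FinishCommandList(FALSE, &recording);    // close the recording

// Main thread: append the recording, whole, to the single global state.
immediateContext->ExecuteCommandList(recording, TRUE);
```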
Some developers experienced gains, while others lost a bit. It didn't live up to expectations.
The paradigm used to load graphics cards is the problem. It doesn't make sense anymore. A developer might not want to draw a primitive with every poke of the GPU. At times, they might want to shove a workload of simple linear algebra through it, while other requests could simply be pushing memory around to set up a later task (or to read the result of a previous one). More importantly, any thread could want to do this to any graphics device.
The new graphics APIs allow developers to submit their tasks quicker and smarter, and they allow the drivers to schedule compatible tasks better, even simultaneously; a sketch of that new submission model follows the list below. In fact, the driver's job has been massively simplified altogether. When we tested 3DMark back in March, two interesting things were revealed:
- Both AMD and NVIDIA are only a two-digit percentage of draw call performance apart
- Both AMD and NVIDIA saw an order of magnitude increase in draw calls
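To make the contrast with the old global state concrete, here is a hedged DirectX 12 sketch of that submission model (assuming an initialized device, one command allocator and list per thread, and a hypothetical `RecordScenePortion` helper): each thread records independently, and the driver simply executes the batch.

```cpp
#include <thread>
#include <vector>

// Hypothetical helper: records one slice of the scene into its command list.
void RecordScenePortion(ID3D12GraphicsCommandList* list, int portion);

void SubmitFrame(ID3D12CommandQueue* queue,
                 std::vector<ID3D12GraphicsCommandList*>& lists)
{
    std::vector<std::thread> workers;
    for (size_t t = 0; t < lists.size(); ++t)
        workers.emplace_back([&lists, t] {
            RecordScenePortion(lists[t], static_cast<int>(t));
            lists[t]->Close();              // finish recording on this thread
        });
    for (auto& w : workers) w.join();

    // One batched submission; no single global state to serialize behind.
    queue->ExecuteCommandLists(
        static_cast<UINT>(lists.size()),
        reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}
```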
Subject: Editorial | July 20, 2015 - 08:28 PM | Scott Michaud
Tagged: microsoft, windows, windows 10
As we've been saying for several months now, Windows 10 is coming in a handful of days. Naturally, Microsoft is trickling out information and marketing material leading up to it. Some of the interesting ones we can talk about. I'd normally consider a one-minute TV spot “not very interesting”, and it probably isn't for our audience, but there was one thing that I wanted to say about it.
The ad looks through an international cast of children, and of course an adorable puppy, describing how their technology life will evolve with Windows 10. The premise is that the OS will empower everything they do, and grow with them because of automatic updates. Of course, young children and a puppy do a lot to sell a consumer product by themselves. The video currently has over 200,000 views on YouTube with an almost 20:1 like-to-dislike ratio.
But the part that interested me was the quote “for them, every screen is meant to be touched”.
In a direct way, yes. Once you provide someone with a touch screen, especially a young child, they instantly want to touch every screen in their life. This has actually led to schools refusing to install touch-based all-in-one PCs because they were worried about kids ruining the non-touch monitors.
It is odd that Microsoft would focus on “touch” in the ad, though. This leads me to the point that I want to bring up. Nowhere in the ad is “familiar” or similar verbiage used. Each example is touch, stylus, or voice. You would think that Microsoft wants to draw in the audience who avoided Windows 8.x, and yet the tone sounds identical to what they've been saying for years.
It's just a TV spot, but it sounds a bit out of tune with the last year.
Tick Tock Tick Tock Tick Tock Tock
A few websites have been re-reporting on a leak from BenchLife.info about Kaby Lake, which is supposedly a second 14nm redesign (“Tock”) to be injected between Skylake and Cannonlake.
UPDATE (July 2nd, 3:20pm ET): It has been pointed out that many hoaxes have come out of the same source, and that I should be more clear in my disclaimer. This is an unconfirmed, relatively easy-to-fake leak that does not have a second, independent source. I reported on it because (apart from being interesting enough) some details were listed on the images but not highlighted in the leak, such as "GT0" and a lack of Iris Pro on -K. That suggests the leaker got the images from somewhere but didn't notice those details, which implies either that the original source was hoaxed by someone who seeded the fake to a single media outlet, or that it was an actual leak.
Either way, enjoy my analysis but realize that this is a single, unconfirmed source who allegedly published hoaxes in the past.
Image Credit: BenchLife.info
If true, this would be a major shift in both Intel's current roadmap as well as how they justify their research strategies. It also includes a rough stack of product categories, from 4.5W up to 91W TDPs, including their planned integrated graphics configurations. This leads to a pair of interesting stories:
How Kaby Lake could affect Intel's processors going forward. Since 2006, Intel has only budgeted a single CPU architecture redesign for any given fabrication process node. Taking a second attempt at the 14nm process buys time for 10nm to become viable, but it could also give them more time to build up a better library of circuit elements, allowing them to assemble better processors in the future.
What type of user will be given Iris Pro? Also, will graphics-free options be available in the sub-Enthusiast class? Among Intel's processors, the high-end mainstream parts tend to have GT2-class graphics, such as the Intel HD 4600. Enthusiast architectures, such as Haswell-E, cannot be used without discrete graphics -- the extra space is used for more cores, I/O lanes, or other features. As we will discuss later, Broadwell took a step toward changing the availability of Iris Pro in the high-end mainstream, but it doesn't seem like Kaby Lake will make any more progress. Also, if I am interpreting the table correctly, Kaby Lake might bring iGPU-less CPUs to LGA 1151.
Keeping Your Core Regular
To the first point, Intel has been on a steady tick-tock cycle since the Pentium 4 architecture reached the 65nm process node, which was a “tick”. The “tock” came from the Conroe/Merom architecture that was branded “Core 2”. This new architecture was a severe departure from the high-clock, relatively low-IPC design that Netburst was built around, and it almost instantly changed the processor landscape from a dominant AMD to a runaway Intel lead.
After 65nm and Core 2 started the cycle, every new architecture alternated between shrinking the existing architecture to smaller transistors (tick) and creating a new design on the same fabrication process (tock). Even though Intel has been steadily increasing their R&D budget over time, which is now in the range of $10 to $12 billion USD each year, creating smaller, more intricate designs with new process nodes has been getting harder. For comparison, AMD's total revenue (not just profits) for 2014 was $5.51 billion USD.
Digging a Little Deeper into the DiRT
Over the past few weeks I have had the chance to play the early access "DiRT Rally" title from Codemasters. This is a much more simulation-based title that is currently PC-only, which is a big switch for Codemasters and how they usually release their premier racing offerings. I was able to get hold of Paul Coleman from Codemasters and set up a written interview with him. Paul's answers will be in italics.
Who are you, what do you do at Codemasters, and what do you do in your spare time away from the virtual wheel?
Hi, my name is Paul Coleman and I am the Chief Games Designer on DiRT Rally. I’m responsible for making sure that the game is the most authentic representation of the sport it can be; I’m essentially representing the player in the studio. In my spare time I enjoy going on road trips with my family in our 1M Coupe. I’ve been co-driving in real-world rally events for the last three years and I’ve used that experience to write and voice the co-driver calls in game.
If there is one area that DiRT has really excelled at, it is keeping frame rate consistent throughout multiple environments. Many games, especially those using cutting-edge rendering techniques, often have dramatic frame rate drops at times. How do you get around this while still creating a very impressive-looking game?
The engine that DiRT Rally has been built on has been constantly iterated on over the years and we have always been looking at ways of improving the look of the game while maintaining decent performance. That together with the fact that we work closely with GPU manufacturers on each project ensures that we stay current. We also have very strict performance monitoring systems that have come from optimising games for console. These systems have proved very useful when building DiRT Rally even though the game is exclusively on PC.
How do you balance out different controller use cases? While many hard core racers use a wheel, I have seen very competitive racing from people using handheld controllers as well as keyboards. Do you handicap/help those particular implementations so as not to make it overly frustrating to those users? I ask due to the difference in degrees of precision that a gamepad has vs. a wheel that can rotate 900 degrees.
Again this comes back to the fact that we have traditionally developed for console where the primary input device is a handheld controller. This is an area that other sims don’t usually have to worry about but for us it was second nature. There are systems that we have that add a layer between the handheld controller or keyboard and the game which help those guys but the wheel is without a doubt the best way to experience DiRT Rally as it is a direct input.
Subject: Editorial | May 29, 2015 - 12:37 PM | Ryan Shrout
Tagged: SSD 750, PCI Express, NVMe, Intel, giveaway, contest, 750 series
PC Perspective and Intel are partnering together to offer up a giveaway with some pretty impressive swag. Surely by now you have read all about the new Intel SSD 750 Series, a new class of solid state drive that combines four lanes of PCI Express 3.0 with a new protocol called NVM Express (NVMe) for impressive throughput. In Allyn's review of the SSD in April he called it "the obvious choice for consumers who demand the most from their storage" and gave it a PC Perspective Editor's Choice Award!
Thanks to our friends at Intel we are going to be handing out a pair of the 400GB add-in card models to loyal PC Perspective readers and viewers. How can you enter? The rules are dead simple:
- Fill out the contest entry form below to find multiple entry methods including reading our review, answering a question about Intel SSD 750 Series specs or following us on Twitter. You can fill out one or all of the methods - the more you do the better your chances!
- Leave a comment on the news post below thanking Intel for sponsoring PC Perspective and for supplying this hardware for us to give to you!
- This is a global contest - so feel free to enter from anywhere in the world!
- Contest will close on June 2nd, 2015.
Our most sincere thanks to Intel for bringing this contest to PC Perspective's readers and fans. Good luck to everyone (except Josh)!
Sponsored by Intel
| Capacity | Sequential 128KB Read (up to MB/s) | Sequential 128KB Write (up to MB/s) | Random 4KB Read (up to IOPS) | Random 4KB Write (up to IOPS) | Form Factor | Interface |
|---|---|---|---|---|---|---|
| 400 GB | 2,200 | 900 | 430,000 | 230,000 | 2.5-inch x 15mm | PCI Express Gen3 x4 |
| 1.2 TB | 2,400 | 1,200 | 440,000 | 290,000 | 2.5-inch x 15mm | PCI Express Gen3 x4 |
| 400 GB | 2,200 | 900 | 430,000 | 230,000 | Half-height half-length (HHHL) Add-in Card | PCI Express Gen3 x4 |
| 1.2 TB | 2,400 | 1,200 | 440,000 | 290,000 | Half-height half-length (HHHL) Add-in Card | PCI Express Gen3 x4 |
Experience the future of storage performance for desktop client and workstation users with the Intel® SSD 750 Series. The Intel SSD 750 Series delivers uncompromised performance by utilizing NVM Express* over four lanes of PCIe* 3.0.
With both Add-in Card and 2.5-inch form factors, the Intel SSD 750 Series eases migration from SATA to PCIe 3.0 without power or thermal limitations on performance. The SSD can now deliver the ultimate in performance in a variety of system form factors and configurations.