Subject: Graphics Cards | February 28, 2019 - 11:25 PM | Scott Michaud
Tagged: nvidia, graphics drivers, security
Normally, when we discuss graphics drivers, there is a subset of users who like to stay on old versions. Some have older hardware and believe they will see limited benefit from newer releases. Others encounter a bug in a certain version and refuse to update until it is patched.
In this case – you probably want to update regardless.
NVIDIA has found eight security vulnerabilities in their drivers, which have been corrected in their latest versions. One of them also affects Linux... more on that later.
On Windows, there are five supported branches:
- Users of R418 for GeForce, Quadro, and NVS should install 419.17.
- Users of R418 for Tesla should install 418.96.
- Users of R400 for Quadro and NVS should install 412.29.
- Users of R400 for Tesla should install 412.29.
- Users of R390 for Quadro and NVS should install 392.37.
Basically, you should install 419.17 unless you are using professional hardware.
One issue is being likened to Meltdown and Spectre, although it is not quite the same. In those cases, the exploit took advantage of hardware optimizations to leak system memory. In the case of CVE-2018-6260, however, the attack uses NVIDIA’s performance counters to potentially leak graphics memory. The difference is that GPU performance counters are a developer tool, used by applications like NVIDIA Nsight to provide diagnostics. Further, beyond targeting a developer tool that can be disabled, this attack also requires local access to the device.
Linux users are also vulnerable to this attack (but not the other seven):
- Users of R418 for GeForce, Quadro, and NVS should install 418.43.
- Users of R418 for Tesla should install 418.39.
- Users of R400 for GeForce, Quadro, NVS, and Tesla should install 410.104.
- Users of R390 for GeForce, Quadro, NVS, and Tesla should install 390.116.
- Users of R384 for Tesla should install 384.183.
Whether on Windows or Linux, after installing the update, a hidden option will allow you to disable GPU performance counters unless admin credentials are provided. I don’t know why it’s set to the insecure variant by default… but the setting can be toggled in the NVIDIA Control Panel. On Windows, go to Desktop > Enable Developer Settings, then, under Developer > Manage GPU Performance Counters, select "Restrict access to the GPU performance counters to admin users only". See the driver release notes (especially the "Driver Security" section) for more info.
The main thing to fix is the other seven, however. That just requires the driver update. You should have received a notification from GeForce Experience if you use it; otherwise, check out NVIDIA’s website.
Subject: General Tech | February 28, 2019 - 03:28 PM | Jeremy Hellstrom
Tagged: MH751, analogue, cooler master, gaming headset, audio
Cooler Master's MH751 is the analogue sibling of the USB MH752 and is designed to offer performance, with no extraneous features like RGBs, for a decent price. It is available for $80, so at least they managed that goal; as for performance and comfort, you will have to rely on TechPowerUp's experiences for now. They described the feel of the headset as "like a hug for your head" and were more than happy with the audio quality, which is good news for anyone shopping for a decent, understated headset.
"Cooler Master's new analogue gaming headset hits all the right spots: it's comfortable, performs very well, and offers good value for your money!"
Here is some more Tech News from around the web:
- EasySMX COOL 2000 Gaming Headset Review @ NikKTech
- EVGA’s Nu Audio Brings Entry-Level Audiophile Sound to the PC @ BabelTechReviews
- Xanova JUTURNA-U Gaming Headset Review @ NikKTech
- Logitech G935 gaming headset @ The Tech Report
Subject: Storage | February 28, 2019 - 02:27 PM | Jeremy Hellstrom
Tagged: adata, SX8200 Pro, 1TB, NVMe, SM2262EN
Last year ADATA launched their XPG SX8200 NVMe SSD, which offered impressive speed without a high cost; currently you can grab 1TB for just under $200. This year they followed up with the XPG SX8200 Pro, using Silicon Motion's new SM2262EN controller paired with the same 64-layer Micron TLC flash as the original. The Tech Report tested it out and found it to be almost a chart-topper, surpassing many more famous brands, and the best news is that it is a mere $10 more than the previous version.
If you are looking for a PCIe x4 M.2 NVMe drive, this one should be on your list!
"Last year's XPG SX8200 was a great NVMe drive, but Adata thinks it can do even better. The XPG SX8200 Pro is mostly the same hardware with just a couple of small changes. Join us to find out whether those end up making all the difference."
Here are some more Storage reviews from around the web:
- WD Black SN750 NVME SSD @ Guru of 3D
- Toshiba XG6 1TB SSD @ Kitguru
- Samsung 970 EVO Plus 500GB NVMe Linux SSD Benchmarks @ Phoronix
- iStorage diskAshur PRO2 5TB USB 3.1 PIN Authenticated Portable HDD Review @ NikKTech
Subject: General Tech | February 28, 2019 - 01:27 PM | Jeremy Hellstrom
Tagged: 5G, wireless, Huawei, qualcomm, x50, X55
The rollout of 5G has been somewhat painful to watch, with a variety of questionable marketing techniques and a staggered schedule. The Inquirer dropped by MWC to see how the various vendors, such as Qualcomm, Intel, and Huawei, are faring at the moment. Qualcomm will be bringing its new X55 to market some time this year, offering up to 7Gbps download speeds with power requirements similar to existing 4G LTE chips. Huawei expects delays, for reasons obvious to those who follow the news, and Intel is not expecting to deliver anything until next year.
Take a peek at the picture below for an idea of how segmented the standard is at the moment and then head over for a more detailed look.
"If you've been following closely, leading vendors have been subtly playing down expectations and that's closer to reality. The missing bits of Release 15 were delayed three months to focus on stability, the 3GPP said at the time."
Here is some more Tech News from around the web:
- Guru3D Rig of the Month - February 2019
- The next version of Windows 10 has finally been released to the Slow Ring @ The Inquirer
- Police In Canada Are Tracking People's 'Negative' Behavior In a 'Risk' Database @ Slashdot
- Competition in foldable smartphone market heating up @ DigiTimes
- Serious Amazon Ring Vulnerability Leaves Audio, Video Feeds Open To Attack @ Slashdot
- Lenovo kicks down door of MWC, dumps a stack of sexy new ThinkPads @ The Register
- Does WiFi Kill Houseplants? @ Hackaday
- Samsung is Loading McAfee Antivirus Software On Smart TVs @ Slashdot
- AVM FRITZ!Fon C4 & C5 DECT Cordless Telephones Review @ NikKTech
Subject: General Tech | February 28, 2019 - 09:43 AM | Jim Tanous
Tagged: Z390, usb 3.2, speakers, podcast, microSD, Hyper 212 Black Edition, gtx 1660 ti, gtx 1660, Dominator Platinum RGB, Adrenalin
PC Perspective Podcast #534 - 2/27/2019
This week we review the new GTX 1660 Ti, Dominator Platinum RGB Memory from Corsair, the high-end ASUS ROG Maximus XI Formula Z390 motherboard, and talk about the absurd new USB 3.2 specification.
Subscribe to the PC Perspective Podcast
Check out previous podcast episodes: http://pcper.com/podcast
00:00:47 - Review: GTX 1660 Ti
00:32:04 - Review: Corsair Dominator Platinum RGB Memory
00:43:33 - Review: ASUS ROG Maximus XI Formula Z390
00:49:04 - Review: Cooler Master Hyper 212 RGB Black Edition
00:58:19 - Review: Logitech Z606 5.1 Speakers
01:08:19 - Review: ASUS ROG Strix Flare Keyboard
01:13:16 - News: NVIDIA MX230 & MX250 Mobile GPUs
01:15:43 - News: RX Vega 56 Price Cuts
01:20:03 - News: GTX 1660 & 1650 Rumors
01:26:21 - News: Return of the Intellimouse
01:29:58 - News: TSMC 7nm & 5nm EUV Production
01:36:33 - News: Radeon Adrenalin 2019 Edition 19.2.3 Update
01:39:40 - News: 1TB SanDisk microSDXC Card
01:42:26 - News: Absurd New USB 3.2 Specifications
01:54:15 - Picks of the Week
Subject: Mobile | February 27, 2019 - 11:12 PM | Tim Verry
Tagged: nm card, MWC, mate x, Leica, Kirin 980, Huawei, foldable, balong 5000, android 9
Huawei raised the stakes at MWC 2019 with the reveal of its new flagship foldable smartphone, which is nearly all screen, wrapping around the front and back in phone mode and unfolding outwards into an eight-inch tablet.
The upcoming Mate X measures 78.3 x 161.3 x 5.4-11mm when folded up in phone mode and expands to 146.2 x 161.3 x 5.4-11mm in tablet mode. The Interstellar Blue phone weighs in at 259 grams (0.57 lbs) and is nearly all OLED display except for a small bump along the right side (which can double as a useful handle in tablet mode, akin to Kindle devices or Lenovo’s smaller tablets) where the three cameras, fingerprint sensor/power button, volume controls, USB Type-C port, and many of the internal hardware components are nestled.
As far as the screen goes, Huawei is using an OLED panel covered with plastic (no glass here, unfortunately, but that’s the tradeoff for going foldable) with a resolution of 2480 x 2200 when unfolded in tablet mode, or 2480 x 1148 for the 6.6” front display and 2480 x 892 for the 6.38” rear display when folded. Huawei’s Mate X is a very sleek design with rounded edges and corners that folds into a fairly slim package (slimmer than Samsung’s Galaxy Fold, which folds inwards). A button on the side unlocks the rear display and allows it to fold outwards into a display that is reportedly flat, without a crease or visible divider, though it does have a different feel than other flagship smartphones that have moved to glass displays. It certainly looks impressive, though long-term reviews will flesh out how well the display holds up over time and many folds.
Internally, Huawei is using the Kirin 980 SoC along with the Balong 5000 5G modem to power the smartphone, which further includes 8GB of RAM and 512GB of internal storage. The Kirin 980 comprises two Cortex-A76 cores clocked at 2.6 GHz, two Cortex-A76 cores clocked at 1.92 GHz, four Cortex-A55 cores clocked at 1.8 GHz, a Mali-G76 GPU, and an NPU for AI acceleration tasks. The Balong 5000 modem supports 2G, 3G, 4G, and 5G multi-mode in standalone or non-standalone configurations. The phone supports a dual SIM design with one SIM for 5G and the other for up to 4G networks. Alternatively, instead of a second SIM card, users can slot in a nano memory card (NM card) of up to 256GB, which is Huawei’s expandable storage format: a memory card with the size and form factor of a nano SIM. A 4,500 mAh battery powers the foldable phone, with a 55W SuperCharger able to charge the battery from 1% to 85% in 30 minutes (4G standby, screen turned off). Connectivity options include 802.11ac, Bluetooth 5.0 (with aptX and other features supported), and USB Type-C 3.1 Gen 1 with a cable purchased separately (the out-of-the-box cable is USB 2.0). The Mate X runs Android 9 with Huawei’s EMUI 9.1.1 skin.
The Leica cameras include a 40MP wide angle, a 16MP ultra-wide angle, and an 8MP telephoto camera, with the ability to mirror the screen when taking photographs (or selfies) so that subjects can see the photo at the same time as the photographer, to help compose the shot.
Huawei’s flagship Mate X foldable will be available around the second half of 2019 with an MSRP of 2,299 Euros (~$2,615, though we likely won't see it in the US unless imported) that demands your wallet go all in or fold. With that asking price it is likely out of reach of most people, but it is an interesting look at the future and what it could bring as costs come down and the hinges and bendable display technologies are refined. I was admittedly not very excited about the idea of a foldable phone, especially seeing the rumor-now-reality that Samsung’s Galaxy Fold has a smaller screen in phone mode, but Huawei’s design has piqued my interest in what’s possible. Having a bigger screen on tap would be very helpful for blowing up text, making textbooks and fiction not yet available as audiobooks much easier on the eyes. It also just looks cool and futuristic to me (heh), with the only thing missing being a stylus/pen hidden away in the ridge on the right side (if only!).
If you are curious to see the folding action, Michael Fisher was able to get hands on video at Mobile World Congress in Barcelona, Spain.
I’m ready. What are your thoughts on these foldable flagships and the idea of a foldable phone?
Subject: Mobile | February 27, 2019 - 08:10 PM | Tim Verry
Tagged: nokia, HMD, android one, pie, light, camera, photography, pOLED, snapdragon 845, qualcomm
Finnish company HMD Global Oy unveiled an interesting new smartphone under its Nokia brand at Mobile World Congress that, in typical Nokia fashion, focuses on camera quality. The Nokia PureView 9 offers up five rear cameras along with the hardware and software to harness computational photography techniques to deliver high quality HDR images.
The PureView 9 nestles a 5.99-inch QHD+ pOLED HDR10-certified display (2880x1440 resolution) in a two-tone Midnight Blue body with front and back glass faces and aluminum sides with curved, stylized edges. There is an optical fingerprint reader under the display and a small front-facing camera sitting above it. If you are looking for an edge-to-edge display, the PureView 9 is not the phone for you, as it does have small bezels top and bottom and the front face does not curve into the sides. Ars Technica compares the design to the LG V30, which I would say is fair, as both phones have similar bezels with curved display corners. For a more specific comparison, the V30 puts the “selfie” camera on the left rather than the right like the PureView 9, the bezels on the Nokia may be ever so slightly thicker, and there is a Nokia logo in the top right corner while there is no branding on the front of the V30. Nokia’s PureView 9 features a single USB-C port on the bottom edge along with what looks to be a single speaker. The right side holds the volume and power buttons while the left side is blank. The top edge appears to hold the SIM tray slot.
I like the blue colors HMD has chosen, and while a good portion of the back is taken up by the camera system, the lenses sit flush with the body which is nice to see (Nokia has never been one afraid of cameras protruding from the phone in the name of photo and lens quality). There are five Zeiss camera lenses, one LED flash, and a sensor suite including time of flight grouped in a hexagonal shape.
The cameras are the star of the show with the Nokia PureView 9 and where most of the money was focused. HMD/Nokia partnered with Light to design a system with five 12MP f/1.8 camera sensors, two of which have RGB color filters and three of which are monochrome sensors that let in far more light than your usual camera sensor, thanks in large part to not having a color filter, which absorbs most of the light entering the camera. In fact, HMD claims that the PureView 9’s five-sensor system captures ten times as much light as a single sensor of the same type. Light provided its Lux Capacitor co-processor to allow all five cameras (it supports up to six) to shoot simultaneously. That lets Nokia use up to 60MP of total data from a single shot across the five 12MP cameras, or up to 240MP of data when doing temporal image stacking with each camera taking four shots, all combined and then downsampled into, ideally, a much better 12MP (JPG or RAW DNG) image than would be possible with a single camera, using various computational photography and image stacking techniques. The camera should do really well in low-light situations, as well as offering depth of field and bokeh effects much closer to reality and DSLR cameras than your typical smartphone can fake. Nokia has also partnered with Google to allow photographers to save shots to Google Photos with GDepth, at up to 1,200 layers of depth of field data that can be adjusted later to get customized photos in editing. Speaking of editing, Nokia and Adobe are supporting the PureView 9 in the Android version of Lightroom with a camera profile, allowing you to work with the RAW DNG images right on your phone, which is interesting, at least in theory (it’s not clear what performance will be like with the SD845).
In typical Nokia fashion, its Pro Camera UI offers a full manual mode as well as features like long exposure (with a tripod), time lapse, bokeh, filters, scenes, and more.
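The image stacking idea above is easy to illustrate. This is not Light's or Nokia's actual pipeline, just a toy sketch of why averaging several noisy exposures helps: random sensor noise cancels out while the scene itself does not, so stacking four frames cuts noise roughly in half (it falls as roughly one over the square root of the frame count).

```python
import random

def stack_frames(frames):
    """Average each pixel across N frames; random sensor noise
    partially cancels, falling roughly as 1/sqrt(N)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def rms_error(img, truth):
    """Root-mean-square difference from the noiseless scene."""
    return (sum((a - b) ** 2 for a, b in zip(img, truth)) / len(img)) ** 0.5

random.seed(42)
truth = [128.0] * 10_000  # a flat gray "scene", one value per pixel
# Four exposures of the same scene, each with Gaussian sensor noise:
frames = [[p + random.gauss(0, 10) for p in truth] for _ in range(4)]

single = rms_error(frames[0], truth)               # noise of one frame, ~10
stacked = rms_error(stack_frames(frames), truth)   # ~5, about half the noise
```

The real hardware stacks data from physically separate sensors as well as over time, and aligns frames before merging, but the noise-averaging principle is the same.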
What is powering this camera that happens to make calls and run Android, though? Well, here is where Nokia has compromised in the design with the use of the older Snapdragon 845 chipset, though it is paired with 6GB of RAM and 128GB of UFS 2.1 internal storage (not expandable, as there is no microSD card support). There is a 3,320 mAh battery and a stock Android One (Pie) OS experience.
HMD’s Nokia PureView 9 will reportedly be a limited production run with an MSRP of $699. The flagship pricing may be difficult for some smartphone enthusiasts to justify, especially with competing flagships also being announced at MWC featuring newer designs with edge-to-edge displays, newer processors, and support for 2TB microSD cards. For amateur photographers and anyone who uses their smartphone as their primary camera and loves taking photos, though, the Nokia PureView 9 may be the niche product to beat in 2019, so long as the usual build quality I’ve come to expect from Nokia holds up.
I do worry about the glass back and how it will hold up (it is Gorilla Glass 5 at least, and the phone is IP67 rated for dust/water resistance), and 9to5Google’s hands-on video mentions that the optical fingerprint reader was hit-or-miss (which can hopefully be improved between now and launch). No microSD card slot and no headphone jack may also turn off buyers (one advantage the V30 retains), and while many photo-happy users could live without the headphone jack, no expandable storage is a real disappointment and the 128GB of internal storage simply may not be enough.
I am looking forward to the reviews on this one and am curious to see how the camera performs in the real world, and what is possible with video recording as well. I don’t see the PureView 9 winning any popularity contests in 2019, and it appears to be something of a mixed bag even with its exciting camera system, with certain drawbacks dragging it down, but I can also appreciate why some users might well choose it despite its compromises.
Subject: Memory | February 27, 2019 - 05:51 PM | Jeremy Hellstrom
Tagged: RGB, frag harder disco lights, Dominator Platinum RGB, ddr4, corsair
Sometimes it is sad that the <blink> tag was deprecated; it would be perfect for Corsair's Dominator Platinum RGB kits, for instance. As you can see below, there are more than four lights on each module and they are certainly not dim. If you need to feed your RGB addiction you can pick from a variety of kits, including two 16GB kits, one running at 3200MHz and one at 4800MHz, as well as 64GB of DDR4-3600, on up to a 128GB 3600MHz kit. All feature Frag Harder Disco Lights compatible with Corsair's iCUE software, so you can make them dance like Jim did.
"Corsair is adding a huge 64GB RAM kit that many enthusiast High End Desk Top users might be interested in. We take the new Dominator Platinum RGB DIMMs for a ride on both Intel X299 and AMD X399 systems and see how the clocks shake out. And of course, enough Frag Harder Disco Lights to illuminate your house."
Here are some more Memory articles from around the web:
- Corsair Dominator Platinum RGB DDR4 @ Guru of 3D
- Colorful iGame DDR4 3200 8 GB @ TechPowerUp
- T-Force Delta TUF Gaming RGB DDR4 3200 @ Guru of 3D
- Team Group T-Force Delta TUF Gaming RGB 3200 MHz @ TechPowerUp
Subject: General Tech | February 27, 2019 - 05:02 PM | Jeremy Hellstrom
Tagged: anthem, BioWare, gaming
There once was a time when a new BioWare game caused great excitement among gamers; now it creates excitement for critics. Anthem has jetted onto the scene and the reception has been mixed, in part because it isn't a new Mass Effect game, though there are other reasons offered as well. Combat involves rocket packs and somewhere short of a billion different types of weapons, and Rock, Paper, SHOTGUN describes it as similar to Mass Effect: Andromeda’s rocket-jumping shooty bang. Your home base, Fort Tarsis, offers some interesting story lines and interactions when you want a break from combat to do a bit of shopping and talking, which may appeal to you more.
Of course, with games such as this, Destiny and that other unmentionable one, things are subject to change and this could be a whole different ball of whacks in a few months.
"Hello, it’s me, the resident BioLiker, here to tell you wot I think of Anthem, a game about which you probably already have opinions, whether you’ve played it or not."
Here is some more Tech News from around the web:
- Darkest Dungeon 2 is coming and looks rather chilly @ Rock, Paper, SHOTGUN
- How Command & Conquer: Tiberian Sun punished the computers of the day @ Ars Technica
- Humble Neptunia Bundle
- GOG ending rebates for regional pricing differences @ Rock, Paper, SHOTGUN
- Microsoft minimises differences between Xbox & Windows games @ HEXUS
- Metro Exodus @ Overclockers Club
Subject: General Tech | February 27, 2019 - 12:49 PM | Jeremy Hellstrom
Tagged: security, powershell
Bad news from the trenches of the eternal battle between white hats and black hats: attackers have moved from infecting files on drives to simply running PowerShell scripts in memory. That type of attack does not leave the same traces on your file system as previous styles of infection, rendering your fancy antivirus software ineffective. A well-crafted PowerShell script can happily sit in memory and convince your system to mine cryptocurrency, upload password files, or completely map your network to assist in attacks against other machines.
"This finding is important because it is another reminder that admins can no longer solely rely on detecting malicious executables and similar data on hard drives and other storage, to identify cyber-intrusions."
Here is some more Tech News from around the web:
- Just a fifth of Windows 10 PCs are running the latest version @ The Inquirer
- Anti-cheat software causing big problems for Windows 10 previews @ Ars Technica
- We'll ask you one more time: Where's our DRAM money? @ The Register
- USB 3.2 is going to make USB branding even more awful @ The Inquirer
Subject: Editorial | February 27, 2019 - 11:53 AM | Sebastian Peak
Tagged: usb-if, USB Implementers Forum, USB 3.x, usb 3.2, usb 3.1, usb 3.0, usb, universal serial bus
There was a time when USB simply meant Universal Serial Bus, and we watched as devices that had previously relied on serial and parallel (etc.) ports moved to the new, more convenient standard; and in the enthusiast community we watched with some trepidation as PS/2 became a legacy option for keyboards in favor of USB. Since then we have seen tremendous increases in speed for this interface with huge strides from USB 2.0 and then USB 3.0, but in the recent past there has been a proliferation of different generations of the technology with their own speed ratings, a new connector, and a lot of confusion.
Types of USB connectors (via conwire.com)
Now, in an apparent - yet misguided - effort to clarify the situation, the people making decisions about what to call these standards have released documentation re-naming the existing USB 3.x standards - which makes about as much sense as continuing to call the latest version of USB another three-point-anything, when we clearly should have moved on to USB 4.0 by now.
The organization calling the shots about the standard is called the USB Implementers Forum (USB-IF), described from their about page as "a non-profit corporation founded by the group of companies that developed the Universal Serial Bus specification" which was "formed to provide a support organization and forum for the advancement and adoption of Universal Serial Bus technology".
So what did the USB-IF come up with? Truth is, as they say, stranger than fiction:
The USB 3.2 specification absorbed all prior 3.x specifications. USB 3.2 identifies three transfer rates, USB 3.2 Gen 1 at 5Gbps, USB 3.2 Gen 2 at 10Gbps and USB 3.2 Gen 2x2 at 20Gbps. It is important that vendors clearly communicate the performance signaling that a product delivers in the product’s packaging, advertising content, and any other marketing materials.
- USB 3.2 Gen 1
- Product capability: product signals at 5Gbps
- Marketing name: SuperSpeed USB
- USB 3.2 Gen 2
- Product capability: product signals at 10Gbps
- Marketing name: SuperSpeed USB 10Gbps
- USB 3.2 Gen 2x2
- Product capability: product signals at 20Gbps
- Marketing name: SuperSpeed USB 20Gbps
If this was not crystal clear already, the USB-IF goes on to emphasize the importance of clarifying the performance potential separately from the protocols when advertising one of these standards, itself suggesting that they have failed to clarify anything with these changes:
"It is critical for manufacturers to distinguish between USB 3.2 Gen 1, USB 3.2 Gen 2 and USB 3.2 Gen 2x2 products. USB-IF also strongly urges manufacturers to identify the performance capabilities of a product separately from other protocols or physical characteristics in product names and marketing materials."
Various USB-IF standards
Subject: Storage | February 27, 2019 - 11:02 AM | Tim Verry
Tagged: UHS-I, uhs-1, sneakernet, smartphone, sandisk, microSD
SanDisk recently announced new microSDXC cards in 512GB and 1TB capacities that it claims are the fastest cards [soon to be] on the market. The SanDisk Extreme UHS-I microSD cards conform to the C10/V30/U3/A2 speed classes (only USB-IF naming is more confusing, heh) and are able to hit up to 160 MB/s read and 90 MB/s write speeds, reportedly thanks to proprietary flash from Western Digital (which owns SanDisk). The PR and product page do not go into detail on which flash is used, but it is likely some version of the company's 96-layer BiCS flash.
In addition to raw transfer speeds, the microSDXC UHS-I cards offer A2-class enhanced application performance with up to 4,000 read IOPS and 2,000 write IOPS. As a result, the cards allegedly support faster load times and random access for applications run from the microSD card (e.g. Android applications installed to the expansion card rather than internal storage).
According to the product page, the cards are rated for temperatures ranging from -13F to 185F (cold is much worse for flash memory than heat) when in use and down to -40F when not in use.
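For anyone thinking in Celsius, those Fahrenheit ratings convert to the round figures flash parts are usually specified at; a quick sanity check:

```python
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# The rated operating range maps to -25C to 85C, and the
# storage floor of -40F is the one point where both scales agree.
operating = (f_to_c(-13), f_to_c(185))   # (-25.0, 85.0)
storage_min = f_to_c(-40)                # -40.0
```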
It is impressive to see 1TB, and even 512GB, of storage available in such a small physical format when just a few years ago 64GB was considered large! Many smartphones do not (officially) support more than 256GB of expandable storage, though so long as the cards are formatted correctly these new cards should still work.
Brian Pridgeon, director of marketing for SanDisk-branded products at Western Digital, was quoted in the press release:
“People trust SanDisk-brand cards to capture and preserve their world. Our goal is to deliver the best possible experience so consumers can share the content that’s important to them,” said Brian Pridgeon, director of marketing for SanDisk-branded products, Western Digital.
4K UHD and soon enough 8K video recording on a smartphone or dedicated camera seems to be an obvious use case for these new higher capacity cards as well as the ability to sneakernet files and mail off data for offsite backups easily thanks to the tiny size and weight.
Note that at the maximum transfer speeds of 160 MB/s and 90 MB/s, a full card would take roughly 1.7 hours to copy from card to computer and roughly 3.1 hours to fill. Western Digital's SanDisk Extreme UHS-I is slightly faster than Micron's 1TB microSD card in reads, while the two are about even in writes, with Micron's microSDXC card hitting up to 100 MB/s reads and 95 MB/s writes.
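A quick back-of-the-envelope check of those transfer times, using the rated (ideal) speeds and the decimal terabyte that card makers advertise; real-world copies will run longer once filesystem and interface overhead get involved:

```python
def hours_to_transfer(capacity_tb, mb_per_s):
    """Ideal transfer time in hours, ignoring all overhead."""
    megabytes = capacity_tb * 1_000_000  # decimal TB, as storage vendors rate it
    return megabytes / mb_per_s / 3600

read_hours = hours_to_transfer(1, 160)   # ~1.74 h to copy the full card off
write_hours = hours_to_transfer(1, 90)   # ~3.09 h to fill it
```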
The increased storage space doesn’t come cheap though with MSRPs on the new micro SDXC cards being $199.99 for the 512GB UHS-I card and $499.99 for the 1TB model. SanDisk is offering the cards for pre-order on its website with wider retail availability expected April 2019.
Will you be picking up a 1TB microSD card? Personally, I’m still a ways away from filling up my 64GB mSD card though I do use Sync to copy my photos and videos off of my phone and regularly delete them from my phone. The wife might be able to make use of one of these high capacity cards since she’s constantly running out of space on her phone and needs to pay for cloud storage – if only she didn’t have an iPhone!
Subject: General Tech | February 26, 2019 - 02:04 PM | Jeremy Hellstrom
Tagged: rumour, nvidia, gtx 1660, gtx 1650
You could successfully argue that neither AMD nor NVIDIA has offered a lower end GPU for casual gaming and content consumption in the last generation. Rumours abound that NVIDIA will offer not one but two cards priced around the $200 mark to fill that niche: the GTX 1660 and GTX 1650. We have little information about them, though you can safely assume that they will perform at a lower level than the GTX 1660 Ti.
The launch dates for these cards, assuming they exist, are pegged for March 15th and April 30th, according to DigiTimes. According to one of our favourite leakers, TUM_APISAK, the GTX 1650 will sport 4GB of RAM and a core clock of 1,485MHz. The GTX 1660 remains a mystery.
"The sources said that Nvidia is slated to launch GTX 1660 on March 15 and GTX1650 on April 30, which will bear minimum price tags of US$229 and US$179, respectively."
Here is some more Tech News from around the web:
- 'You Do Not Need Blockchain: Eight Popular Use Cases And Why They Do Not Work' @ Slashdot
- Nokia 9: HMD Global hauls PureView out of brand limbo @ The Register
- Lipid nanotablet makes tiny biocomputer @ Physicsworld
- Fan boy 3: Huawei overhauls Air-a-like MateBooks @ The Register
- String of ions may out-compute best quantum computers @ Ars Technica
- Satya Nadella defends Microsoft's HoloLens military tie-up after workers revolt @ The Inquirer
- Need a 1TB microSD for your smartmobe? Come April, you can free up storage space in your wallet and buy one @ The Register
Subject: Graphics Cards | February 26, 2019 - 12:41 AM | Scott Michaud
Tagged: Khronos, Khronos Group, vulkan, vulkan sc, opengl, opengl sc
The Khronos Group, the industry body that maintains OpenGL, OpenCL, EGL, glTF, Vulkan, OpenXR, and several other standards, has announced the Vulkan Safety Critical (SC) Working Group at Embedded World Conference 2019. The goal is to create an API that leverages Vulkan’s graphics and compute capabilities in a way that implementations can be safe and secure enough for the strictest of industries, such as automotive, air, medical, and energy.
It's a safety hammer, I promise. (No I don't.)
The primary goal is graphics and compute, although the working group will also consider exposing other hardware capabilities, such as video encode and decode. These industries currently have access to graphics through OpenGL SC, although the latest release is still significantly behind what a GPU can do. To put it into perspective – the latest OpenGL SC 2.0 (which was released in 2016) has less functionality than the original release of WebGL back in 2011.
While OpenGL SC 2.0 allows programmable vertex and fragment (pixel) shaders, it falls short in many areas. Most importantly, OpenGL SC 2.0 does not allow compute shaders; Vulkan SC is aiming to promote the GPU into a coprocessor for each of these important industries.
There is not much else to report on at this point – the working group has been formed. A bunch of industry members have voiced their excitement about the new API’s potential, such as Codeplay, Arm, and NVIDIA. The obvious example application would be self-driving cars, although I’m personally interested in the medical industry. Is there any sort of instrument that could do significantly more if it had access to a parallel compute device?
If you are in a safety-critical enterprise, then look into joining the Khronos Group.
Subject: Graphics Cards, Processors | February 25, 2019 - 07:19 PM | Jeremy Hellstrom
Tagged: Adrenalin Edition, adrenaline 19.2.3, amd, ryzen, Vega
AMD's regular driver updates have a new trick up their sleeve: they now include drivers for AMD Ryzen APUs with a Vega GPU inside. Today's 19.2.3 release is the first to do so, and you can expect future releases to follow suit. This is a handy integration for AMD users; even if you have a discrete GPU installed, you can be sure your APU drivers are also up to date in case you need them. For many users this may mean your Hybrid APU + GPU combination will offer better performance than you have seen recently, with no extra effort required from you.
Along with support for Ryzen APUs, you will also see these changes:
- Up to 10% average performance gains with AMD Radeon Software Adrenalin 2019 Edition 19.2.3 vs. 17.40 launch drivers for AMD Ryzen Mobile Processors with Radeon Vega Graphics.
- Up to 17% average performance gains in eSports titles with AMD Radeon Software Adrenalin 2019 Edition 19.2.3 vs. 17.40 launch drivers for AMD Ryzen Mobile Processors with Radeon Vega Graphics.
- Up to 3% performance gains with AMD Radeon Software Adrenalin 2019 Edition 19.2.3 on a Radeon RX Vega 64 in Dirt Rally 2.
- Battlefield V players may experience character outlines stuck on screen after being revived.
- Fan speeds may remain elevated for longer periods than expected when using Tuning Control Auto Overclock or manual fan curve in Radeon WattMan on AMD Radeon VII.
- ReLive wireless VR may experience an application crash or hang during extended periods of play.
- Zero RPM will correctly disable in Radeon WattMan on available system configurations when manual fan curve is enabled.
- A loss of video may be intermittently experienced when launching a fullscreen player application with Radeon FreeSync enabled.
- Mouse lag or system slowdown is observed for extended periods of time with two or more displays connected and one display switched off.
- Changes made in Radeon WattMan settings via Radeon Overlay may sometimes not save or take effect once Radeon Overlay is closed.
- Some Mobile or Hybrid Graphics system configurations may intermittently experience green flicker when moving the mouse over YouTube videos in Chrome web browser.
- A workaround if this occurs is to disable hardware acceleration.
- Radeon WattMan settings changes may intermittently not apply on AMD Radeon VII.
- Performance metrics overlay and Radeon WattMan gauges may experience inaccurate fluctuating readings on AMD Radeon VII.
Subject: Motherboards | February 25, 2019 - 05:09 PM | Jeremy Hellstrom
Tagged: x299, asus, prime x299 deluxe II, Intel, LGA 2066
ASUS has refreshed its Prime X299 Deluxe motherboard to support new Intel chips such as the i9-9980XE, completely redesigning the power delivery and adding a few new features. This is the third board [H]ard|OCP has reviewed that uses the new "twin phase" design, as opposed to the phase doubling used on the vast majority of boards. Their results so far suggest the new design is at least as stable as the old standard, as you can see for yourself in their full review.
"The ASUS Prime X299 Deluxe II is an ultra-feature rich solution for today’s discerning computing enthusiast. It does not wear ROG branding, and is clad in white shrouding, letting you know that this is a PRIME motherboard. Is it a bargain priced motherboard? Nope. This one comes in at $500. Let's see if it is worth it."
Here are some more Motherboard articles from around the web:
- MSI MEG X299 Creation @ Guru of 3D
- MSI MEG X299 CREATION @ TechPowerUp
- MSI X299 Creation – HEDT, With an Extra Helping of Excess? @ Bjorn3d
- ASUS ROG STRIX Z390-E Gaming @ Overclockers Club
Subject: Mobile | February 25, 2019 - 03:45 PM | Jeremy Hellstrom
Tagged: gigabyte, Aero 15 X9, RTX 2070 Max-Q, Core i7-8750H, Intel, nvidia, gaming laptop, 144hz
Gigabyte's Aero 15 X9 has been upgraded to an RTX 2070 Max-Q to power the 144Hz 1080p screen, though if you have the money there is a model with an RTX 2080 Max-Q and a 4K display. Techspot reviewed the former, testing its performance against previous models and determining whether the laptop has enough power to provide a decent experience with DLSS or DXR enabled. They also discovered the laptop gets rather warm under load and that, thanks to the thin bezel, the camera has been moved to the bottom, providing a handy way to trim your nose hairs.
"Today we are reviewing the Gigabyte Aero 15 X9, the first Nvidia RTX laptop we tested and used for our RTX 2070 Max-Q feature earlier this month. It's a cool gaming laptop, pretty similar to the Aero 15X v8 we looked at last year, but with a few upgrades that we'll walk you through here."
Here are some more Mobile articles from around the web:
More Mobile Articles
- XPS 13 2019 review: One small move made Dell’s best laptop even better @ Ars Technica
- Lenovo ThinkPad X1 Extreme @ TechARP
- Lenovo ThinkPad P1: Sumptuous pro PC that gets a tad warm @ The Register
- Guidemaster: The least-awful Android phones @ Ars Technica
Subject: General Tech | February 25, 2019 - 12:48 PM | Jeremy Hellstrom
Tagged: vega 10, Sunny Cove, rumour, Iris Plus Graphics 940, Intel, ice lake
If you liked Jim's example of a bad chart on the podcast, you are going to love these leaked Intel Ice Lake graphics benchmarks. At their core, the claim is that the as-yet-unreleased Iris Plus Graphics 940 portion of the APU is faster than AMD's Vega 10, which was released in 2017. This should not shock anyone.
The numbers at The Inquirer show just how much salt you should take this with: the frequently posted 77.41% better performance comes from comparing a coming generation of GPU against a previous one, and it drops to about 44% when a specific test which favours Intel is excluded. Remember that AMD and Intel both have tests which favour their architectures, and keep that in mind when you are reading PR from either company.
When you compare Intel's scores to AMD's current Vega 11, the advantage drops to a hair under 2%, and Intel falls behind when you don't order a Manhattan.
"The incoming part, also referred to as the Iris Plus Graphics 940, is, on average, 77.41 per cent faster than Gen9 in the GFXBench 5.0 benchmark and around 62.97 per cent faster than AMD's Vega 10 graphics."
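As a back-of-the-envelope illustration of how a single favourable test can skew an average, here is a quick sketch with hypothetical per-test numbers (illustrative only, not the actual GFXBench results):

```python
# Hypothetical per-test uplift percentages, chosen so that one outlier
# test dominates the average (made-up data, not the real benchmarks).
gains = [210.0, 45.0, 50.0, 40.0, 42.0]

mean_all = sum(gains) / len(gains)

# Drop the single most favourable test and re-average.
trimmed = sorted(gains)[:-1]
mean_trimmed = sum(trimmed) / len(trimmed)

print(f"average with outlier:    {mean_all:.2f}%")      # 77.40%
print(f"average without outlier: {mean_trimmed:.2f}%")  # 44.25%
```

With these made-up numbers the headline figure nearly halves once the outlier is excluded, which is roughly the shape of the reported 77.41% → ~44% swing.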
Here is some more Tech News from around the web:
- Microsoft Announces HoloLens 2 Mixed Reality Headset For $3,500 @ Slashdot
- ZX Spectrum Vega+ 'backer'? Nope, you're now a creditor – and should probably act fast @ The Register
- SD Association Unveils microSD Express Format That Promises Transfer Speeds of Up To 985 MB/s @ Slashdot
- Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo @ The Register
- LG Announces G8 ThinQ Smartphone That Uses 'Advanced Palm Vein Authentication' Tech To Unlock @ Slashdot
- OnePlus 5G phone first look: Firm shows off Snapdragon 855 prototype @ Ars Technica
- You can now run Android on the Nintendo Switch (but you probably don't want to) @ The Inquirer
Subject: Graphics Cards | February 23, 2019 - 03:58 PM | Sebastian Peak
Tagged: zotac, video card, turing, nvidia, msi, gtx 1660 ti, graphics, gpu, gigabyte, geforce, gaming, evga, asus, amazon
NVIDIA partners launched their new GeForce GTX 1660 Ti graphics cards yesterday, and we checked out a pair of them in our review, finding the new TU116-based cards to offer excellent performance (and overclocking headroom) for the price. Looking over Amazon listings today, here is everything available so far, separated by board partner. We've added the Boost Clock speeds for your reference to show how these cards are clocked compared to the reference (1770 MHz), and purchases made through any of these Amazon affiliate links help us out with a small commission.
In any case, this list at least demonstrates the current retail picture of NVIDIA's new mainstream Turing GPU on Amazon, so without further preamble here are all currently available cards in alphabetical order by brand:
ASUS Phoenix GeForce GTX 1660 Ti OC
- Boost Clock: 1815 MHz
- $284.99 on Amazon.com
ASUS Dual GeForce GTX 1660 Ti OC
- Boost Clock: 1830 MHz
- $309.99 on Amazon.com
ASUS Strix Gaming GTX 1660 Ti OC
- Boost Clock: 1890 MHz
- $329.99 on Amazon.com
EVGA GeForce GTX 1660 Ti XC Black Gaming
- Boost Clock: 1770 MHz
- $279.99 on Amazon.com
EVGA GeForce GTX 1660 Ti XC Gaming
- Boost Clock: 1845 MHz
- $289.99 on Amazon.com
GIGABYTE GeForce GTX 1660 Ti OC 6G
- Boost Clock: 1800 MHz
- $279.99 on Amazon.com
GIGABYTE GeForce GTX 1660 Ti Windforce OC 6G
- Boost Clock: 1845 MHz
- $289.99 on Amazon.com
MSI GTX 1660 Ti VENTUS XS 6G OC
- Boost Clock: 1830 MHz
- $289.99 on Amazon.com
MSI GTX 1660 Ti ARMOR 6G OC
- Boost Clock: 1860 MHz
- $299.99 on Amazon.com
MSI GTX 1660 Ti GAMING X 6G
- Boost Clock: 1875 MHz
- $309.99 on Amazon.com
ZOTAC Gaming GeForce GTX 1660 Ti
- Boost Clock: 1770 MHz
- $279.99 on Amazon.com
Already we are seeing many cards offering factory overclocks, ranging from a small 30 MHz bump at $279.99 from GIGABYTE (GTX 1660 Ti OC 6G, 1800 MHz Boost Clock) to 100 MHz+ from the MSI GTX 1660 Ti GAMING X 6G (1875 MHz Boost Clock) we reviewed at $309.99.
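For reference, those factory overclocks can be expressed as percentages over NVIDIA's 1770 MHz reference boost clock; a minimal sketch using three of the listed cards:

```python
REFERENCE_BOOST_MHZ = 1770  # NVIDIA reference boost clock for the GTX 1660 Ti

# Boost clocks taken from the listings above (MHz)
cards = {
    "GIGABYTE GTX 1660 Ti OC 6G": 1800,
    "MSI GTX 1660 Ti GAMING X 6G": 1875,
    "ASUS Strix Gaming GTX 1660 Ti OC": 1890,
}

for name, boost in cards.items():
    delta = boost - REFERENCE_BOOST_MHZ
    pct = 100.0 * delta / REFERENCE_BOOST_MHZ
    print(f"{name}: +{delta} MHz ({pct:.1f}% over reference)")
```

Even the largest factory overclock in the list works out to under 7% over reference, which is why the real differentiators tend to be cooling and price.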
We will update the list as additional cards become available on Amazon.
Subject: General Tech | February 23, 2019 - 03:08 PM | Tim Verry
Tagged: TSMC, lithography, euv, asml, 7nm, 5nm
According to Hexus, chip manufacturing giant TSMC will begin mass production of its enhanced 7nm process node as soon as next month. The new "CLN7FF+, N7+" node incorporates limited use of EUV (extreme ultraviolet lithography) on four non-critical layers using specialized equipment from ASML to offer 20% higher transistor density and six to twelve percent lower power consumption at the same complexity/frequency. Those numbers are versus TSMC's current 7nm process node (CLN7FF, N7), which uses DUV (deep ultraviolet lithography) with ArF (Argon Fluoride) excimer lasers.
TSMC is reportedly buying up slightly more than half of ASML's production of EUV equipment for 2019, with the chip maker reserving 18 of the 30 EUV units that will ship this year. It will use the ASML Twinscan NXE step-and-scan machines to produce its enhanced 7nm node, allowing TSMC to familiarize itself with the technology and dial it in for its upcoming 5nm node (and beyond), which will incorporate EUV more heavily, using it on up to 14 layers of the 5nm manufacturing process. AnandTech reports that the 5nm EUV node will bring 1.8 times the transistor density (a 45% area reduction) of the non-EUV 7nm node along with either 20% less power usage or 15% more performance at the same chip complexity and frequency.
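The density-to-area arithmetic in that claim is easy to verify; a quick sketch using only the 1.8x figure quoted above:

```python
# TSMC's quoted scaling for 5nm (EUV) vs non-EUV 7nm: 1.8x transistor density.
density_scale = 1.8

# For the same transistor count, die area scales as the inverse of density.
area_fraction = 1.0 / density_scale
area_reduction_pct = (1.0 - area_fraction) * 100.0

print(f"same design occupies {area_fraction:.3f}x the area "
      f"(~{area_reduction_pct:.0f}% area reduction)")
```

That works out to roughly a 44% smaller die, which TSMC rounds up to the quoted 45% area reduction.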
Interestingly, while 7nm production accounted for roughly 9% of TSMC's output in 2018, it will reportedly be up to a quarter of all TSMC's chip shipments in 2019.
Mass production of the 7nm EUV node will begin as soon as March, with risk production of 5nm chips slated to begin in April and the first chip designs taped out within the first half of the year. Volume production of 5nm chips is not expected until the first half of 2020, however, though that would put it just in time for AMD's Zen 2+ architecture. Of course, AMD, Apple, HiSilicon, and Xilinx are TSMC's big customers for the current 7nm node (especially AMD, which is using TSMC for its 7nm CPU and GPU orders), and Huawei / HiSilicon may well be TSMC's first customer for the EUV-incorporating CLN7FF+, N7+ node.
With GlobalFoundries backing off of leading-edge process tech and shelving 7nm, Intel and Samsung are TSMC's competition in this extremely complicated and expensive space. 2020 and beyond are going to be very interesting as EUV production ramps up and is pushed as far as it can go, bringing process technologies as close to the theoretical limits as the market will bear. I think we still have a good while left for process shrinks, even if some of these ever-lower node numbers owe more to marketing than measurement (some features really are that small, depending on what and how each foundry measures its nodes). It is definitely going to get expensive, though, and I am curious who will continue on and carry the ball to the traditional manufacturing finish line, or whether R&D and other costs will become so unrealistic that even the big players need exotic materials or a paradigm shift in computing before we attempt to get there.
In talking with Josh Walrath, he clarified that EUV does not, by itself, offer performance enhancements, but it does cut down on exposures/patterning and reduces the steps where things can go wrong, which can lead to improved yields when implemented correctly. Using extreme ultraviolet lithography isn't a magic bullet though, as the fabrication equipment is expensive and uses a lot of power, driving up manufacturing costs. TSMC is using EUV on its N7+ node to get "tighter metal pitch" and more density along with lower power consumption. Performance improvements are still unknown at this point (to the public, anyway), but as Mr. Walrath said, performance isn't going to increase simply from moving to EUV. When moving to 5nm, TSMC does claim performance improvements, but most of those gains are likely attributable to the much higher density of the resulting chips. Getting yields up at that small a node is likely the biggest reason for utilizing EUV, to get enough usable dies per wafer. TSMC must believe that the cost of EUV is worth it versus trying to do 5nm without working EUV into the process. Stay tuned to this week's PC Perspective podcast if you are interested in additional thoughts from JoshTekk and the team (or check out our Discord server).
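Josh's point about fewer patterning steps improving yield can be sketched with a toy model; the per-step success rate and step counts below are hypothetical, not TSMC figures:

```python
# Toy yield model: if each patterning step succeeds with probability p,
# a die that needs n critical steps survives with probability p**n.
p_step = 0.995  # hypothetical per-exposure success rate

duv_steps = 4 * 4  # e.g. 4 critical layers, quad-patterned with DUV
euv_steps = 4 * 1  # the same 4 layers, single-exposure EUV

yield_duv = p_step ** duv_steps
yield_euv = p_step ** euv_steps

print(f"DUV multi-patterning yield factor: {yield_duv:.3f}")
print(f"EUV single-exposure yield factor:  {yield_euv:.3f}")
```

Even with a 99.5% per-step success rate, cutting sixteen exposures down to four meaningfully raises the fraction of good dies, which is the yield argument for EUV in a nutshell.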
What are your thoughts?