Subject: Processors, Mobile | September 2, 2018 - 11:45 AM | Sebastian Peak
Tagged: SoC, octa-core, mobile, Mali-G76, Kirin, Huawei, HiSilicon, gpu, cpu, Cortex-A76, arm, 8-core
Huawei has introduced the Kirin 980, the newest mobile processor from its subsidiary HiSilicon. In addition to Huawei's claim of the world's first commercial 7nm SoC, it is the first SoC to use Arm Cortex-A76 CPU cores and Arm's Mali-G76 GPU.
Huawei is aiming squarely at Qualcomm with this announcement, claiming better performance than the Snapdragon 845 during the presentation. One of the primary differences from the current Snapdragon is the composition of the Kirin 980's eight CPU cores: the usual 'big.LITTLE' Arm CPU core configuration for an octa-core design gives way to a revised organization with three groups, as illustrated by AnandTech here:
Of the four Cortex-A76 cores, just two are clocked up to 2.60 GHz to maximize performance in demanding applications such as gaming (and, likely, benchmarks), while the other two serve as more efficient performance cores at 1.92 GHz. The remaining four Cortex-A55 cores operate at 1.80 GHz and handle lower-performance tasks. A full breakdown of the CPU core configuration as well as slides from the event are available at AnandTech.
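The tri-cluster arrangement described above can be sketched as simple data, using the core counts and clocks from the announcement (a minimal illustration only; HiSilicon's actual scheduler and DVFS policy are not public and are not shown here):

```python
# Kirin 980 CPU core groups as described in the announcement.
# Roles are paraphrased from the presentation; this is illustrative data,
# not HiSilicon's real scheduling configuration.
clusters = [
    {"cores": 2, "type": "Cortex-A76", "clock_ghz": 2.60, "role": "peak performance"},
    {"cores": 2, "type": "Cortex-A76", "clock_ghz": 1.92, "role": "efficient performance"},
    {"cores": 4, "type": "Cortex-A55", "clock_ghz": 1.80, "role": "low-power tasks"},
]

total_cores = sum(c["cores"] for c in clusters)
print(total_cores)  # 8, the octa-core total
```

The 2+2+4 split is what distinguishes this from the traditional two-group big.LITTLE layout, where all four big cores would share one clock domain.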
Huawei claims that the improved CPU in the Kirin 980 is "75 percent more powerful and 58 percent more efficient compared to their previous generation" (the Kirin 970). This translates, according to Huawei, into 37% better performance and 32% greater efficiency than Qualcomm's Snapdragon 845.
The GPU also gets a much-needed lift this year from Arm's latest GPU, the Mali-G76, which features "new, wider execution engines with double the number of lanes" and "provides dramatic uplifts in both performance and efficiency for complex graphics and Machine Learning (ML) workloads", according to Arm.
Real-world testing with shipping handsets is needed to verify Huawei's performance claims, of course. In fact, the results shown by Huawei at the presentation carry this disclaimer, sourced from today's press release:
"The specifications of Kirin 980 does not represent the specifications of the phone using this chip. All data and benchmark results are based on internal testing. Results may vary in different environments."
The upcoming Mate 20 from Huawei will be powered by the new Kirin 980, and could well demonstrate the full potential of the chip; it is set for an official launch on October 16.
The full press release is available after the break.
Subject: Graphics Cards | August 20, 2018 - 12:15 PM | Sebastian Peak
Tagged: video card, RTX 2080 Ti, RTX 2080, nvidia, newegg, graphics, gpu, geforce
Newegg has listed NVIDIA GeForce RTX cards ahead of a probable announcement at today's "BeForTheGame" event in Germany, apparently confirming the rumors about the existence of these two GPUs. Both RTX 2080 and RTX 2080 Ti cards are featured on this Newegg promo page:
Clearly this went live a bit early (none of the linked RTX products bring up a valid page yet), as NVIDIA's announcement has yet to take place, though live coverage continues now on NVIDIA's Twitch channel.
Subject: Graphics Cards | August 20, 2018 - 11:30 AM | Sebastian Peak
Tagged: video card, nvidia, live stream, graphics, gpu, announcement
The wait (and endless speculation) is nearly over, as NVIDIA will be hosting their "BeForTheGame" event with probable product announcements at noon eastern today, and it will be streamed live on the company's Twitch channel.
You can watch the event right here:
Will there be new GeForce cards? Is it GTX or RTX? Were the rumors true or totally off-base? There is only one way to find out! (And of course we will cover any news stories emerging from this event, so stay tuned!)
Subject: Graphics Cards | August 17, 2018 - 02:59 PM | Sebastian Peak
Tagged: VideoCardz, video card, rumor, RTX 2080 Ti, RTX 2080, report, pcb, nvidia, leak, graphics, gpu
The staff at VideoCardz.com have been very busy of late, posting various articles on rumored NVIDIA graphics cards expected to be revealed this month. Today in particular we are seeing more (and more) information and imagery concerning what seems assured to be RTX 2080 branding, and somewhat surprising is the rumor that the RTX 2080 Ti will launch simultaneously (with a reported 4352 CUDA cores, no less).
Reported images of MSI GAMING X TRIO variants of RTX 2080/2080 Ti (via VideoCardz)
From the reported product images one thing in particular stands out: memory for each card appears unchanged from the current GTX 1080 and 1080 Ti cards, at 8GB and 11GB, respectively (though a move from GDDR5X to GDDR6 has also been rumored/reported).
Even (reported) PCB images are online, with this TU104-400-A1 quality sample pictured on Chiphell via VideoCardz.com:
The TU104-400-A1 pictured is presumed to be the RTX 2080 GPU (Chiphell via VideoCardz)
Other product images from AIB partners (PALIT and Gigabyte) were recently posted over at VideoCardz.com if you care to take a look, and as we near a likely announcement it looks like the (reported) leaks will keep on coming.
Subject: Graphics Cards | June 12, 2018 - 02:21 PM | Ryan Shrout
Tagged: Intel, graphics, gpu, raja koduri
Intel CEO Brian Krzanich disclosed during an analyst event last week that it will have its first discrete graphics chips available in 2020. This will mark the beginning of the chip giant’s journey towards a portfolio of high-performance graphics products for various markets including gaming, data center, and AI.
Some previous rumors posited that CES 2019 this coming January might be where Intel makes its graphics reveal, but that timeline was never adopted by Intel. It would have been drastically overaggressive and in no way consistent with the development process of a new silicon design.
Back in November 2017 Intel brought on board Raja Koduri to lead the graphics and compute initiatives inside the company. Koduri was previously in charge of the graphics division at AMD, helping to develop and grow the Radeon brand, and his departure to Intel was thought to have significant impact on the industry.
A typical graphics architecture and chip development cycle is three years for complex design, so even hitting the 2020 window with engineering talent is aggressive.
Intel did not go into detail about what performance level or target market this first discrete GPU solution might address, but Intel EVP of the Data Center Group Navin Shenoy confirmed that the company’s strategy will include solutions for data center segments (think AI, machine learning) along with client (think gaming, professional development).
This is a part of the wider scale AI and machine learning strategy for Intel, that includes these discrete graphics chip products in addition to other options like the Xeon processor family, FPGAs from its acquisition of Altera, and custom AI chips like the Nervana-based NNP.
While the leader in the space, NVIDIA, maintains its position with graphics chips, it is modifying and augmenting these processors with additional features and systems to accelerate AI even more. It will be interesting to see how Intel plans to catch up in design and deployment.
Though few doubt Intel's capability in chip design, building a new GPU architecture from the ground up is no small task. Intel needs to deliver performance and efficiency in the same ballpark as NVIDIA and AMD, within 20% or so. Doing that on the first attempt, while also building and fostering the necessary software ecosystem and tools around the new hardware, is a tough ask of any company, Silicon Valley juggernaut or not. Until we can gauge the first options when they arrive in 2020, NVIDIA and AMD hold the leadership positions.
Both AMD and NVIDIA will be watching Intel with great interest as GPU development accelerates. AMD’s Forest Norrod, SVP of its data center group, recently stated in an interview that he didn’t expect Koduri at Intel to “have any impact at Intel for at least another three years.” If Intel can deliver on its 2020 target for the first in a series of graphics releases, it might put pressure on these two existing graphics giants sooner than most expected.
Subject: Graphics Cards | June 5, 2018 - 11:58 PM | Tim Verry
Tagged: Vega, machine learning, instinct, HBM2, gpu, computex 2018, computex, amd, 7nm
AMD showed off its first 7nm GPU in the form of the expected Radeon Instinct Vega graphics product, a 7nm Vega GPU with 32GB of HBM2 memory. The new GPU uses the Vega architecture along with the open source ecosystem built by AMD to enable both graphics and GPGPU workloads. AMD demonstrated the 7nm Vega GPU ray tracing in a cool demo that showed realistic reflections and shadows being rendered on a per-pixel basis in a model. Granted, we are still a long way away from seeing that kind of detail in real-time gaming, but it is still cool to see glimpses of that ray-traced future.
According to AMD, the 32GB of HBM2 memory will greatly benefit creators and enterprise clients who need to work with large datasets and quickly make changes and updates to models before doing a final render. The larger memory buffer will also help in HPC applications, with more big-data databases able to be kept close to the GPU for processing over the wide HBM2 memory bus. Further, HBM2 has physical size and energy efficiency benefits that will pique the interest of datacenters focused on maximizing TCO numbers.
Dr. Lisa Su came on stage towards the end of the 7nm Vega demonstration to show off the GPU in person, and you can see that it is rather tiny for the compute power it provides! It is shorter than the two stacks of HBM2 dies on either side, for example.
Of course AMD did not disclose all the nitty-gritty specifications of the new machine learning graphics card that enthusiasts want to know. We will have to wait a bit longer for that information unfortunately!
As for other 7nm offerings? As Ryan talked about during CES in January, 2018 will primarily be the year of the machine learning-focused Radeon Instinct Vega 7nm GPU, with consumer-focused GPUs on the smaller process node likely coming in 2019. Whether those 7nm GPUs in 2019 will be a refreshed Vega or the new Navi is still up for debate, though AMD's graphics roadmap certainly doesn't rule out Navi as a possibility. In any case, AMD did state during the livestream that it intends to release a new GPU every year, alternating between new architectures and new process nodes.
What are your thoughts on AMD's graphics roadmap and its first 7nm Vega GPU?
Subject: Graphics Cards | May 9, 2018 - 12:23 PM | Sebastian Peak
Tagged: video card, pricing, msrp, mining, GTX 1080, gtx 1070, gtx 1060, gtx, graphics, gpu, gaming, crypto
The wait for in-stock NVIDIA graphics cards without inflated price tags seems to be over. Yes, in the wake of months of crypto-fueled disappointment for gamers the much anticipated, long-awaited return of graphics cards at (gasp) MSRP prices is at hand. NVIDIA has now listed most of their GTX lineup as in-stock (with a limit of 2) at normal MSRPs, with the only exception being the GTX 1080 Ti (still out of stock). The lead time from NVIDIA is one week, but worth it for those interested in the lower prices and 'Founders Edition' coolers.
Many other GTX 10 Series options are to be found online at near-MSRP pricing, though as before many of the aftermarket designs command a premium, with factory overclocks and proprietary cooler designs to help justify the added cost. Even Amazon - previously home to some of the most outrageous price-gouging from third-party sellers in months past - has cards at list pricing, which seems to solidify a return to GPU normalcy.
The GTX 1080 inches closer to standard pricing once again on Amazon
Some of the current offers include:
GTX 1070 cards continue to have the highest premium outside of NVIDIA's store, with the lowest current pricing on Newegg or Amazon at $469.99. Still, the overall return to near-MSRP pricing around the web is good news for gamers who have been forced to play second (or third) fiddle to cryptomining "entrepreneurs" for several months now; a disturbing era in which pre-built gaming systems from Alienware and others actually presented a better value than DIY builds.
Subject: General Tech | April 30, 2018 - 01:49 PM | Jeremy Hellstrom
Tagged: gpu, cryptocurrency, msrp
GPU prices do seem to have begun trending down recently, heading towards their MSRPs, though with a ways to go yet. There is no definitive reason for the good news, but there are certainly some obvious factors, of which cryptocurrency is the most glaring. With more mature currencies close to being mined out, and designed to run better on ASICs anyway, that part of the crowd is no longer buying GPUs. The more recent altcoins which were designed for GPUs are having some minor trust issues at the moment, and ASICs are being designed for those currencies as well, which also takes the heat off.
Also consider the maturity of both Vega and Pascal: they have been out long enough for supply to catch up with demand, and in theory there are new products coming soon from both vendors, which generally means they are looking to clear existing stock. Slashdot and other sites offer a variety of theories, but no matter how you slice it, this is good news.
"If you were looking for a new graphics card for your PC over the last year, your search probably ended with you giving up and slinging some cusses at cryptocurrency miners. But now the supply of video cards is on the verge of rebounding, and I don't think you should wait much longer to pull the trigger on a purchase."
Here is some more Tech News from around the web:
- Guru3D Rig of the Month - April 2018
- USB 3.2 Work Is On The Way For The Linux 4.18 Kernel: Report @ Slashdot
- Microsoft's Office 2019 is now available in preview for Windows 10 users @ The Inquirer
- Core Memory Upgrade for Arduino @ Hack a Day
- Huawei developing own Android and Windows 10 alternatives amid US tensions @ The Inquirer
- Windows USB-stick-of-death, router bugs resurrected, and more @ The Register
- Hard Drive Gives Its Life to Cool 3D Prints @ Hack a Day
- Paperback writer? Microsoft slaps patents on book-style gadgetry with flexible display @ The Register
Subject: General Tech | December 20, 2017 - 02:22 PM | Jeremy Hellstrom
Tagged: krzanich, Intel, cpu, gpu
Intel's recent releases have not been terribly exciting; the chips offer small incremental efficiency improvements each generation, with a few enterprise-level features added which do not benefit enthusiasts. The majority of the changes have been to the sockets, something many feel Intel could have left alone. This may change, according to a memo from CEO Brian Krzanich, who describes the coming years at Intel as akin to the mid-eighties, when the company risked everything to move from producing memory to producing CPUs and other chips. This change will not focus solely on CPUs; he intends to capitalize on the IP Intel has acquired recently as well as branching into the mysterious GPU project which has been announced. Drop by Slashdot for links.
"'Anything that produces data, anything that requires a lot of computing, the vision is, we're there.' The memo also underscores the dramatic change in the nature of Intel's business as it approaches its 50th anniversary in July 2018. 'We're just inches away from being a 50/50 company, meaning that half our revenue comes from the PC and half from new growth markets,' Krzanich wrote."
Here is some more Tech News from around the web:
- Android trojan has miner so aggressive it can bork your battery @ The Register
- Last Minute Shopping? Let the INQ Christmas Gift Guide 2017 help... @ The Inquirer
- Ubuntu 17.10 Temporarily Pulled Due To A BIOS Corrupting Problem @ Slashdot
- Facebook flashes ramped-up face-recog tech. Try not to freak out @ The Register
- Assassin's Creed IV : Black Flag Is FREE For A Limited Time! @ TechARP
Subject: Graphics Cards | December 7, 2017 - 11:44 PM | Jim Tanous
Tagged: Volta, titan, nvidia, graphics card, gpu
NVIDIA made a surprising move late Thursday with the simultaneous announcement and launch of the Titan V, the first consumer/prosumer graphics card based on the Volta architecture.
Like recent flagship Titan-branded cards, the Titan V will be available exclusively from NVIDIA for $2,999. Labeled "the most powerful graphics card ever created for the PC," Titan V sports 12GB of HBM2 memory, 5120 CUDA cores, and a 1455MHz boost clock, giving the card 110 teraflops of maximum compute performance. Check out the full specs below:
6 Graphics Processing Clusters
80 Streaming Multiprocessors
5120 CUDA Cores (single precision)
320 Texture Units
640 Tensor Cores
1200 MHz Base Clock
1455 MHz Boost Clock
850 MHz Memory Clock
1.7 Gbps Memory Data Rate
4608 KB L2 Cache Size
12288 MB HBM2 Total Video Memory
3072-bit Memory Interface
652.8 GB/s Total Memory Bandwidth
384 GigaTexels/sec Texture Rate (Bilinear)
12 nm Fabrication Process (TSMC 12nm FFN High Performance)
21.1 Billion Transistor Count
3 x DisplayPort, 1 x HDMI Connectors
Dual Slot Form Factor
One 6-pin, One 8-pin Power Connectors
600 Watts Recommended Power Supply
250 Watts Thermal Design Power (TDP)
The NVIDIA Titan V's 110 teraflops of compute performance compares to a maximum of about 12 teraflops on the Titan Xp, a greater than 9X increase in a single generation. Note that this is a very specific claim, however: it references the AI compute capability of the Tensor cores rather than what we traditionally measure for GPUs (single-precision FLOPS). By that metric, the Titan V offers a jump to only about 14 TFLOPS. The addition of expensive HBM2 memory also adds to the high price compared to its predecessor.
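The headline numbers in the spec list above can be sanity-checked with some quick back-of-the-envelope arithmetic (my own figures, derived from the listed specs rather than from NVIDIA's marketing):

```python
# Memory bandwidth: interface width in bytes times per-pin data rate.
interface_bits = 3072
data_rate_gbps = 1.7
bandwidth_gbs = interface_bits / 8 * data_rate_gbps
print(bandwidth_gbs)  # 652.8 GB/s, matching the listed total bandwidth

# Single-precision throughput: CUDA cores x 2 ops per clock (fused
# multiply-add) x boost clock in GHz, expressed in TFLOPS.
cuda_cores = 5120
boost_ghz = 1.455
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000
print(round(fp32_tflops, 1))  # ~14.9 TFLOPS at the full boost clock
```

The ~14.9 TFLOPS figure assumes the card sustains its 1455 MHz boost clock across all cores, which is why quoted single-precision numbers for the Titan V land around 14 TFLOPS; the 110 TFLOPS headline figure comes from the Tensor cores instead.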
The Titan V is available now from NVIDIA.com for $2,999, with a limit of 2 per customer. And hey, there's free shipping too.