Someone, who wasn’t Intel, seeded Tom’s Hardware an Intel Core i7-7700K, which is expected to launch in the new year. This is the top end of the mainstream SKUs, bringing four cores (eight threads) at a 4.2 GHz base and 4.5 GHz boost. Using a motherboard built around the Z170 chipset, they were able to clock the CPU up to 4.8 GHz, which is a little over 4% higher than the Skylake-based Core i7-6700K’s maximum overclock on the same board.
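For those checking the math, the quoted gain works out as below. Note that the ~4.6 GHz 6700K ceiling is inferred from the "a little over 4%" figure, not a number stated directly in the review:

```python
# Quick sanity check on the quoted overclocking gain.
# The ~4.6 GHz 6700K ceiling is inferred from the "a little over 4%"
# claim, not stated directly in the review.
kaby_oc = 4.8   # i7-7700K max stable overclock on Z170 (GHz)
sky_oc = 4.6    # i7-6700K max overclock on the same board (inferred, GHz)

gain_pct = (kaby_oc / sky_oc - 1) * 100
print(f"Overclock gain: {gain_pct:.1f}%")  # ~4.3%, i.e. "a little over 4%"
```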
Image Credit: Tom's Hardware
Lucky number i7-77.
Before we continue, these results are based on a single sample. (Update: @7:01pm — Also, the motherboard they used has some known overclock and stability issues. They mentioned it a bit in the post, like why their BCLK is 99.65MHz, but I forgot to highlight it here. Thankfully, Allyn caught it in the first ten minutes.) This sample has retail branding, but Intel would not confirm that it performs like they expect a retail SKU would. Normally, pre-release products are labeled as such, but there’s no way to tell if this one part is some exception. Beyond concerns that it might be slightly different from what consumers will eventually receive, there is also a huge variation in overclocking performance due to binning. With a sample size of one, we cannot tell whether this chip has an abnormally high, or an abnormally low, defect count, which affects both power and maximum frequency.
That aside, if this chip is representative of Kaby Lake performance, users should expect an increase in headroom for clock rates, but it will come at the cost of increased power consumption. In fact, Tom’s Hardware states that the chip “acts like an overclocked i7-6700K”. Based on this, it seems like, unless they want an extra 4 PCIe lanes on Z270, Kaby Lake’s performance might already be achievable for users with a lucky Skylake.
I should note that Tom’s Hardware didn’t benchmark the iGPU. I don’t really see it used for much more than video encoding anyway, but it would be nice to see if Intel improved in that area, seeing as how they incremented the model number. Then again, even users who are concerned about that will probably be better off just adding a second, discrete GPU anyway.
If these numbers hold up
If these numbers hold up (IMHO the stock voltage might be too high, but WDIK), this feels very late-stage CuMine to me.
Juicing a tired (not bad, just close to its limit) core for its last few extra MHz. And no .13µ equivalent in sight that could add another 30% in headroom.
At least this chip won’t fail
At least this chip won’t fail on a linux kernel compile test 🙂 Intel did get cumine to 1.10 stably but never back to 1.13.
4%… *giggle*
On the other
4%… *giggle*
On the other hand, I’m wondering how much further a 14nm process can be scaled down when you consider that a single silicon atom has a diameter of about 0.24nm. That’s only about 58 atoms!
It’s worse than that. Silicon
It's worse than that. In a solid, the relevant interval is the silicon lattice constant, which is ~0.543nm at room temperature (~26 lattice cells across 14nm), since solids exist as a crystal structure.
This is taking feature size as literal, though, which is progressing toward a meaningless marketing term.
The real minimum feature size
The real minimum feature size for Intel’s “14nm” is 42nm. Still quite small, but…
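Putting this sub-thread's figures side by side, a quick back-of-the-envelope check (using the 0.24nm atomic diameter, 0.543nm lattice constant, and 42nm minimum feature size quoted above):

```python
# Back-of-the-envelope: how many silicon atoms / lattice cells fit
# across a given feature size. All figures come from the thread above.
ATOM_DIAMETER_NM = 0.24      # approx. diameter of a silicon atom
LATTICE_CONSTANT_NM = 0.543  # silicon lattice constant at room temperature

for feature_nm in (14, 42):
    atoms = feature_nm / ATOM_DIAMETER_NM
    cells = feature_nm / LATTICE_CONSTANT_NM
    print(f"{feature_nm}nm: ~{atoms:.0f} atom diameters, "
          f"~{cells:.0f} lattice constants")
```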
4.20GHz… brah
4.20GHz… brah
Seems Intel knows how bad AMD
Seems Intel knows how bad AMD Zen actually is. Intel’s new model is: Tick – Relax – Tock – Relax.
Zen is not supposed to be
Zen is not supposed to be faster than Intel’s latest offerings on that single-core IPC metric. Zen is only expected to get close and win sales with a lower price, so it’s the price/performance metric that Zen will compete on. It’s that, and when the Zen interposer-based APUs get here, AMD’s graphics IP will beat Intel’s, seeing as AMD will also be developing APUs on an interposer with way more than just 8 GPU compute units.
So picture an APU-on-an-interposer design with a single CCX of 4 full-fat Zen cores on one die, where the cores share only a large L3 cache and nothing else, each Zen core having its own compute resources and SMT for 2 processor threads per core. That interposer-based Zen die, with its 1 CCX (4 full Zen cores) wired up via the interposer to a 16-CU Vega-based GPU die and HBM2, will have plenty of graphics performance that Intel will be hard pressed to match.
AMD only stated that a Zen core’s performance would be around 40% higher on the single-threaded IPC metric than an Excavator core’s, so core for core, Zen gets around 40% more IPC than Excavator. An Excavator core does not have any SMT ability, but a Zen core does (2 processor threads per core), so Zen will get somewhat better utilization of its core execution resources with SMT. Intel’s version of SMT (Hyper-Threading) gets 15–30% more performance relative to Intel’s non-SMT cores, because the SMT hardware extracts extra performance from otherwise underutilized CPU core execution resources.
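Taken at face value, those percentages combine roughly as follows; this is purely illustrative, using the claimed 40% uplift and the 15–30% SMT range as inputs rather than measured numbers:

```python
# Rough throughput model from the figures quoted above (illustrative only,
# not measured data).
excavator_ipc = 1.00             # baseline: one Excavator core, one thread
zen_ipc = excavator_ipc * 1.40   # claimed ~40% IPC uplift over Excavator

# SMT is claimed to add 15-30% throughput when both threads are loaded.
zen_two_thread_low = zen_ipc * 1.15
zen_two_thread_high = zen_ipc * 1.30

print(f"Zen, 1 thread:  {zen_ipc:.2f}x Excavator")
print(f"Zen, 2 threads: {zen_two_thread_low:.2f}x - {zen_two_thread_high:.2f}x")
```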
Modern gaming and the DX12/Vulkan graphics APIs will be able to make use of all of a CPU’s cores, and the new APIs will also accelerate more non-graphics gaming computation on the GPU for additional compute performance, in addition to the standard graphics done on the GPU. Intel has not improved its graphics IP, and once AMD begins making interposer-based APUs with more than 8 GCN GPU CUs, Intel will not be able to compete without a whole lot more investment in its SoC-based graphics IP.
AMD line of thought 10 years
AMD’s line of thought 10 years ago (the time of the ATI acquisition): their CPUs and GPUs were not really suitable for notebooks, so combine the two in an APU and you have the least desirable part possible.
It proved infeasible for AMD to make a high-end APU (the CPU alone was 220W, remember?).
Low end means low margins, and it also means they need to compete with the used-parts market.
Interposer-based APUs would have the same disadvantages.
To have any advantage, AMD would have to change the PC architecture so the CPU is separate but runs off GPU memory. Intel will allow (support) such a change only when they are ready (as in the x64 case).
10 years ago is not now, so
10 years ago is not now, so why the fixation on the past? Graphics APIs and the whole gaming industry are different now than they were 10 years ago. AMD APUs have been a very affordable solution for some, and AMD’s APU graphics has always been better than Intel’s for the lower price that AMD charges for its APU-based graphics. Intel really never offers its best graphics on its “affordable” SoC SKUs.
AMD’s interposer-based Zen/Vega APUs will have way more graphics power than anything Intel can manage. Intel has much lower shader core counts, clocked much higher to make up for the deficiency relative to AMD’s or even Nvidia’s APU/SoC-based graphics. Intel’s higher clocks on its integrated graphics are not very power efficient, hence Intel’s reluctance to greatly improve its graphics offerings on a yearly basis. For gaming, Intel’s graphics is a well-known joke among gamers.
At 14nm AMD will be able to offer much better APU SKUs with Zen CPU cores and Vega graphics with more than the current 8 GPU CUs to go along with the SMT-enabled full-fat Zen cores. So there will probably be plenty of room for interposer-based laptop APU SKUs that provide 16+ Vega CUs rather than 8, with AMD able to offer HBM2 bandwidth to keep those 16 Vega compute units from ever being starved. Intel will have to seriously look at a solution that can provide as much bandwidth as HBM2, which will be able to supply up to 8GB of fast high-bandwidth RAM per stack.
Laptop OEMs will very much like the overall space and power savings of AMD’s interposer-based APUs, which will supply enough graphics power to remove the need for any discrete mobile GPU and the extra motherboard complexity and cost that goes along with one. Laptop cooling solutions will only have to cover the interposer unit, and motherboard DIMM options may not be needed if the APU can offer at least 32GB of HBM2.
Lower-cost APU solutions could get by with a single channel to DIMM-based DRAM plus a single HBM2 stack acting as a faster tier of memory to keep the GPU’s bandwidth needs met. So maybe some lower-cost Zen/Vega APU solutions with a single 8GB HBM2 stack would give the Zen/Vega CPU/GPU cores plenty of high-bandwidth memory while offering a single DIMM channel to more, slower secondary RAM. It’s not hard to design an APU’s memory controller to treat the HBM2 as an L4 cache and keep the GPU and CPU working from the much faster HBM2 memory most of the time.
It would not be difficult for a single 8GB HBM2 stack to front 3 to 4 times as much slower DIMM-based DRAM, allowing an interposer-based APU to run from the faster HBM2 while managing DRAM-to-HBM2 transfers in the background, so the APU’s CPU/GPU rarely has to run directly from the slower DIMM-based DRAM. And remember, the 4 Zen cores in a complex (CCX) will mostly be operating from the CCX’s own large shared L3 cache, so the latency of any access outside the CCX can be hidden, keeping those Zen cores working from L3 before ever needing a direct access to HBM2 or the slower DIMM-based memory. The memory controller on any Zen/Vega APU SKU will be able to leverage HBM2 very effectively and keep the latency of slower memory hidden.
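The HBM2-as-L4 idea described above is essentially page caching: hot pages live in the fast HBM2 tier, cold pages spill to DIMM DRAM and migrate back on demand. A toy sketch of the policy (entirely hypothetical; real memory controllers do this in hardware on cache lines, and the class and method names here are invented for illustration):

```python
from collections import OrderedDict

# Toy model of HBM2 used as an LRU cache in front of larger, slower
# DIMM DRAM. Entirely illustrative -- a real APU memory controller
# works on pages/cache lines in hardware, not Python dicts.
class TieredMemory:
    def __init__(self, hbm_pages):
        self.hbm_pages = hbm_pages  # capacity of the fast tier (pages)
        self.hbm = OrderedDict()    # page -> data, kept in LRU order
        self.dram = {}              # the larger, slower tier

    def access(self, page):
        if page in self.hbm:        # hit in the fast tier
            self.hbm.move_to_end(page)
            return "hbm"
        # Miss: migrate the page up, evicting the least-recently-used
        # page back to DRAM if the HBM tier is full.
        if len(self.hbm) >= self.hbm_pages:
            victim, data = self.hbm.popitem(last=False)
            self.dram[victim] = data
        self.hbm[page] = self.dram.pop(page, None)
        return "dram"

mem = TieredMemory(hbm_pages=2)
print([mem.access(p) for p in [1, 2, 1, 3, 1, 2]])
# -> ['dram', 'dram', 'hbm', 'dram', 'hbm', 'dram']
```

With enough HBM2 relative to the working set, most accesses land in the fast tier, which is the whole argument being made above.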
Is that you, Dale?
Is that you, Dale?
No not Dale, or Chip for that
No, not Dale, or Chip for that matter! Intel’s dog-food graphics: you get it even if you don’t plan on ever using it for high-end gaming. Most high-end gamers would rather have the CPU cores sans the dog-food graphics!
Maybe with DX12’s/Vulkan’s explicit GPU multi-adaptor, Intel “graphics” can be a little more useful for some non-graphics gaming computational uses, but those lower SP counts on Intel’s graphics do not add up to as many total FP FLOPS relative to Nvidia’s and AMD’s integrated graphics. Intel’s graphics has to be clocked higher to get anywhere near Nvidia’s and AMD’s SoC/APU products on the FLOPS metric.
Too bad, your ideas reminded
Too bad, your ideas reminded me a lot of Dale. Wonder what he’s doing these days, if he’s still around.
“AMD’s APU graphics has
“AMD’s APU graphics has always been better than Intel’s”
Correct me if I am wrong, but since Sandy Bridge, Intel’s multimedia engine has had higher throughput (decoding h.264 1080p 60Hz and more) than AMD APUs.
“At 14nm AMD will be able to offer much better APU SKUs with Zen CPU cores and Vega graphics with more than the current 8 GPU CUs”
APUs are a small niche, and one with an RX 460 built in changes nothing. Most PC gaming sites recommend at least an RX 470 today. 16 CUs was good in 2013.
“Laptop OEM’s …”
OEMs do not want hotspots and want thin cooling, which is easier with a separate CPU and GPU. An AMD GPU hardly made it into the MacBook Pro this year, and even the old GTX 965M in the Surface Book edges it out.
“Correct me if I am wrong but
“Correct me if I am wrong but since Sandy Bridge Intel’s multimedia engine has higher throughput (decoding h.264 1080p 60Hz and more) then AMD APUs.”
Decoding functional blocks for video content are not shader processors for graphics/gaming usage, and both Nvidia and AMD have better graphics (more SPs) than Intel. So it’s not about AMD vs Nvidia; it’s about Intel charging too much for its overpriced “graphics” and not even offering its best graphics on its lower-cost SoC SKUs. Also, AMD’s and Nvidia’s GPU shader cores can be used to accelerate decoding! AMD and Nvidia have much higher SP counts on their respective APU/SoC graphics than Intel offers.
OEMs pushing that Intel-financed (contra revenue) ultrabook thin-and-light crap have even caused Nvidia to throttle its mobile offerings; read up on Nvidia removing some overclocking functionality from its mobile laptop SKUs. With Intel paying to offset advertising (Ultrabook) for the laptop OEMs, Intel is indirectly paying the laptop OEMs bribes, and ditto for Nvidia and its large cash reserves doing the same.
Intel’s SoCs cannot handle high-end gaming without a little help from Nvidia or AMD. And AMD’s interposer-based APUs will eat Intel’s lunch for gaming graphics and video decoding! Even if a Zen/Vega APU has just one HBM2 stack, that adds up to a lot more bandwidth for the integrated AMD graphics, even with only a single channel to a larger amount of slower DIMM-based DDR3/DDR4 DRAM. A single HBM2 stack (8GB) used like an L4 cache can make 8/12 gigs of slower single-channel DDR3/DDR4 DRAM a non-issue, with the integrated Vega GPU working from HBM2.
Stop with your bait-and-switch tactics. Intel’s graphics is dog food for any real gaming, and Intel charges dearly for even its non-“high end” SoC graphics options. Intel’s high-end SoC graphics appears to be mostly for show, because Intel never offers it in its low-cost SKUs.
It is sad but AMD GPUs and
It is sad but AMD GPUs and occasionally CPUs are just lots of unrealized potential. What exactly is that AMD excels at? Nothing? I thought so.
AMD matters in the
AMD matters on the price/performance metric, not in your need to have the absolute fastest at too high a cost and a terrible price/performance ratio, all because of that illogical need for a winner at any cost to your wallet.
AMD excels at bringing new, innovative technology to market at an affordable price point! HBM/HBM2 is AMD and SK Hynix working for years to develop the HBM standard and make it a JEDEC standard that even Nvidia is using. AMD’s Zen products do not have to beat Intel on that over-emphasized single-core IPC metric; Zen just has to get close. And the only reason Zen currently has to get its IPC close for gaming workloads is some older games that are not engineered to use more than one CPU core or the new DX12/Vulkan APIs, which allow better multi-core usage and better non-graphics gaming compute accelerated on AMD’s GPUs.
AMD’s GPUs really shine for games making use of DX12/Vulkan to run more of the game’s non-graphics gaming compute on the GPU, in addition to the graphics, with lower latency. AMD’s GCN GPUs have, fully in hardware, what is needed to manage GPU asynchronous compute without relying on slower software methods that cannot respond quickly enough to manage GPU shader processor execution resources efficiently. Most gamers are not using Intel’s most costly CPU/SoC SKUs because they simply cannot afford to be price gouged, so most are currently using Intel’s somewhat less costly i5 or older i7 based SKUs.
AMD will price Zen more affordably, and AMD will be able to price its Zen CPUs, Radeon GPUs and motherboard chip-sets as an even better package deal. Nvidia will only be able to price its GPUs and Intel will only be able to price its CPUs/MB chip-sets, so AMD will have more latitude to price the whole package deal: CPU, GPU, and motherboard chip-sets.
AMD’s motherboard partners are also more than likely its GPU AIB partners, so AMD will be able to work with these partners on lower pricing for AMD’s Zen CPU SKUs, Radeon Polaris/Vega GPU SKUs, and motherboard chip-set SKUs to put together some great package deals that its competitors Nvidia (no x86 license, no motherboard chip-sets for x86, only GPUs) and Intel (x86 CPUs, motherboard chip-sets, but no discrete GPUs) cannot offer.
Price/performance is what AMD is really good for! AMD is good for those who want to keep more of their cash and still game well. Ditto for DP Adaptive-Sync on AMD’s GPUs and gaming monitors, as opposed to the more costly G-Sync cash grab from Nvidia for gaming monitors ($$$$)! AMD is for more money in your pocket while still being able to game without breaking the bank.
AMD is the coin miners’ choice for more SP TFLOPS per dollar. 2 RX 480s have about the same SP TFLOPS as a Titan X (Pascal). So those miners are getting 4 RX 480s at twice the SP TFLOPS of a Titan X (Pascal) and still saving 2 C-Notes! Now crunch on them numbers!
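That TFLOPS claim roughly checks out. FP32 throughput is shader count x 2 ops per clock (FMA) x clock; the shader counts, boost clocks, and prices below are ballpark launch figures I am assuming, not numbers from the comment:

```python
# FP32 throughput = shaders x 2 ops/clock (FMA) x clock in GHz.
# Shader counts, boost clocks, and prices are assumed ballpark
# launch figures, not numbers from the comment above.
def tflops(shaders, boost_ghz):
    return shaders * 2 * boost_ghz / 1000

rx480 = tflops(2304, 1.266)      # Radeon RX 480
titan_xp = tflops(3584, 1.531)   # Titan X (Pascal)

print(f"RX 480:    {rx480:.1f} TFLOPS")
print(f"2x RX 480: {2 * rx480:.1f} TFLOPS vs Titan X (Pascal) {titan_xp:.1f}")
print(f"4x RX 480 at ~$240 each vs one Titan X at ~$1200: "
      f"${1200 - 4 * 240} saved, {4 * rx480:.1f} vs {titan_xp:.1f} TFLOPS")
```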
Notebooks are the cheapest
Notebooks are the cheapest complete PCs these days. AMD is not competitive in notebooks, hence its very low presence.
It is Nvidia Maxwell’s excellent color compression that proved really innovative, not AMD’s brute-force HBM.
True, AMD is noticeably better than Nvidia in Doom 4 under Vulkan. Unfortunately, id Software with Doom 4 is the exception to the rule. Battlefield 4 under Mantle was good as well, while the new Battlefield 1 runs much better under DX11 than DX12 (both by DICE).
True, one can build a mediocre AMD gaming PC for a reasonable amount. Sony and Microsoft did that with the PS4 and Xbox One.
Now, coin mining: which problem does it solve exactly? Too much power? Too little pollution?
The games that utilize DX11
The games that utilize DX11 are already made, and no more will be invested in DX11 games! Work on DX12/Vulkan games progresses as I write this reply. The entire games/game-engine and graphics-API ecosystem is converting over to the new software/hardware technology and will not be focusing on obsolete graphics APIs and hardware. The VR gaming market is still in its infancy, and many resources are being invested in the VR hardware/software ecosystem, as in the market for 4K gaming.
The coin mining market’s choice of AMD’s RX 480s, and more than likely the RX 490s, is a direct result of AMD including more FP hardware in its very reasonably priced consumer GPU SKUs! Do the math on just how many SP FP TFLOPS a single RX 480 provides at that low, low price point and it speaks for itself. The DX12/Vulkan games that accelerate more of the non-graphics computational workloads on the GPU will also make good use of AMD’s GCN CU/ACE units and the extra SP FP TFLOPS, in addition to the GPU’s graphics processing ability.
More of the VR game makers have expressed their preference for AMD’s GPUs, and the VR gaming market is heading toward doing more of the entire game’s workload on the GPU (graphics and non-graphics compute) to keep PCIe latency issues to a minimum!
One can build a very nice mainstream gaming system that is much better than any console using one or two RX 480s, or, for even more savings, 2 RX 470s, and still have money in the bank. AMD is working on its flagship SKUs, having already finished its mainstream Polaris roll-outs, so the RX 480 is not designed to compete as a flagship! And when AMD’s flagship SKUs come to market, expect that they will not cost very much relative to what Nvidia charges. AMD makes consumer SKUs to please gamers and also the coin miners and anyone who uses a GPU for more than just gaming workloads. Nvidia will charge dearly for any extra compute, as the user will have to spring for the mad cash and get the higher-cost Nvidia Pro SKUs ($$$$$$)!
We will see in few years.
We will see in a few years. Till then, Nvidia rules in DX11 (and OpenGL).
Somehow Nvidia does better in VR for now. Please check hardocp.com.
Dual RX 470/480 bests consoles but in almost all games is substantially behind a single GTX 1070/1080.
Nvidia charges much more for
Nvidia charges much more for less compute! Some folks do more with their GPUs than game, and even games, in two years’ time, will be doing more non-graphics compute on the GPU. CPUs are not going to be all that much more used for future gaming unless they can put some CPU cores right on the GPU. That PCIe latency issue for CPU-to-discrete-GPU communication is really a problem for VR gaming.
AMD’s interposer-based APUs with Zen cores and beefy on-interposer Vega graphics will not have that PCIe CPU-to-discrete-GPU latency issue! Add HBM2 and a very wide parallel CPU-die-to-GPU-die connection fabric etched into the interposer, and there will be high-end gaming APUs with some very impressive figures. Silicon interposer technology is just beginning to be used; future interposers will switch from passive traces-only designs to active designs, with whole coherent interconnect fabrics and coherency circuitry etched into the interposer’s silicon substrate.
Wait for AMD’s Navi GPUs to see what types of modular interposer-based GPU and APU systems AMD will bring to market, with smaller GPU dies added in increasing numbers to create GPUs, or APUs, of low to high processing power just by adding more modular die components to various sizes of active interposers. AMD’s workstation/HPC/supercomputer interposer-based APUs will have consumer variants, with most of the R&D paid for by the professional market so the consumer SKUs can be more affordable.
Nvidia has been very good at marketing a power metric achieved by stripping out so much compute and other GPU hardware, but that is going to become a thing of the past as VR/4K games start to demand more FP asynchronous compute resources, fully managed in the GPU’s hardware, for new games and gaming markets that will use as much non-graphics and graphics compute as the GPU can provide.
DX11 is on its way out, DX12 and Vulkan are in the here and now, and into the future!
Sorry, that’s only a 7.5/10
Sorry, that’s only a 7.5/10 because you’re missing most of the fancy new Zen marketing gobbledygook.
Up yours, Martin (damage
Up yours, Martin (damage control) Trautvetter, folks know your MO! And Zen does not have to beat Intel on the single-core IPC metric; I’d rather Zen just get close so the pricing can remain affordable! It’s the AMD graphics that comes with any Zen/Vega APU that will be more important than any brand of crappy CPU! Just try to do any heavy graphics workload on a CPU’s paltry number of cores. The GPU is what will be important, not the CPU. You like Intel’s dog-food graphics, you eat it! I’ll take an AMD interposer-based APU with 4 Zen cores (that’s plenty of CPU) and the remainder of the interposer reserved for a much larger Vega GPU die, where the real rendering will happen, and some HBM2 so the Vega graphics will not be starved for bandwidth!
CPUs, any brand of CPU, are the mooks of the graphics processing world. GPUs are where the rendering is done. It’s funny that AMD chose Blender 3D to showcase Zen’s performance relative to Intel’s CPUs, because any GPU would have finished that simple rendering job at the click of the mouse, not the minutes it took both the 8-core Zen and the Intel CPU to finish the task. CPUs are such a joke at rendering! Any Blender 3D user is going to be using Cycles or another GPU-based renderer, not any damn dirty CPU, for rendering workloads.
Well, aren’t you a perfectly
Well, aren’t you a perfectly civil human being.
I hope you’ll get that very specific SOP you’re so obsessed with, though I don’t see how it makes sense. (A pure quad-core CPU die is much too small to be economical; it doesn’t matter which die you connect the HBM to, it’ll be worse than a single, monolithic APU die with the HBM integrated into the internal switching fabric; and while you seem very smitten with rendering, the target market for that use is minuscule and probably much better served by discrete graphics.)
Do not accuse people of being
Do not accuse people of being marketers just because they are interested in new/better technology! There is nothing stopping Intel from making interposer-based SoCs, other than the fact that their GPU IP is pure crap. You choose to forget that AMD will be making interposer-based APUs for the workstation/server/HPC market, so the engineering and R&D for any consumer APU variants will be paid for by that market, in the very same manner that Intel funds the R&D for its consumer variants. Add in some exascale computing APUs built on interposers via AMD’s exascale research grants from the US government, and that APU-on-an-interposer R&D will be amortized by both government and industry to produce lower-cost consumer-grade APU-on-an-interposer SKUs.
Did you read the articles on AMD’s recent patent filings to add some FPGA compute to the HBM die stacks themselves, for localized compute on the large in-memory data sets used in server/HPC/exascale systems? HBM2 with FPGA compute on the stacks, along with a big Vega GPU accelerator right next to 32 Zen cores, will meet exascale computing’s rigid power budget. Moving large data sets around wastes plenty of power, and higher-clocked narrow-bus memory channels waste power compared to HBM2, which provides massive raw effective bandwidth at relatively low clocks versus DIMM-based external DRAM clocked up to 7 times higher than HBM.
Also, even GPUs on large single monolithic dies are made of many independent modular blocks wired up via on-die interconnects, and with Navi, AMD will be moving those interconnects onto the interposer’s silicon substrate, along with any active coherency circuitry, producing modular GPUs of smaller, higher-yield independent dies to save on fabrication costs. This will allow AMD to create interposer-based GPUs that scale up just by adding more dies to the interposer, with the hardware abstracting away the fact that it is many dies, so they look to any software like a single monolithic GPU.
Ditto for any interposer-based APUs produced with Navi graphics and Zen CCXs (4-core complexes) for scalable hardware solutions on the interposer.
You are a very insidious and well-funded spin doctor, spinning your yarns for the hand that feeds! Trying to identify, insult, and degrade AMD and others rather than discussing the technology’s merits has your MO marked and verified.
AMD makes GPUs that are used by coin miners and graphics rendering users who need all the SP FP TFLOPS for ray-tracing acceleration or bitcoin hashing done on the GPU! And AMD does not gimp its GPUs of compute just to please some gaming GITs! AMD sells GPUs for more than just gaming usage, to both the consumer and professional markets!
“AMD’s GPUs really shine for
“AMD’s GPUs really shine for games making use of DX12/Vulkan to run more of the game’s non graphics gaming compute on the GPU, in addition to the graphics, with lower latency.”
Talk about excelling at a corner case.
“AMD will price Zen more affordable and AMD will be able to price its Zen CPUs, Radeon GPUs and motherboard chip-sets as an even better package deal. Nvidia will only be able to price its GPUs and Intel will only be able to price its CPUs/MB chip-sets, so AMD will have more latitude to price the whole package deal CPU, GPU, and motherboard chip sets.
AMD’s motherboard partners are also more than likely also its GPU AIB partners, so AMD will be able to work with these partners with lower pricing on AMD’s Zen CPU SKUs, Radeon Polaris/Vega GPU SKUs, and motherboard chip set SKUs to put together some great package deals that its competitors Nvidia(no x86 license, no motherboard chip sets for x86, only GPUs) and Intel(x86 CPUs, motherboard chip sets, BUT no discrete GPUs) can not offer.”
Who are you making this console OEM business case to? And are you really telling us that for AMD to succeed, all it needs is for its partners (up- and downstream) and itself to suffer financially?
“Ditto for DP Adaptive Sync for AMD’s GPUs and gaming monitors as opposed to the more costly G-sync cash grab from Nvidia for gaming monitors($$$$)! AMD is for more money in your pocket while still being able to game without breaking the bank.”
I think AMD screwed up Freesync, and now large swaths of customers are suffering for it. They even allowed their partners to replace good SKUs with displays with broken Freesync. (XR341 -> XR342)
Sure, it’s cheaper, but it puts the onus on the customer not to get hoodwinked by the Freesync moniker and instead choose one of the monitors that implement it properly. (No 48–75Hz junk.)
48-75Hz Freesync range is
The 48–75Hz Freesync range is quite good, except for the fact that AMD does not have any GPU capable of delivering at least 48fps in AAA titles.
I really hope we will see Freesync in TV sets next year. It will make it very hard for Nvidia not to support it if that happens.
Freesync without LFC is
Freesync without LFC is broken. Yes, higher-performance GPUs would help, but you’d still be tuning your visuals to the lowest supported refresh rate, which means you’ll always have worse image quality for potentially a few scenes that would otherwise stutter.
On the other hand, proper Freesync displays with enough range for LFC are expensive. (though not as expensive as their G-Sync cousins)
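For context, LFC (Low Framerate Compensation) needs enough variable-refresh range that a frame can be repeated and still land inside the window; the commonly cited requirement is a maximum refresh at least 2.5x the minimum. A quick check (the helper function is mine, not an AMD API):

```python
# LFC needs enough VRR range that a frame can be shown twice (or more)
# and still fall inside the refresh window. The 2.5x ratio is the
# commonly cited threshold for AMD's LFC to engage; helper name is mine.
LFC_RATIO = 2.5

def supports_lfc(min_hz, max_hz):
    return max_hz >= LFC_RATIO * min_hz

for rng in [(48, 75), (40, 144), (30, 144)]:
    ok = supports_lfc(*rng)
    print(f"{rng[0]}-{rng[1]}Hz: LFC {'possible' if ok else 'not possible'}")
```

A 48–75Hz panel fails the check, which is exactly the "junk" range being complained about above.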
One can argue that a game is
One can argue that a game is broken if it requires frame rates to drop so low.
It’s not broken for proper
It’s not broken for proper Freesync and all G-Sync displays.
Thanks, AMD.
Take that up with the display
Take that up with the display OEMs and stop your foolishness concerning AMD and DP Adaptive-Sync. Folks know your MO, Mr. Damage Control, for the hand that feeds you!
It’s AMD’s problem that they
It’s AMD’s problem that they allow broken implementations to carry and advertise their Freesync trademark.
And the solution is so simple: Let everyone go nuts with your beloved adaptive sync, but require displays to at least support LFC to be allowed into the Freesync tent.
It’s the VESA standards body
It’s the VESA standards body that is in control, and that FreeSync branding is simply marketing monkeys pandering to stupid gaming GITs! VESA should have forced AMD to drop that branding once the DisplayPort Adaptive-Sync(TM) standard was ratified! Both VESA and USB-IF are doing a piss-poor job of educating the public and enforcing the proper naming conventions. Look at the USB-IF’s USB Type-C naming conventions: no one is following the proper USB-IF naming guidance, and to this day the technology press and marketing copy in ads still do not follow the proper USB Type-C naming conventions established by the USB-IF. Take your paid spin to the VESA standards body, because FreeSync technology does not exist; it’s called VESA DisplayPort Adaptive-Sync(TM)! Also take it up with the display OEMs; they are the ones to argue with if you are unhappy with their products!
“Who are you making this
“Who are you making this console OEM business case to? And are you really telling us that for AMD to succeed, all it needs is for its partners (up- and downstream) and itself to suffer financially?”
It’s not a console market case; it’s about AMD’s pricing flexibility! It’s about pricing PC motherboards, CPUs, and GPUs for the consumer home PC gaming system builder market: home builders would be able to save more money by getting a package deal from AMD and its partners. Buy an AMD Zen CPU, Radeon GPU, and high-end AM4 motherboard as a package and save even more money, that type of deal. AMD can price the motherboard chip-sets, Zen CPUs (already installed on the motherboard), and GPUs as a home-PC-builder mad-savings package deal, and AMD, its GPU AIB partners, and its motherboard partners can get together, with AMD offering reduced CPU/GPU/motherboard chip-set pricing for any special package deals.
Nvidia cannot offer that, and Intel lacks any discrete gaming GPU to package in a deal.
No downstream suffering, because it’s not for PC OEMs; it’s for motherboard OEMs and AIB partners/OEMs, and for the consumer being offered great package-deal pricing if the home PC builder/consumer purchases the package deal from the motherboard maker and the GPU AIB maker (most likely the same company as the motherboard maker), with AMD providing the Zen CPU (at special pricing) and giving the motherboard maker/GPU AIB maker discounts on the motherboard chip-set and GPU die for a special package deal offered directly to the consumer/home PC builder.
“FreeSync” is actually the industry standard VESA Display Port Adaptive-Sync(TM) there is no “FreeSync”! VESA’s Display Port Adaptive-Sync(TM) is getting the wide adoption, while G-Sync reeks of two extra C-notes of expense on high-end consumer gaming monitors going into Nvidia’s fat coffers!
“for the consumer home PC gaming system builder market.”
You’re describing a target market of roughly 13 people, 12 of whom are not currently AMD customers. You seem not to understand that motherboard OEMs and GPU AIBs are downstream partners of AMD (which no longer builds graphics cards or motherboards), and your clever idea to make people buy AMD products is to give them away.
This is not a sound business strategy.
“”FreeSync” is actually the industry standard VESA Display Port Adaptive-Sync(TM) there is no “FreeSync”!”
You should tell that to AMD and their “AMD FreeSync™ Technology.”
You are really grasping at straws. And AMD’s partners work closely with AMD on many deals, and AMD just so happens to control its CPU die, GPU die, and motherboard chip-set pricing, so it is able to offer the consumer, via its downstream partners, some very nice complete gaming PC package deals. AMD has that latitude, and its downstream partners will jump to be part of those package-deal offerings and have motherboard, CPU, and GPU package deals for even more sales. Intel’s got no discrete GPUs, and Nvidia has no x86 or chip-set products. Zen is coming, and so is Vega, as well as AM4, and AMD has all that to work with for the complete gaming PC package, so let the package deals begin!
VESA Display Port Adaptive-Sync(TM) is the standard, so your argument is with AMD’s marketing monkeys!
Why do you insist on AMD giving away their dies and demanding sacrifices from their partners to move product into a niche market?
Are they all going to “make it up on volume?”
“VESA Display Port Adaptive-Sync(TM) is the standard, so your argument is with AMD’s marketing monkeys!”
Wouldn’t that be _your_ argument? 😀
Up yours, Martin (damage control) Trautvetter, I’m not marketing, I’m talking technology! HBM2 technology and interposer technology for much better graphics on any interposer-based APU/SOC! So give me more powerful laptop APU SKUs with at least 4 Zen cores (mostly to take on the OS bloat) and a 16+ CU integrated Vega GPU with plenty of shader cores, so it does not choke on my high-polygon-count mesh models like Intel’s dog-food graphics does! I can’t use Blender 3D’s edit mode and do any high-polygon mesh modeling on Intel’s shader-core-starved dog-food graphics. Only AMD’s or Nvidia’s graphics have the shader core numbers to handle high-polygon mesh models in Blender 3D’s editor. Intel’s graphics is the dog food you have to eat to get Intel’s CPUs and nothing more! Intel’s graphics is no good for any productive graphics workloads!
There are plenty of mobile computers (if that’s somehow a necessity) that fit (and exceed) your bill, all the way up to the GTX 1080.
Why aren’t you just using one of those right now instead of “talking technology” that is, at best, several years out?
As long as it’s available before 2020 and runs on a full Linux OS based laptop, I’m happy. Blender 3D runs under Linux too! I want my next laptop to be monopoly free, so no M$, Intel, or Nvidia with their CUDA lock-in and high prices!
AMD does not have to give away their cores (CPU, GPU) or MB chip-sets, but AMD is right where a parts supplier should be in a proper market: down competing and innovating to stay in business! And with some proper and fair market competition, Intel and Nvidia need to be taken down to that same level and forced to lower pricing to get the sales, while saving the consumer loads of dosh!
Let AMD, Intel, and Nvidia make most of their revenues from the server/HPC/workstation markets, where businesses can write off the costs on their taxes. But to the consumers, AMD, Intel, and Nvidia are just lowly parts suppliers who have no business having any sort of excessive influence on the OEM markets other than the struggle to make sales at the lowest cost to the consumer (package pricing deals included, to make for even better consumer price savings).
Just look at what Intel’s excessive ultrabook market influence has done to the workhorse/regular-form-factor laptop market. Hell I can not even go to Micro Center and find any quad core i7 based HP ProBook laptops for sale that come with AMD discrete GPUs! No, it’s all crappy dual-core i7 and i5 U-series Ultra-Crap-Book-influenced CPU SKUs in crappy thin-and-light form factor abominations.
The Intel/Nvidia/AMD parts suppliers of the world need to be in their proper place, struggling to stay alive and giving the consumer great deals and good year-on-year innovation! IBM knew how to treat a parts supplier (Intel) at the very start of the PC era, forcing Intel to cross-license the x86 16-bit ISA to AMD and others, keeping CPU part prices competitively low and Intel in its place, without letting Intel get IBM between a rock and a hard place over a part as essential as the CPU is for any PC/laptop OEM!
“Hell I can not even go to Micro Center and find any quad core i7 based HP ProBook laptops for sale that come with AMD discrete GPUs!”
But you can choose the color 😉
A red Ultra-Crap-Book would suck just as much as a blue one, with Intel’s U-series crap inside! Give me a laptop APU with at least 4 Zen cores and at least 16 Vega CUs and I’m happy for my Blender 3D mesh modeling and light rendering workloads!
Intel still milking 4/8