The difference in binning:
– Intel's monolithic CPUs (the old generation you mentioned) come out good and "better" at random. They fall into a bin, are tested, ranked by the performance of their worst core, and separated into groups. Each performance group then becomes a different SKU with a different maximum frequency, so the whole chip is pegged to its worst core. The BIOS/OS treats all cores as equal.
Although binning is part of it, this is not entirely true. Intel does already partly bin on the worst core(s), and minimum properties like base clock and TDP are set accordingly. However, Intel also bins on the best core, and Intel likewise has a mechanism that lets the best cores clock higher: https://www.intel.com/con…boost-max-technology.html and https://www.intel.com/con…000021587/processors.html
From the FAQ linked above:
Intel® Turbo Boost Max 3.0 Technology Overview
Product information and documentation
Intel® Turbo Boost Max Technology 3.0 is a combination of software and hardware, together with information stored in the processor. It identifies workloads and directs them to the fastest core on the die first.
Windows* (OS) includes native support for Intel® Turbo Boost Max Technology 3.0. Make sure your Windows operating system is updated to the latest version.
Q: Do I need to install any software (or driver) to get Intel® Turbo Boost Max 3.0 technology to run in my system?
No. No need to install any driver. Ensure that your Windows operating system is upgraded to the latest version, and that your processor supports Intel® Turbo Boost Max Technology 3.0.
Q: Is Intel® Turbo Boost Max 3.0 technology automatically enabled or do I need to install it?
Windows has native support for Intel® Turbo Boost Max Technology 3.0 and the feature is enabled automatically. The processor contains the Intel® Turbo Boost Max 3.0 hardware; the operating system recognizes this processor feature and loads its native support. There is no need to enable anything in the BIOS or OS. A tool such as the Intel® Extreme Tuning Utility (Intel® XTU) can be used to monitor the Intel® Turbo Boost Max 3.0 frequency of the processor.
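On Linux this favored-core information is also visible from user space: cores eligible for the higher Turbo Boost Max 3.0 bin typically advertise a higher maximum frequency than their siblings. A minimal sketch, assuming a Linux system with the standard cpufreq sysfs interface (whether the paths exist depends on kernel and driver):

```python
# Sketch: spot "favored" cores by their advertised max frequency.
# Assumption: Linux cpufreq sysfs is present; on Windows, use a tool
# like Intel XTU as the FAQ suggests.
from pathlib import Path

def max_freq_per_core():
    """Map core id -> max frequency in kHz, read from cpufreq sysfs."""
    freqs = {}
    base = Path("/sys/devices/system/cpu/cpufreq")
    if base.is_dir():
        for policy in base.glob("policy*"):
            core = int(policy.name[len("policy"):])
            freqs[core] = int((policy / "cpuinfo_max_freq").read_text())
    return freqs

def favored_cores(freqs):
    """Core ids whose max frequency equals the highest seen."""
    top = max(freqs.values())
    return sorted(c for c, f in freqs.items() if f == top)

if __name__ == "__main__":
    print("favored cores:", favored_cores(max_freq_per_core()))
```

On a chip without favored cores, every core reports the same maximum and the helper simply returns all of them.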
Here is a list of all SKUs supported in the consumer segment: https://ark.intel.com/con…00&1_Family=122139
They've had this technology since Broadwell, which has been on the consumer market since 2016, so even before AMD had it. Note that not all Intel SKUs use it; it's generally reserved for HEDT, i7, and i9 parts, while AMD also uses it for R5 and R3 parts.
– TSMC makes good and "better" cores for AMD, at random. These are _not_ tested and binned down to the die, but are sawn apart and packaged together at random. Possibly there is still some correlation at the chiplet level; otherwise you may be out of luck as a consumer. They are then tested, after which the CPU "knows" which core came out of the baking process best. Then a bit of UEFI middleware (well, I was going to say BIOS) ensures that the heaviest task can land on the best random core: CPPC2.
This is also not entirely true: TSMC does not make good and "better" cores. TSMC makes chips (and on those chips, just as with Intel or earlier AMD processors, not all cores are "equal"; there will always be better and worse cores on a chiplet). Which chiplets are deployed where, and how they are placed, is not random; that is determined before production. After all, a chiplet is just a basic building block that goes onto the package in a predetermined way, where you combine one or more CPU chiplets with the I/O die, the other building block. What does happen is that chiplets are binned. During that binning, chiplets are sorted: those with the best server-CPU characteristics go to Epyc, some go to Threadripper, and some go to the consumer chips used in Ryzen.
It seems that AMD in particular goes further than Intel here, testing "to the end" and certainly not assigning SKUs at random. For example, you see that the 5900X or 5950X almost always has one chiplet with one or two well-performing cores, which are then used as the "preferred cores", while the second chiplet is often of "lower" quality.
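On Linux, the per-core CPPC ranking behind this "preferred core" mechanism is exposed via sysfs when the kernel and BIOS support it. A small sketch; the path used is the standard ACPI CPPC location, but whether it exists depends on your platform:

```python
# Sketch: read the CPPC "highest_perf" ranking per core.
# Assumption: Linux with ACPI CPPC exposed in sysfs (kernel/BIOS
# dependent); a CPPC-aware scheduler prefers higher-ranked cores.
from pathlib import Path

def cppc_ranking():
    """Map core id -> CPPC highest_perf value, read from sysfs."""
    perf = {}
    for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        node = cpu / "acpi_cppc" / "highest_perf"
        if node.exists():
            perf[int(cpu.name[3:])] = int(node.read_text())
    return perf

def preferred_order(perf):
    """Core ids sorted best-first, the order a CPPC-aware OS favors."""
    return sorted(perf, key=perf.get, reverse=True)
```

On a 5900X-style part you would expect a couple of cores on one chiplet to sit at the top of this ranking, matching the "preferred core" markers tools like Ryzen Master show.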
What you also see is that, on average, these "lower quality" bins improve the longer the product is on the market.
Older Intels don't have this whole architecture where the chip knows which core is best; the entire chip is tuned to run at the frequency of its "worst" core. So the OS won't steer threads there via scheduling (sorry for the anglicism) to empirically assign the apparently heaviest task, based on real-time stats, to the stronger core.
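The scheduling idea described above can be sketched as a toy: given per-core quality scores and per-thread load estimates, put the heaviest work on the best core. This is only an illustration of the concept, not the actual Windows scheduler; the scores and thread ids are made up, and `os.sched_setaffinity` is a real but Linux-only call.

```python
# Toy sketch of preferred-core scheduling (illustration, not the real
# OS scheduler). Heaviest thread -> best core, second-heaviest -> next
# best, wrapping around when threads outnumber cores.
import os

def assign_heaviest_to_best(core_scores, thread_loads):
    """Return {tid: core id}, pairing heavy threads with strong cores."""
    cores = sorted(core_scores, key=core_scores.get, reverse=True)
    tids = sorted(thread_loads, key=thread_loads.get, reverse=True)
    return {tid: cores[i % len(cores)] for i, tid in enumerate(tids)}

def apply_mapping(mapping):
    """Pin each thread to its assigned core (Linux-only syscall)."""
    for tid, core in mapping.items():
        os.sched_setaffinity(tid, {core})
```

On a chip without the mechanism, all cores would carry the same score, and the assignment degenerates into an arbitrary spread, which is exactly why the OS has nothing useful to do there.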
Depending on what you call older: Haswell and earlier really don't have this, so that means roughly CPUs from 2014 and earlier.
Since older Intel CPUs don't have this mechanism and its accompanying software, MSFT can't inadvertently break them either.
[Comment edited by Dennism on 7 October 2021 14:03]