How AMD's 7nm Ryzen processors are supposed to behave at boost frequencies, compared with how they actually behave, has been one of the questions reviewers have fielded most often in recent weeks. Today, AMD promised to inform the user community about the expected behavior of its processors and the changes that will be incorporated into future UEFI revisions.
To summarize briefly: Reports published at the end of July showed that some AMD processors only hit their top boost clock on a single core. Last week, the overclocker der8auer released the results of a user survey showing that only a fraction of AMD 7nm Ryzen processors reach their maximum boost clock (the exact percentage varies by model). Late last week, Paul Alcorn of Tom's Hardware published a thorough test of how different AMD AGESA versions and motherboard UEFI versions affect boost behavior. AGESA is AMD's Generic Encapsulated Software Architecture, the library of procedures used to initialize the CPU and various components. Motherboard vendors use AGESA as a template for building their UEFI releases.
What THG found is that different UEFI and AGESA versions produce slightly different boost clock results. The latest versions boosted slightly lower than the earlier versions used for launch reviews. At the same time, however, these newer versions often held their boost clocks longer before throttling the processor back down.
Boost temperature thresholds have also been subtly adjusted, from 80°C down to 75°C and then back up to 77°C. These changes would not necessarily affect performance, since the processor boosts slightly lower but for longer, though that was not certain, and balancing boost behavior this way appears to be exactly what AMD was trying to accomplish. During its IFA presentation last week, Intel argued that these subtle variations were proof that AMD was trying to deal with a potentially significant reliability issue in its processors. THG was not willing to subscribe to that explanation without additional information.
While all this was happening, AMD informed us that it would make an announcement on September 10 regarding a new AGESA update.
The following text comes directly from AMD and concerns the improvements that will arrive in updated UEFIs from the various motherboard manufacturers. I would not normally quote a blog post this extensively, but I think it is important to present exactly what AMD is saying.
The analysis indicates that the processor boost algorithm was affected by an issue that could cause target frequencies to be lower than expected. This has been resolved. We have also been exploring other opportunities to optimize performance, which can further enhance frequency. These changes are now being implemented in flashable BIOSes from our motherboard partners. Our internal testing shows these changes can add approximately 25-50MHz to current boost frequencies in various workloads.
AMD Reference Motherboard (AGESA 1003ABBA beta BIOS)
AMD Wraith Prism and Noctua NH-D15S coolers
Windows 10 May 2019 Update
22°C ambient lab temperature
Streacom BC1 open benchtable
AMD Chipset Driver 1.8.19.xxx
AMD Ryzen Balanced power plan
Default BIOS settings (except memory OC)
These improvements will be available in flashable BIOSes in about two to three weeks, depending on your motherboard manufacturer's testing and implementation schedule.
Going forward, it is important to understand how our boost technology operates. Our processors perform intelligent real-time analysis of CPU temperature, motherboard voltage regulator current (amps), socket power (watts), loaded cores, and workload intensity to maximize performance from millisecond to millisecond. Ensuring your system has adequate thermal paste; reliable system cooling; the latest motherboard BIOS; reliable BIOS settings/configuration; the latest AMD chipset driver; and the latest operating system can enhance your experience.
Following installation of the latest BIOS update, a consumer running a single-threaded application on a PC with the latest software updates and adequate voltage and thermal headroom should see the maximum boost frequency of their processor. PCMark 10 is a good proxy for a user wanting to test the maximum boost frequency of the processor in their system. If users run a workload such as Cinebench, which runs for an extended period of time, operating frequencies may be lower than the maximum throughout the run.
In addition, we want to address recent questions about reliability. We perform extensive engineering analysis to develop reliability models and model the lifetime of our processors before entering mass production. Although AGESA 1003AB contained changes aimed at improving system stability and performance for users, no changes were made for reasons of product longevity. We do not expect the boost frequency enhancements in AGESA 1003ABBA to impact the lifespan of your Ryzen processor. (Emphasis added.)
In addition, AMD provided information on firmware changes implemented in AGESA 1003ABBA that are intended to reduce the processor's operating voltage by filtering out voltage/frequency boost requests from lightweight applications. AGESA 1003ABBA now contains an activity filter designed to ignore "intermittent OS and application background noise." This should keep processor voltage nearer 1.2V in such cases, rather than the higher spikes that have been reported.
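AMD has not published how this activity filter works. As a loose illustration of the general idea only (all names, thresholds, and sample counts below are hypothetical), filtering short bursts can be sketched as a simple debounce over sampled load:

```python
def debounce_boost(load_samples, threshold=0.5, min_sustained=3):
    """Return per-sample boost decisions: request a voltage/frequency boost
    only when load has stayed above `threshold` for `min_sustained`
    consecutive samples, so brief background-noise spikes never trigger one."""
    decisions = []
    run = 0  # consecutive samples above threshold so far
    for load in load_samples:
        run = run + 1 if load > threshold else 0
        decisions.append(run >= min_sustained)
    return decisions

# A short spike (interrupted by idle) is ignored; sustained load still boosts.
print(debounce_boost([0.9, 0.9, 0.1, 0.9, 0.9, 0.9, 0.9]))
# [False, False, False, False, False, True, True]
```

The real filter presumably operates on firmware telemetry at millisecond granularity; the point is only that ignoring transients lets average voltage sit lower without costing sustained performance.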
Finally, AMD will release a new monitoring SDK that will allow anyone to build a monitoring tool to measure various facets of Ryzen processor performance. More than 30 API calls will be exposed in the new SDK, including:
Current Operating Temperature: Reports the average CPU core temperature over a short sample period. By design, this metric filters out transient spikes that can skew temperature reporting.
Peak Core(s) Voltage (PCV): Reports the voltage identification (VID) requested by the CPU package from the motherboard's voltage regulators. This voltage is set to serve the needs of the cores under active load, but does not necessarily correspond to the final voltage experienced by every core in the CPU.
Average Core Voltage (ACV): Reports the average voltage experienced across all processor cores over a short sample period, taking into account active power management, sleep states, VDROOP, and idle time.
EDC (A), TDC (A), PPT (W): The current and power limits for your motherboard's VRMs and CPU socket.
Peak Speed: The highest frequency of the fastest core during the sample period.
Effective Frequency: The frequency of the processor cores after accounting for time spent in sleep states (for example, core sleep cc6 or package sleep pc6). Example: a core runs at 4GHz while awake but sits in cc6 sleep for 50% of the sample period. That core's effective frequency would be 2GHz. This value can give you an idea of how often the cores are using aggressive power management capabilities that are not immediately obvious (e.g., clock or voltage changes).
Various voltages and clocks, including: SoC voltage, DRAM voltage, fabric clock, memory clock, etc.
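The effective-frequency definition above is easy to reproduce. Here is a minimal sketch (a hypothetical helper for illustration, not the actual SDK API):

```python
def effective_frequency(awake_mhz: float, sleep_fraction: float) -> float:
    """Frequency averaged over a sample period, counting time spent in sleep
    states (e.g. cc6/pc6) as 0 MHz. `sleep_fraction` is the share of the
    sample period the core spent asleep, from 0.0 to 1.0."""
    if not 0.0 <= sleep_fraction <= 1.0:
        raise ValueError("sleep_fraction must be between 0 and 1")
    return awake_mhz * (1.0 - sleep_fraction)

# AMD's example: a core awake at 4GHz but in cc6 sleep 50% of the time.
print(effective_frequency(4000, 0.5))  # 2000.0 MHz, i.e. 2GHz effective
```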
Ryzen Master has already been updated to report average core voltage values. AMD expects motherboard manufacturers to release new UEFIs with AGESA 1003ABBA baked in within the next two weeks. As we wrote last week, and despite rumors started by Shamino, an Asus employee, AMD in no way describes these boost behavior adjustments as reliability-related.
As for AMD's claims of improved clocks, I want to see how these changes affect the behavior of our own test processors before drawing any conclusions. I will say that I do not expect much change in overall performance: 25-50MHz is only a roughly 0.6 to 1.2 percent improvement on a 4.2GHz processor, and we may not even be able to detect a performance change in a standard benchmark from a clock shift that small. But we can monitor clock speeds directly, and we will report on the impact of these changes.
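A quick sanity check on that arithmetic, expressing AMD's claimed gain as a share of a 4.2GHz clock:

```python
# AMD's claimed boost improvement range, as a fraction of a 4.2GHz clock.
base_mhz = 4200
for gain_mhz in (25, 50):
    print(f"+{gain_mhz} MHz on {base_mhz} MHz = {gain_mhz / base_mhz:.1%}")
# +25 MHz on 4200 MHz = 0.6%
# +50 MHz on 4200 MHz = 1.2%
```

A shift that small sits well inside typical run-to-run benchmark variance, which is why direct clock monitoring is the more reliable way to verify it.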
When Intel lifted the lid on Ice Lake, we found that the CPU's performance story was complicated. On the GPU side, Ice Lake is a huge step forward, with far better performance than anything we have previously seen from Intel integrated graphics. The CPU, however, was a mixed bag. Limited to a 15W TDP, Ice Lake processors were not necessarily faster than the Coffee Lake chips they are intended to replace, and were often a bit slower. Give the processor extra headroom and that problem resolves, but of course, giving the chip more power to play with has a negative impact on heat and battery life.
When Intel invited reviewers to test Ice Lake, the provided test systems included a toggle switch for flipping the power envelope between 15W and 25W. This is how PCMag and other publications were able to test the laptop in both modes, as shown below:
Users do not typically have this kind of option. TDP ranges are usually predefined by the OEM and cannot be changed by the end user, for obvious reasons: cranking up the TDP on a laptop is a good way to overheat the system if you do not know what you are doing, and if the laptop was not specifically designed to run at a higher power level. To the best of our knowledge (as of today), no consumer laptop has been able to change its TDP values on the fly. At the Ice Lake testing session, Intel told reviewers that retail Ice Lake laptops would not have this option either.
However, there appears to be at least one exception to this rule. The Razer Blade 13 will have an adjustable TDP that can be configured via the Razer Synapse software. Supposedly this capability has always existed, dating back to the original Razer Blade. If true, the company does not seem to have promoted it before; Google turns up no results referencing an adjustable TDP on previous versions of the Razer Blade, unless you count the fact that the laptop would throttle under load in certain circumstances. To be clear, the ability to drop the CPU into an energy-efficient envelope under load is not the same thing as being able to deliberately put it into a higher TDP mode and run it with additional power headroom.
Given that Intel had already told reviewers not to expect adjustable TDP ranges to be a major laptop feature, this raises the question: is this specific to Razer, or will we see more laptop manufacturers take advantage of these capabilities? Will Intel make adjustable TDPs a feature that high-end customers can buy into if they wish?
Razer's product page for the new Blade states that the system will use a 25W Ice Lake CPU, but says nothing about whether the system can be stepped down into a 15W power envelope instead of 25W.
Since at least Computex, Intel has been talking to reviewers about the types of tests we run, the applications users actually rely on, and whether our tests reflect "real-world" performance. Specifically, Intel believes tests such as Cinebench carry far too much weight, while the applications people actually use go virtually untested.
Let's get a few things out of the way up front.
Every company has benchmarks it favors and benchmarks it dislikes. The fact that some tests run better on AMD than Intel, or on Nvidia versus AMD, is not, in itself, evidence that the benchmark was deliberately designed to favor one company or another. Companies tend to raise concerns about the benchmarks reviewers use when they face increased competitive pressure in the market. Those of you who think Intel is raising questions about the tests we collectively use partly because it now loses a lot of those tests are not wrong. But the fact that a company has self-interested reasons for asking questions does not automatically mean it is wrong to ask them. And since I do not spend dozens of hours (and the occasional all-nighter) testing hardware in order to give people a mistaken idea of how it performs, I am always willing to revisit my own conclusions.
What follows are my own thoughts on this situation. I do not claim to speak for any reviewer but myself.
Favoring real-world hardware performance testing is one of the least controversial opinions one can hold in computing. I have met people who did not particularly care about the difference between synthetic and real-world tests, but I cannot recall ever meeting anyone who thought real-world tests were irrelevant. The fact that nearly everyone agrees on this point does not mean everyone agrees on where the boundary between a real-world benchmark and a synthetic one lies. Consider the following scenarios:
You will have your own opinions about which of these scenarios (if any) constitute real-world benchmarks and which do not. Let me ask a different question, one I think matters more than whether a test is "real-world" or not: which of these hypothetical benchmarks tells you something useful about the performance of the product being tested?
The answer is: potentially, all of them. Which benchmark I choose is a function of the question I am asking. A synthetic or standalone test that serves as a good model for a different application still accurately models performance in that application. It may be a much better model of real-world performance than testing done in an application that is highly optimized for one specific architecture. Even if all the tests of that optimized application are "real-world," in that they reflect actual workloads and tasks, the application itself may be an unrepresentative outlier.
Any of the scenarios I described above can be a good benchmark, depending on how well its results generalize to other applications. Generalizability is what matters in a review. In my experience, reviewers generally try to balance applications known to favor one company with applications that run well on everyone's hardware. Often, if a vendor-specific feature is enabled in one data set, reviews will include a second data set with the same feature disabled, to provide a more neutral comparison. Relying on vendor-specific benchmarks can limit a test's ability to speak to a wider audience.
So far, we have discussed whether a test is "real-world" strictly in terms of whether its results generalize to other applications. But there is another way to frame the topic. Intel surveyed users to find out which applications they actually use, then presented us with that data. It looks like this:
The implication is that by testing the applications most commonly installed on users' hardware, we can arrive at a more representative picture of real-world use. This feels intuitively true, but the reality is more complicated.
The fact that an application is frequently used does not make it an objective benchmark. Some applications are not particularly demanding. While there are absolutely scenarios in which measuring Chrome performance matters, as with low-end laptops, good reviews of those products already include such tests. In a high-end enthusiast context, Chrome is unlikely to be a demanding application. Are there test cases that can make it taxing? Yes. But those scenarios do not reflect how the application is most commonly used.
The practical experience of using Chrome on a Ryzen 7 3800X is identical to using it on a Core i9-9900K. Even if that were not true, Google makes it difficult to keep an older version of Chrome around for continuous A/B testing. Many people run extensions and ad blockers, which have performance impacts of their own. Does this mean reviewers should never test Chrome? Of course not. It is why many laptop reviews do test Chrome, particularly for browser-based battery life, where Chrome, Firefox, and Edge are known to produce different results. Match the benchmark to the situation.
There was a time when I spent far more time testing many of the applications on this list than I do today. When I started my career, most benchmark suites focused on desktop applications and basic 2D graphics tests. I remember when swapping someone's graphics card could dramatically improve 2D image quality and the responsiveness of the Windows UI, even without upgrading their monitor. When I wrote for Ars Technica, I ran CPU utilization comparisons for HD video decoding, because at the time there were meaningful differences to find. If you think back to the launch of Atom netbooks, many reviews dwelled on topics like UI responsiveness with an Nvidia Ion GPU solution versus Intel's integrated graphics. Why? Because Ion had a noticeable influence on overall UI performance. Reviewers are not oblivious to these issues. Publications tend to return to them whenever there is meaningful differentiation.
I do not choose benchmarks simply because an application is popular, although popularity can factor into the final decision. The goal, in a general review, is to choose tests that generalize to other applications. The fact that someone has Steam or Battle.net installed tells me nothing. Does that person play Overwatch or WoW Classic? Minecraft or No Man's Sky? Do they favor MMORPGs or FPS titles, or are they just stuck on Goat Simulator 2017? Do they play games at all? I cannot know without more data.
Applications on this list that show significant performance differences in common tasks are typically tested already. Publications such as Puget Systems publish regular performance comparisons across the Adobe suite. In some cases, the reason applications are not tested more often is long-standing concern about the reliability and accuracy of the benchmark suite that most commonly includes them.
I am always interested in better ways to measure computer performance. Intel absolutely has a role to play in that process; the company has been helpful many times in finding ways to highlight new features or troubleshoot problems. But the only way to find meaningful differences in hardware is to find meaningful differences in tests. Again, broadly speaking, you will see reviewers check laptops for shortcomings in battery life and power consumption as well as performance. In GPUs, we look for differences in frame times and frame rates. Because none of us can run every workload, we look for applications with generalizable results. At ET, I run several rendering applications specifically to make sure we do not favor any single vendor or solution. That is why I have tested Cinebench, Blender, Maxwell Render, and Corona Render. Handbrake is everyone's go-to for media encoding, but we test both H.264 and H.265 to make sure we capture multiple test cases. When tests prove inaccurate or insufficient to capture the data I need, I use different tests.
The much-argued distinction between "synthetic" and "real-world" benchmarks is a poor framing of the question. Ultimately, what matters is whether the benchmark data a reviewer presents collectively offers an accurate view of the device's expected performance. As Rob Williams details at Techgage, Intel was only too happy to use Maxon's Cinebench as a benchmark back when its own CPU cores dominated the results. In a recent Medium post, Intel's Ryan Shrout wrote:
Today at IFA, we hosted an event for journalists and analysts on a topic near and dear to our hearts: real-world performance. We have been holding these events for a few months now, starting at Computex, then E3, and we have learned a lot along the way. The process has reinforced our view of synthetic benchmarks: they are useful if you want a quick, narrow snapshot of performance. We still use them internally, and we know many of you do too, but the reality is that they are less and less accurate in assessing the performance users actually experience, regardless of the product segment in question.
That sounds damning. The post continues with this slide:
To demonstrate the alleged inferiority of synthetic tests, Intel presents 14 distinct results, 10 of which come from 3DMark and PCMark. Both applications are generally considered synthetic. When the company reports its performance against ARM, it repeats the same problem:
Why does Intel lean on synthetic applications in the same blog post in which it specifically calls them a poor alternative to supposedly "real-world" tests? Perhaps because Intel makes its benchmark choices much as reviewers do: with an eye toward representative and repeatable results, using affordable tests with good features that do not crash or fail for unknown reasons after installation. Perhaps Intel also struggles to keep up with the flood of regularly updated software and picks tests it can rely on to represent its products. Perhaps it wants to continue developing its own synthetic performance tests, such as WebXPRT, without throwing all that effort under a bus, even as it simultaneously tries to suggest that the benchmarks AMD leans on are inaccurate.
And perhaps it is because the entire synthetic/real-world framing is flawed to begin with.
Updated (5/9/2019): One thing I failed to note is that Intel's most-used-application data set is based entirely on laptops and 2-in-1 devices, as described in the slide above. We would not expect content creators working in 3D applications such as Blender, Cinebench, or similar workstation workloads to be doing that work on 2-in-1 hardware. The fact that the hardware configurations Intel measured are not representative of the systems on which we would expect these applications to run suggests the data should not be used to argue that these applications matter less because of a smaller install base.
We've heard for a while that Intel might respond to AMD's 7nm attack with higher core counts on its desktop processors. A new leak suggests that is exactly what the company will do, with a new platform supporting CPUs of up to 10 cores built on its mature 14nm process. This will supposedly require a new CPU socket, as Intel increases the power delivery capability of its desktop motherboards to offset the higher power requirements of a 10-core chip.
The new socket is supposedly LGA 1200, and the high-end chips will offer 10C/20T configurations, if the rumors are to be believed. TDP is finally rising as well, up to 125W. That last point is interesting. Intel CPU power consumption currently bears little relationship to TDP if you let the CPU boost: TDP is measured at the base clock, not at boost clocks. Intel does not strictly need to raise TDP to add more CPU cores; in the past, it has kept processors in the same TDP bracket by cutting the base clock.
Our hypothesis is that Intel is raising TDP because it does not want to cut base clocks any further. It could probably lower the base clock enough to keep 10 cores inside the old 95W TDP bracket instead of eight, but doing so risks inviting negative comparisons with previous-generation parts or AMD hardware. Intel already trimmed base clocks moving from the Core i7-8700K to the Core i9-9900K: the 9900K has a 3.6GHz base clock, while the 8700K sits at 3.7GHz. The older 7700K had a 4.2GHz base clock, even though its overall performance is significantly lower.
The comparatively low base clock may not have been a concern when AMD's Ryzen 7 base clocks were also in the 3.6-3.7GHz range, but AMD nudged its own clock ranges upward at 7nm. The 3700X has a 3.7GHz base clock, while the Ryzen 3800X has a 3.9GHz base and the 3900X is a 3.8GHz chip. Intel may want to bring its clocks up slightly to make sure it compares well on base clock, and the only way to do that is to push TDP higher.
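The trade-off between core count, base clock, and TDP can be sketched with a deliberately simple power model. Sustained power scales roughly with active cores times clock (ignoring voltage scaling, which makes the real penalty worse); the per-core coefficient below is an assumption, calibrated only so that an 8-core, 95W part lands at a 9900K-like 3.6GHz base:

```python
# Toy model: sustained_power ≈ cores * base_clock_GHz * K.
# Calibrate K so 8 cores at 3.6GHz fill the old 95W TDP bracket.
K = 95 / (8 * 3.6)  # ≈ 3.3W per core per GHz (illustrative, not measured)

def base_clock_ghz(tdp_w: float, cores: int) -> float:
    """Base clock sustainable for `cores` active cores within `tdp_w` watts."""
    return tdp_w / (cores * K)

print(round(base_clock_ghz(95, 10), 2))   # ten cores squeezed into 95W
print(round(base_clock_ghz(125, 10), 2))  # the same ten cores at a 125W TDP
```

Under this toy model, holding the 95W line with 10 cores would drag the base clock down near 2.9GHz, well below both the 9900K and AMD's 3.7-3.9GHz 7nm bases, while a 125W TDP restores a clock in the high 3GHz range. That is the shape of the argument above, not a prediction of actual clocks.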
Supposedly, the new 400-series platform adds 49 extra pins with LGA1200, with the additional pins used for power delivery. There would be some new features as well, such as integrated 802.11ax support and probably an easier path to integrating Thunderbolt 3, similar to what we have seen in mobile. 65W and 35W processors would still be supported (and released) during this latest 14nm refresh; it is only the enthusiast TDP bracket that could stretch up to 125W. Intel will probably try to keep boost clocks as high as possible, but I do not want to speculate on where they will land.
Anyone who has been paying attention to the relative standings between AMD and Intel will already realize that a 10-core Comet Lake will not catch AMD in most performance categories. The 16-core Ryzen 9 3950X is on its way, and we have already seen what happens when a 10-core Intel HEDT CPU takes on a 16-core AMD Threadripper: the 10-core part loses, and it loses badly.
But as strange as it may sound, beating AMD in absolute multi-core performance probably is not the goal. The two companies are playing to their respective strengths: for AMD, that means emphasizing multi-core while working to improve single-core, where Intel still holds a limited advantage in some games at 1080p. For Intel, it means trying to improve single-core while competing more effectively in multi-core. Stepping up to 10 cores and raising base clocks via a TDP increase likely helps the company do exactly that. It will take more than two extra cores to put Intel seriously back in the multi-threading game, and the company knows it.
The rumors of a 10-core Comet Lake have been persistent and consistent enough, for long enough, that I think they are probably accurate. This generation is also rumored to see the return of Hyper-Threading at lower price points to bolster Intel's competitiveness against AMD. Without pricing information, we obviously cannot judge how the two companies will stack up, but Intel has historically introduced better price/performance ratios around major launches. That suggests we will see the company adjust its core-count-per-dollar strategy at the next major launch.
Intel has announced a new set of 10th-generation mobile chips, this time based on 14nm. This is the company's third recent 10th-generation announcement, and the first to show us how 10nm and 14nm products will live side by side within the same product families. The headline news is that Intel has raised its maximum mobile core count to 6C/12T inside a 15W power envelope, up from 4C/8T. In the 10th generation, the 14nm Comet Lake parts are paired with Ice Lake to fill out the lineup.
On paper, this change should be a clear win for Intel. When the company launched its 8th-generation chips, performance improved significantly. Our initial concern that high-clocked dual-cores might be better options than lower-clocked quads proved unfounded; the lower clocks of 8th-generation mobile parts did not stop them from posting excellent comparative results.
There is good reason to think that may no longer hold. Here is one of Intel's official slides projecting the performance improvements customers who buy a new 10th-generation CPU, such as the Core i7-10710U (the six-core variant), can expect:
Those are substantial gains for a single product generation. Overall performance up to 16 percent higher than Coffee Lake, 41 percent better Office 365 productivity, and the same battery life? Not bad. But check the fine print.
This is from Intel's official disclaimer page. Each numbered entry (1, 2, 3) corresponds to one of the claims we just showed you. I have highlighted the TDP listed for each CPU in each entry. Note that #1 and #2, both performance claims, involve two very different system configurations. In both cases, the six-core Core i7-10710U was configured to run at a 25W TDP, while the Core i7-8565U was held to a 15W TDP.
The third data point, however, does not use that configuration. There, both chips operate in a 15W envelope. The problem is that users generally have no OEM- or Intel-provided method of switching between these operating modes. That decision belongs to the laptop manufacturer. You can sometimes use third-party utilities or Intel's Extreme Tuning Utility to tweak CPU configurations, but you cannot simply flip between 15W and 25W modes. Whatever configuration your laptop manufacturer chose, you are stuck with it, and that information usually is not published.
We looked back at the 8th-generation launch in 2017 to see how Intel handled its messaging in a similar situation. The 8th-generation family got a comparable slide comparing it against the prior 7th-generation family.
We see a similar (though much larger) improvement and a similar footnote. Where does that leave us?
Nowhere good. In 2017, when Intel compared performance between the Core i7-8550U and the Core i7-7500U, it did not need to use mismatched TDP values to make its case. The comparison was run with 15W allocated to both processors.
There is one reason Intel can pull this off: power consumption. Nominal TDP values are not equivalent to total CPU power draw and should not be read that way, but giving a processor more TDP headroom allows it to consume more power. When reviewers spent time with Ice Lake earlier this month, we specifically noted how giving a CPU more TDP headroom lets it run faster, as shown below:
<img class="aligncenter size-large wp-image-297005" src="https://www.extremetech.com/wp-content/uploads/2019/08/657642-intel-ice-lake-cpu-tests-pov-ray-1-640x438.png" alt="Intel Ice Lake CPU POV-Ray tests" width="640" height="438"/>
We do not know how much faster the Core i7-10710U is at a 25W TDP than at 15W. What matters is that Intel is painting a false picture of the kind of comparison it is making in its 10th-generation launch slides. Comparing laptop performance across two different TDP brackets for your performance claims, only to turn around and compare what is essentially a different machine configuration for battery life, is disingenuous. Switching between 15W and 25W operating modes might not sound like a big deal, but it is not a switch an end user can flip. When you buy one of these chips, you get either the higher-performance 25W version or the lower-power 15W version, and OEMs do not typically communicate the finer points of their power-management strategies or their SKU choices.
The final reason to suspect that TDP is limiting CPU performance here? The gains are not large enough. Moving from four cores to six may not be as big an upgrade as going from 2C/4T to 4C/8T, but it should still deliver up to a 1.5x baseline improvement, and there are plenty of benchmarks that will show that kind of gain, provided the chip is not already bumping into thermal limits.
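That 1.5x expectation, and why a shared power budget erases it, is simple arithmetic under the assumption that multithreaded throughput scales with cores times sustained clock (all numbers below are illustrative, not measured):

```python
# Toy throughput model: multithreaded performance ≈ cores * sustained clock.
clock_ghz = 3.0  # hypothetical sustained all-core clock on the same silicon

quad = 4 * clock_ghz
hexa_unconstrained = 6 * clock_ghz      # same clocks, 50% more cores
hexa_capped = 6 * clock_ghz * 4 / 6     # clocks cut to fit the same power budget

print(hexa_unconstrained / quad)  # 1.5: the baseline gain from two extra cores
print(hexa_capped / quad)         # 1.0: the gain evaporates under the shared cap
```

If a 15W envelope forces the six-core's sustained clocks down by roughly the same ratio as the core count went up, the generational gain a buyer actually sees in sustained workloads approaches zero, which is consistent with Intel quoting its headline numbers at 25W.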
Intel is launching a full suite of U and Y class components, as shown below:
Apart from the Core i7-10710U, improvements are hard to find. The Core i7-10510U is a 1.8GHz base part with a 4.8GHz single-core boost and a 4.3GHz all-core boost. Intel has not disclosed all-core boost frequencies for chips like the Core i7-8665U, but that processor is a 1.9GHz base / 4.8GHz boost part. The graphics EU count and graphics frequency are identical between the two parts. The Core i7-10710U supports LPDDR4X-2933, LPDDR3-2133, or DDR4-2666, where the Core i7-8665U supports only DDR4-2400 or LPDDR3-2133, but those improvements will be of limited value to users; Intel CPUs are not particularly constrained by RAM bandwidth.
These chips will also carry other Intel improvements, such as faster Wi-Fi and support for Intel's Dynamic Tuning technology. They collectively target a 7W envelope (Intel's 10nm parts do not scale below 9W) and offer a maximum boost frequency of 4.9 GHz, compared with 4.1 GHz for Intel's 10nm Ice Lake processors. According to Intel, these U- and Y-series parts are for customers who want excellent CPU performance but are less concerned with graphics. Apart from the new six-core SKU, all of the new chips are quad-core parts.
Our reading of the situation is as follows: Intel is struggling to contain a resurgent AMD by doubling down in the market where AMD has historically been weakest: mobile. 10nm needed to be on the market by the 2020 holidays for a host of reasons, but Intel is not manufacturing enough chips to commit to a top-to-bottom 10nm refresh in this segment. Instead, we get a mix of 14nm and 10nm components to cover the market's general needs. The 10nm processors offer higher IPC and a greatly improved graphics core, but significantly lower frequencies. The 14nm chips will theoretically anchor the top of the product line with a "halo" six-core part.
But this time, the situation is different. When Intel's mobile processors went from 2C/4T to 4C/8T configurations, the company held the line on TDP for multiple product cycles; it had thermal margin to spare. This time, the company has telegraphed that its six-core processor is gasping for thermal headroom at 15W. We do not know what the real improvements are between the Core i7-8565U and the Core i7-10710U, but we can bet they are smaller than the 16 and 41 percent figures Intel cited. And if, by chance, you get a 25W laptop with a Core i7-10710U, it will not offer battery life equivalent to the same configuration with a 15W processor unless the manufacturer gives it a much heavier battery, which means you may get more cores and equivalent battery life, but you will pay for it in extra weight.
One of the long-standing trends in semiconductor manufacturing has been the steady decline in the number of major foundry players. Twenty years ago, when 180nm manufacturing was state-of-the-art, no fewer than 28 companies deployed the node. Today, just three companies are developing 7nm technology: Samsung, TSMC, and Intel. A fourth, GlobalFoundries, has exited the leading edge to focus on specialized foundry technologies such as its 22nm and 12nm FDX processes.
What is sometimes forgotten in this discussion is the existence of a second tier of foundries that also deploy new nodes, just not at the leading edge of technological research. China's Semiconductor Manufacturing International Corporation (SMIC) announced it would begin recognizing revenue from 14nm volume production by the end of 2019, just over five years after Intel began shipping on that node. TSMC, Samsung, and GlobalFoundries all have 14nm-class production capacity, as does UMC, which introduced the node in 2017.
Second sources for a node, such as UMC and SMIC, are often left out of manufacturing comparison tables like the one presented below, because these companies offer a node only after it has been deployed in leading-edge products by the major foundries. In many cases, their capacity is used by smaller customers with products that do not make headlines.
SMIC, however, is a special case. SMIC is the largest semiconductor manufacturer in mainland China and builds chips on processes ranging from 350nm to 14nm. The company has two plants capable of handling 300mm wafers, but while the transition to 14nm is an important part of China's long-term semiconductor initiative, SMIC is not expected to have much 14nm capacity in the near future. The company's high utilization rate (~94 percent) leaves it little spare capacity to devote to 14nm production. SMIC is vital to China's long-term manufacturing goals; the country's "Made in China 2025" plan calls for 70 percent of domestic semiconductor demand to be met by local companies by 2025. Ramping SMIC's production and bringing new product lines online is essential to reaching that target. This distinguishes the company from a foundry such as UMC, which has generally chosen not to compete with TSMC on advanced process nodes. SMIC wants that business; it simply has not been able to compete for it.
Zhao Haijun and Liang Mong Song, co-CEOs of SMIC, issued a statement on the company's 14nm ramp, as follows:
FinFET research and development continues to accelerate. Our 14nm process is in risk production and should generate meaningful revenue by the end of the year. In addition, our second-generation FinFET, N+1, has already begun customer engagement. We maintain consistent, long-term cooperation with our customers and are seizing opportunities emerging from 5G, IoT, automotive, and other trends.
Currently, only 16 percent of the semiconductors used in China are built in-country, but China is adding semiconductor production capacity faster than anywhere else on the planet. SMIC is investing in a $10 billion manufacturing facility dedicated to 14nm production, and it is already installing equipment in the completed building, so output from the plant should ramp in 2020. Once online, the facility will give the company far more 14nm capacity. SMIC's main known customers are HiSilicon and Qualcomm; Texas Instruments has built chips with the company in the past (it is not clear whether it still does), as has Broadcom. TSMC has sued SMIC over intellectual property misappropriation on more than one occasion; both cases were settled out of court, with substantial payments to TSMC.
Despite this spending, analysts do not expect SMIC to catch up with the leading foundries in other countries anytime soon; analysts told CNBC it could take the company a decade to close the gap with the other major players. The exact dimensions of SMIC's 14nm node are unknown; foundry node names are defined by each individual company rather than by a global standards body or by reference to a specific metric. Those seeking additional information on this topic can find it here.