In early October, Microsoft will unveil the latest hardware in its Surface lineup. In recent years these updates have been generally modest, with a handful of new products (Surface Studio, Surface Laptop) designed to fill various market niches. According to rumors, however, Microsoft will announce a major change at its October event: a Surface laptop powered by an AMD chip rather than an Intel microprocessor.
Based on a series of corresponding entries in non-public databases of European retailers, we are confident that the new 15-inch Surface Laptop 3 will definitely be equipped with AMD processors … As things stand, we have found three AMD-based models of the 15-inch Surface Laptop 3.
If true, this would be a major marketing victory for AMD. AMD may not have Apple's marketing cachet, and Microsoft's PC business is not as large as a major OEM's, but it would still be a visibility win for the company's APUs. At the same time, however, it would be a little surprising. AMD's current mobile APU lineup, which includes the Ryzen 7 3700U and Ryzen 5 3500U in the 3000 series, relies on the company's 12nm Zen+ design, not the 7nm mobile APUs currently under development. With the processor performance and power-consumption improvements expected at 7nm, AMD's upcoming 7nm mobile products should be a significant leap over its current parts. But those parts aren't here yet.
Intel's Ice Lake and Coffee Lake processors are generally faster than the 3700U in a number of direct comparisons, though readers need to be aware of two important factors. First, these systems can carry very different prices (PC Perspective compared a $1,300 Lenovo AMD system against an Ice Lake system costing roughly $1,900). Second, AMD systems are sometimes configured in ways that throttle their performance. Lenovo's AMD systems sustain clocks up to 60 percent higher than HP AMD systems with the same APU, according to one comparison by NotebookCheck.
Presumably this kind of manufacturer-driven nonsense wouldn't weigh on Microsoft's own products. But it's interesting to see the rumor that Microsoft would turn to AMD at a juncture like this one. About eight years ago there was a persistent rumor that Apple had seriously considered using AMD's Llano for an iteration of the MacBook Air. Rumors like this are not uncommon, but this one was stronger than most. According to sources we spoke with at the time, Apple ultimately passed on the product because it was worried about AMD's long-term roadmap and its ability to deliver iterative revisions of the processor family that would improve the Air's thermals and performance.
It's possible Microsoft's interest has been piqued by the opposite dynamic this time. The Ryzen 7 3700U and Ryzen 5 3500U are solid parts to start with. Although this story is a rumor and deserves to be treated as such, it's not impossible that the Ryzen family strikes a price/performance balance at 12nm that Microsoft found attractive, especially on the GPU side, where the Ryzen family offers strong performance (Ice Lake has a faster built-in GPU, but is also relatively hard to find). More interesting still would be if Microsoft is confident enough to build a product line on Ryzen because it expects AMD's 7nm APU family to keep delivering excellent performance scaling.
Even if this is true, we probably won't hear about it on October 2, and we don't yet know whether the initial rumor is accurate. I doubt Microsoft will be the surprise launch vehicle for AMD's new 7nm APUs. It's not impossible, but I suspect we would have heard by now if it were happening, and there has been no word to date. Whatever the case, winning a slot in a Microsoft Surface product would be a major victory for AMD, whether at 12nm or 7nm. The absolute sales volume of any given Surface device may not be huge, but the marketing gain would be considerable.
Dual-screen computing devices have never entirely made sense, but it's clear that more than one company is preparing to bring this type of product to market. The latest Microsoft news suggests we'll see more such products in the future. The company has just been granted a new patent for dual-screen devices, and it is not the only one filing them.
Microsoft's latest patent describes a dual-screen device that uses a single flexible display. The description states:
A flexible display can be attached to the first and second parts. The hinge assembly can provide several features facilitating the use of a single flexible display. During rotation of the first and second parts, the hinge assembly can change the length of the device under the flexible display to reduce stresses on the flexible display. This aspect can be achieved with a cord that connects the first part to the hinge assembly. A length of a cord passage may change during rotation, such that the cord pulls the first part toward the hinge assembly and/or allows the first part to move away from the hinge assembly, depending on the orientation.
Many of the patent drawings offer extremely detailed examples of how such hinges bend and function:
There is a good deal of discussion of how the hinge design interfaces with the mechanics of an OLED panel. In itself, this is further evidence that the company is working on dual-screen devices. We've known that for a long time, given the rumors around both Centaurus and a new dual-screen Surface device that could appear at Microsoft's unveiling event. But Microsoft isn't the only company pursuing dual-screen patents. News surfaced earlier this month of a Dell patent on these types of devices, too, and Lenovo has demonstrated its foldable ThinkPad X1 computer.
Not all of these devices are the same, to be sure. Some, like Lenovo's, have a single folding screen, similar to the folding phone prototypes already shown by Samsung and Huawei. Others, such as the Microsoft and Dell patents, describe a device with multiple screens that fold together rather than a "foldable PC."
The fact that companies patent technology does not necessarily mean they intend to ship it. It is not uncommon for companies to engage in so-called defensive patenting, pre-emptively filing patents to ensure that other companies can't sue them, or building a war chest of patents to license or assert if they find themselves in a patent war. Dig a little and you'll find plenty of technology articles explaining how companies buy other companies just for their patents.
But this feels different. It's one thing for companies to position themselves strategically on patents in an area like mobile, which is heavily encumbered by them. Dell, Microsoft, and other companies are filing patents on specific types of hinge designs and other aspects of construction – the kind of protection you would want if you had invested a lot of money in building concrete products. As Mehedi Hassan notes, the author of this particular patent has written several similar filings, and the design drawings are incredibly detailed. This isn't a vague idea Microsoft is trying to patent in order to corner the market on a broad class of devices; it's plainly a specific design for a particular product.
The most interesting dual-screen devices we've seen so far are gaming-oriented notebooks, with a secondary display used to show information during games. Dual-screen devices that can be used like books, or with the second screen partly serving as a keyboard, have been shown before, but there have always been more questions than answers about the user interface and how software would be written (or, more likely, updated) to take advantage of the feature.
Either a number of companies are spending money on secret dual-panel devices they plan to bring to market in the next 12 to 18 months, or we're seeing the output of a lot of skunkworks R&D that may never pan out. It wouldn't be the first time a company devoted months or years of effort to products that never shipped, but the buzz around dual-screen (or foldable) PCs hovers in that liminal space between "an undeniable product trend" and "testing the waters to see if a market exists." Companies may be hoping that new form factors will rejuvenate the personal computer market, which is expected to keep declining over the next few years. High-end boutique systems and 2-in-1s are among the only bright spots in the space, so it's logical for companies to hunt for profitable niches that might catch consumers' interest.
Feature image: Microsoft's never-launched "Andromeda" device design, via Thurrott.com
These security flaws would matter less if they could be corrected without cost to the consumer, but Intel chips keep losing performance with each newly discovered vulnerability. The latest vulnerability affects the company's Xeon line, which since 2011 has included a feature that improves server performance by letting network adapters and other devices write directly to the processor's cache instead of to the computer's RAM, as was traditionally done.
This system, called DDIO, improves bandwidth and reduces latency and power consumption. However, researchers have discovered that an attacker can exploit a flaw in it to capture keystrokes and other sensitive data passing through the memory of servers with Intel Xeon processors, even in a virtualized environment. A server typically runs a multitude of individual instances for different users via virtualization, and an attacker can access other people's data on the same server, even when that data is being carried over a secure SSH session.
The attack has been dubbed NetCAT (Network Cache ATtack), and Intel has been forced to recommend that users disable DDIO and RDMA on untrusted networks, which reduces performance. For those who don't disable them, the researchers warn that new attacks based on this method may be discovered to steal other kinds of data; even disabling RDMA is not a complete fix, since they used it only to simplify the attack. It is therefore very likely that we will see new attacks built on this technique in the coming months.
All of this was made possible by reverse engineering DDIO, through which the researchers discovered that the cache is shared between the CPU and peripherals even when those peripherals receive suspicious or malicious data. The accessible information includes the precise moment each data packet arrives, including packets belonging to SSH connections. With that, it is possible to infer individual keystrokes from the time between key presses: people type an "s" after an "a" faster than they move from "s" to "g," for example. The following video shows how it works.
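The timing-inference side of this idea can be sketched in a few lines. To be clear, this is only an illustration of the principle: the digraph latencies and packet timestamps below are invented for demonstration, and the real NetCAT attack recovers packet arrival times through cache measurements rather than an ordinary packet capture.

```python
# Illustrative sketch of inter-keystroke timing inference (not the real attack).
# In an interactive SSH session, each keystroke produces one network packet,
# so the gap between packet arrivals approximates the gap between key presses.

# Hypothetical average inter-key latencies (ms) for a few digraphs, as might
# be learned from a typing dataset. These numbers are made up.
DIGRAPH_LATENCY_MS = {
    "as": 110,  # 'a' -> 's': adjacent keys, typed quickly
    "st": 125,
    "sg": 180,  # 's' -> 'g': longer reach, typed more slowly
}

def likeliest_digraph(observed_gap_ms, model=DIGRAPH_LATENCY_MS):
    """Return the digraph whose average latency best matches the observed gap."""
    return min(model, key=lambda d: abs(model[d] - observed_gap_ms))

# Packet arrival timestamps (ms), one per keystroke.
arrivals = [0, 112, 295]
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]  # [112, 183]
guesses = [likeliest_digraph(g) for g in gaps]
print(guesses)  # -> ['as', 'sg']
```

A real attack combines many such observations statistically (the researchers used a trained model over large typing corpora), but the core signal is exactly this: arrival-time gaps leak which keys were likely pressed.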
The researchers also make a key recommendation: processor manufacturers should take more care to ship security-vetted features when they change chip architectures, before installing them on millions of devices around the world. AMD appears to have followed that path; its Ryzen processors have not yet suffered a vulnerability that costs them performance, unlike Intel's.
Written by Alberto Garcia
The gap between the expected boost behavior of AMD's 7nm Ryzen processors and their actual measured behavior has been one of the questions reviewers have fielded most often in recent weeks. Today, AMD updated the user community on how its processors are expected to behave and on the changes that will be incorporated into future UEFI revisions.
To summarize briefly: reports published at the end of July showed that some AMD processors only reached their top boost clock on a single core. Last week, the overclocker der8auer released the results of a user survey showing that only some AMD 7nm Ryzen processors reached their advertised maximum (the exact percentage varies by model). Late last week, Paul Alcorn of Tom's Hardware published a thorough test of how different AMD AGESA versions and motherboard UEFI releases affect boost clocks. AGESA is AMD's Generic Encapsulated Software Architecture – the library of procedures used to initialize the CPU and various components. Motherboard vendors use AGESA as a template for building their UEFI releases.
What THG found is that different UEFI and AGESA versions produced slightly different boost results. The latest versions boosted slightly lower than the earlier versions used for reviews. At the same time, however, those newer versions also often held their boost clocks longer before clocking the processor back down.
It also emerged that boost temperature limits had been subtly adjusted, from 80°C down to 75°C and then back up to 77°C. Such changes wouldn't necessarily hurt performance – the processor boosts slightly lower, but for longer – but it wasn't clear that this was what AMD intended to accomplish. During its IFA presentation last week, Intel argued that these subtle variations were evidence that AMD was wrestling with a potentially significant reliability issue in its processors. THG was unwilling to endorse that explanation without more information.
While all this was unfolding, AMD told us it would make an announcement on September 10 regarding a new AGESA update.
The text below comes directly from AMD and concerns the improvements coming to motherboard manufacturers' updated UEFIs. I don't normally quote a blog post this extensively, but I think it's important to present the exact text of what AMD is saying.
Our analysis indicates that the processor boost algorithm was affected by an issue that could cause target frequencies to be lower than expected. This has been resolved. We've also been exploring other opportunities to optimize performance, which can further enhance frequency. These changes are now being implemented in flashable BIOSes from our motherboard partners. Across the stack of our internal testing, these changes can add approximately 25-50MHz to current boost frequencies under various workloads.
AMD Reference Motherboard (AGESA 1003ABBA Beta BIOS)
AMD Wraith Prism and Noctua NH-D15S coolers
Windows 10 May 2019 Update
22°C ambient lab temperature
Streacom BC1 open benchtable
AMD Chipset Driver 1.8.19.xxx
AMD Ryzen Balanced power plan
Default BIOS settings (except memory OC)
These improvements will appear in flashable BIOSes over roughly the next two to three weeks, depending on your motherboard manufacturer's testing and implementation schedule.
Going forward, it is important to understand how our boost technology works. Our processors perform intelligent real-time analysis of CPU temperature, motherboard voltage-regulator current (amps), socket power (watts), loaded cores, and workload intensity to maximize performance from millisecond to millisecond. Ensuring your system has adequate thermal paste; reliable system cooling; the latest motherboard BIOS; reliable BIOS settings/configuration; the latest AMD chipset driver; and the latest operating system can enhance your experience.
Following installation of the latest BIOS update, a consumer running a bursty, single-threaded application on a PC with the latest software updates and adequate voltage and thermal headroom should see the maximum boost frequency of their processor. PCMark 10 is a good proxy for a user test of maximum processor boost on a given system. If users run a workload like Cinebench, which sustains load for an extended period of time, operating frequencies may be lower than the maximum throughout the run.
In addition, we want to address recent questions about reliability. We perform extensive engineering analysis to develop reliability models and to model the lifespan of our processors before mass production begins. While AGESA 1003AB contained changes intended to improve system stability and performance for users, no changes were made for product-longevity reasons. We do not expect the boost-frequency enhancements in AGESA 1003ABBA to affect the lifespan of your Ryzen processor. (Emphasis added.)
AMD also provided information on a firmware change implemented in AGESA 1003ABBA that is intended to reduce processor operating voltage by filtering voltage/frequency boost requests from lightweight applications. AGESA 1003ABBA now contains an activity filter designed to ignore "intermittent OS and application background noise." This should hold processor voltage around 1.2V rather than the higher spikes users had reported.
Finally, AMD will release a new monitoring SDK that will let anyone build a monitoring tool to measure various facets of Ryzen processor behavior. More than 30 API calls will be exposed in the new SDK, including:
Current operating temperature: Reports the average temperature of the CPU cores over a short sample period. By design, this metric filters out the transient spikes that can distort temperature reporting.
Peak Core(s) Voltage (PCV): Reports the voltage identification (VID) requested by the CPU package from the motherboard's voltage regulators. This voltage is set to serve the needs of the cores under active load, but does not necessarily correspond to the final voltage experienced by every core in the CPU.
Average Core Voltage (ACV): Reports the average voltage experienced by all processor cores over a short sample period, accounting for active power management, sleep states, VDROOP, and idle time.
EDC (A), TDC (A), PPT (W): Current and power limits for your motherboard's VRMs and CPU socket.
Peak speed: The maximum frequency of the fastest core during the sample period.
Effective frequency: The frequency of the processor cores after accounting for time spent in sleep states (for example, core sleep CC6 or package sleep PC6). Example: a processor core runs at 4GHz while awake but sits in CC6 sleep for 50 percent of the sample period. That core's effective frequency would be 2GHz. This value can give you an idea of how often the cores are using aggressive power-management features that are not immediately obvious from clock or voltage readings alone.
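AMD's worked example reduces to a simple weighting: the awake clock scaled by the fraction of the sample period the core actually spent awake. A minimal sketch (the function name is mine, not part of AMD's SDK):

```python
# Effective frequency as AMD describes it: awake clock weighted by the
# fraction of the sample period the core spent out of sleep (CC6/PC6).

def effective_frequency(awake_ghz, awake_fraction):
    """Awake clock (GHz) times the share of the sample period spent awake."""
    return awake_ghz * awake_fraction

# AMD's example: 4GHz while awake, asleep in CC6 for 50% of the period.
print(effective_frequency(4.0, 0.5))  # -> 2.0
```

A real monitoring tool would derive `awake_fraction` from sleep-state residency counters over the sample window; the arithmetic itself is this simple.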
Various voltages and clocks, including: SoC voltage, DRAM voltage, fabric clock, memory clock, and more.
Ryzen Master has already been updated to report average core voltage values. AMD expects motherboard manufacturers to release new UEFIs with AGESA 1003ABBA built in within about two weeks. As we wrote last week, and despite rumors started by Shamino, an Asus employee, AMD in no way characterizes these boost-behavior adjustments as reliability-related.
As for AMD's statements on improved clocks, I want to see how these changes affect the behavior of our own test processors before drawing conclusions. I will say that I don't expect much change in overall performance – 25-50MHz is only a 0.6 to 1.2 percent improvement on a 4.2GHz processor, and we may not even be able to detect a performance change in a standard benchmark from a clock shift that small. But we can monitor clock speeds directly, and we will report on the impact of these changes.
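For scale, the relative size of that bump is a quick back-of-the-envelope computation, not a benchmark:

```python
# Relative size of a 25-50MHz boost increase on a 4.2GHz base clock.
base_mhz = 4200
for bump_mhz in (25, 50):
    pct = 100 * bump_mhz / base_mhz
    print(f"+{bump_mhz} MHz -> {pct:.2f}% of {base_mhz} MHz")
# -> +25 MHz -> 0.60% of 4200 MHz
# -> +50 MHz -> 1.19% of 4200 MHz
```

Gains around one percent sit well within typical run-to-run variance for most standard benchmarks, which is why direct clock monitoring is the more sensible way to verify the change.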
Earlier this week, reports revealed that some users were having problems with Windows 10 1903. The latest cumulative update for the operating system, KB4512941, can drive CPU utilization up by 30 percent or even 100 percent for some users. Separately, some users report that Windows desktop search is completely broken.
According to Microsoft, the broken-search problem only affects systems that have disabled the "Search the web" feature built into desktop search. I admit this kind of acknowledgment always makes me a little grumpy, mostly because I've never understood why anyone would want a web search feature built into local computer search. If I'm searching my desktop, I am by definition not searching the World Wide Web. Cluttering my results with data from places that won't contain what I'm looking for isn't added value; it actively undermines the point of using local search rather than a web search.
Beyond these issues, Windows 10 1903 still has a long list of problems. The company still doesn't recommend installing 1903 on Surface Book 2 models with a discrete GPU because the update can break the discrete GPU's functionality. Some Qualcomm and Realtek driver versions aren't compatible with the update. Some users with a particular Intel audio driver have reported faster-than-expected battery drain, and the company still hasn't fixed the problem with gamma ramps, color profiles, and night light settings – a bug that produced spectacular (and disconcerting) visual results when I tested the 5700 and 5700 XT for AMD's Navi launch in July.
In most cases, Microsoft has "mitigated" these problems by preventing affected products from automatically updating to Windows 10 1903. The trouble with this approach is that it does nothing for people who already updated to 1903 and didn't trace their problems back to it until the rollback window had closed. It's easy to remove a single Windows update, such as the cumulative KB4512941 that's causing the new CPU usage issues, but you must act within 30 days to roll back the full 1903 installation.
As for the broken desktop search feature, this is a problem I've run into with previous Windows 10 updates. When I updated my desktop to Windows 10 1809, desktop search broke. I rolled the update back as a result, even though the data-deletion bug that hit some users didn't affect me.
Affected users should try uninstalling KB4512941. Users who still can't install 1903 will have to wait and see whether Microsoft resolves these issues before 1909 arrives. Microsoft has made significant changes to how it tests future Windows versions through the Windows Insider program. Those changes should produce a longer overall test cycle and, hopefully, updates that ship with less broken code, but 1903 was released before the changes took effect. We may not see their impact until Windows' 2020 updates.
Since at least Computex, Intel has been talking to reviewers about the types of tests we run, the applications users actually rely on, and whether our tests reflect "real-world" performance. Specifically, Intel believes tests such as Cinebench receive far too much weight, while the applications people actually use go virtually ignored.
Let's set a few things aside up front.
Every company has benchmarks it favors and benchmarks it doesn't. The fact that some tests run better on AMD than on Intel, or on Nvidia versus AMD, is not, in itself, evidence that the benchmark was deliberately designed to favor one company over another. Companies tend to fret about the benchmarks reviewers use when they're under increased competitive pressure in the market. Those of you who think Intel is raising questions about the tests we collectively rely on partly because it loses a lot of those tests are not wrong. But the fact that a company has self-interested reasons for asking questions does not automatically mean it's also wrong. And since I don't spend dozens of hours – and the occasional late night – testing hardware in order to give people a mistaken picture of how it performs, I'm always willing to revisit my own conclusions.
What follows are my own thoughts on the situation. I don't claim to speak for any reviewer but myself.
Favoring real-world hardware performance tests is one of the least controversial opinions you can hold in computing. I've met people who didn't particularly care about the difference between synthetic and real-world tests, but I can't remember ever meeting anyone who thought real-world tests were irrelevant. Yet the fact that nearly everyone agrees on this point doesn't mean everyone agrees on where the boundary between a real-world test and a synthetic benchmark lies. Consider the following scenarios:
You'll have your own opinion about which of these scenarios (if any) constitute real-world benchmarks and which don't. Let me ask a different question – one I think matters more than whether a test is "real": Which of these hypothetical benchmarks tells you something useful about the performance of the product being tested?
The answer is: potentially, all of them. The benchmark I choose is a function of the question I'm asking. A synthetic or standalone test that serves as a good model for a different application still accurately models performance in that application. It can be a much better model of real-world performance than testing done in an application highly optimized for one specific architecture. Even if every test of the optimized application is "real world" – reflecting actual workloads and tasks – the application itself may be an unrepresentative outlier.
Any of the scenarios I described above can be a good benchmark, depending on how well it generalizes to other applications. Generalization matters in reviews. In my experience, reviewers generally try to balance applications known to favor one company with applications that run well on everyone's hardware. Often, if a vendor-specific feature is enabled in one data set, reviews will include a second data set with the same feature disabled, to provide a more neutral comparison. Relying on vendor-specific flags can limit a test's ability to speak to a broader audience.
So far, we've discussed whether a test is "real" strictly in terms of whether its results generalize to other applications. But there's another way to frame the topic. Intel surveyed users to find out which applications they actually use, then presented us with that data. It looks like this:
The implication is that by testing the applications most commonly installed on users' hardware, we can arrive at a more representative picture of real-world use. That feels intuitively true – but the reality is more complicated.
The fact that an application is frequently used does not make it a good benchmark. Some applications are not particularly demanding. While there are absolutely scenarios in which measuring Chrome performance matters – low-end laptops, for instance – good reviews of those products already include such tests. In a high-end enthusiast context, Chrome is unlikely to be a demanding application. Are there test cases that can make it taxing? Yes. But those scenarios don't reflect how the application is most commonly used.
The practical experience of using Chrome on a Ryzen 7 3800X is identical to using it on a Core i9-9900K. Even if it weren't, Google makes it difficult to keep an older version of Chrome around for consistent A/B testing. Many people run extensions and ad blockers, which have their own performance impact. Does this mean reviewers shouldn't test Chrome? Of course not. That's why many notebook reviews do test Chrome, particularly for browser-based battery life, where Chrome, Firefox, and Edge are known to produce different results. Match the benchmark to the situation.
There was a time when I spent far more of my testing on the kinds of applications on this list than I do today. When I started my career, most benchmark suites focused on desktop applications and basic 2D graphics tests. I remember when swapping someone's graphics card could dramatically improve 2D image quality and the responsiveness of the Windows UI, even without a monitor upgrade. When I wrote for Ars Technica, I wrote CPU-utilization comparisons for HD video decoding, because at the time there were meaningful differences to find. Think back to the Atom netbook launch: many reviews dwelled on UI responsiveness with Nvidia's Ion GPU solution compared with Intel's integrated graphics. Why? Because Ion had a noticeable impact on overall UI performance. Reviewers aren't oblivious to these issues. Publications tend to return to them whenever there's meaningful differentiation.
I don't pick benchmarks simply because an application is popular, though popularity can factor into the final decision. The goal, in a general review, is to choose tests that generalize to other applications. The fact that someone has Steam or Battle.net installed tells me nothing. Does that person play Overwatch or WoW Classic? Minecraft or No Man's Sky? Do they favor MMORPGs or FPS titles, or are they permanently parked in Goat Simulator 2017? Do they play games at all? I can't know without more data.
Applications on this list that show meaningful performance differences in common tasks are typically tested already. Outlets like Puget Systems publish regular performance comparisons across the Adobe suite. In some cases, the reason applications aren't tested more often is long-standing concern about the reliability and accuracy of the benchmark suite that most often includes them.
I'm always interested in better ways to measure computer performance, and Intel absolutely has a role to play in that process – the company has been helpful many times in finding ways to showcase new features or troubleshoot problems. But the only way to surface meaningful differences in hardware is to run tests that expose meaningful differences. Again, speaking generally: you'll see reviewers probe laptops for shortcomings in battery life and power consumption as well as performance. In GPUs, we look for differences in frame times and frames per second. Because none of us can run every workload, we look for applications whose results generalize. At ET, I run several rendering applications specifically to make sure we don't favor any one vendor or solution; that's why I've tested Cinebench, Blender, Maxwell Render, and Corona Render. Handbrake is our go-to for media encoding, but we test both H.264 and H.265 to make sure we capture multiple cases. When tests prove inaccurate or insufficient to capture the data I need, I use different tests.
The much-argued distinction between "synthetic" and "real-world" benchmarks is a poor framing of the question. Ultimately, what matters is whether the benchmark data a reviewer presents collectively paints an accurate picture of the device's expected performance. As Rob Williams details at Techgage, Intel was only too happy to lean on Maxon's Cinebench back when its own CPU cores dominated the test. In a recent post on Medium, Intel's Ryan Shrout wrote:
Today at IFA we hosted an event for press and analysts on a topic near and dear to our hearts: real-world performance. We've been hosting these events for a few months now, starting at Computex, then E3, and we've learned a lot along the way. The process has reinforced our view on synthetic benchmarks: they're useful if you want a quick, narrow perspective on performance. We still use them internally, and we know many of you do too, but the reality is that they are less and less accurate in assessing real user performance, whatever the product segment in question.
That sounds damning. The post continues with this slide:
To demonstrate the supposed inferiority of synthetic tests, Intel presents 14 separate results, 10 of which come from 3DMark and PCMark. Both applications are generally regarded as synthetic. When the company pits its own performance against ARM's, it repeats the same pattern:
Why would Intel lean on synthetic applications in the very blog post that dismisses them in favor of supposedly "real-world" tests? Perhaps because Intel makes its benchmark choices the way reviewers do: with an eye toward representative, repeatable results, using affordable tests with good features that don't crash or fail for unknown reasons after installation. Perhaps Intel also struggles to keep up with the churn of constantly updated software and picks tests it can rely on to represent its products. Perhaps it wants to keep developing its own synthetic performance tests, such as WebXPRT, without throwing that effort under the bus, even while simultaneously suggesting that the benchmarks supporting AMD's results are inaccurate.
Or maybe it's because the entire synthetic-versus-real-world framing is bad to begin with.
Update (5/9/2019): One thing I neglected to mention is that Intel's most-used-applications data set is drawn entirely from laptops and 2-in-1s, as described in the slide above. We wouldn't expect content creators working in 3D applications such as Blender, Cinebench, or similar workstation software to be using 2-in-1s. The fact that the hardware configurations Intel measured aren't representative of the systems we'd expect to run these applications undercuts any suggestion that those applications matter less because of a small install base.