We covered AMD's Ryzen and Navi announcements at E3 throughout the week, but there's still one aspect of the situation left to discuss. We covered Navi and its RDNA architecture, but not the software enhancements AMD plans to offer alongside its next GPUs. Some of these gains will also be available on GCN cards.
Let's talk about some features and improvements.
First, there are the quality-of-life improvements in AMD's Radeon software. With Navi, the driver automatically switches your TV into its low-latency game mode, if the display supports one. You'll be able to save your settings to a separate file and re-import them if you need to install the driver completely from scratch or reinstall your entire operating system. Some improvements have also been made to how WattMan reports its results.
The AMD Link streaming application now supports streaming to TVs, including Apple TV and Android TV devices. Wireless VR streaming is now supported as well. These enhancements aren't tied to any specific GPU.
Radeon Chill is AMD's technology for reducing GPU power consumption during games. The software can now set frame rate caps on 60Hz displays, reducing the number of frames rendered when you aren't actively controlling your character (when you're AFK, in other words).
AMD's footnote on Radeon Chill is worth reading. Under the right circumstances, the feature can significantly reduce GPU power consumption, though it has an impact on frame rate, and the total size of the gain varies from title to title. Any GPU that previously supported Radeon Chill can take advantage of these enhancements.
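To make the mechanism concrete, here's a minimal sketch of the general idea as AMD describes it: throttle the frame rate cap when input goes quiet, and restore it the moment the player acts. This is an illustration only, not AMD's implementation; all names and thresholds below are hypothetical.

```python
# A minimal sketch of the general idea behind Radeon Chill: lower the
# frame rate cap when the player is idle, raise it when input resumes.
# All names and thresholds here are hypothetical, for illustration only.
import time

ACTIVE_CAP_FPS = 144   # cap while the player is providing input
IDLE_CAP_FPS = 40      # cap while the player is AFK
IDLE_TIMEOUT_S = 2.0   # how long without input before we throttle

def current_frame_cap(last_input_time: float) -> int:
    """Return the frame rate cap based on recent input activity."""
    idle_for = time.monotonic() - last_input_time
    return IDLE_CAP_FPS if idle_for > IDLE_TIMEOUT_S else ACTIVE_CAP_FPS

def pace_frame(frame_start: float, last_input_time: float) -> None:
    """Sleep off whatever time is left in this frame's budget."""
    budget = 1.0 / current_frame_cap(last_input_time)
    elapsed = time.monotonic() - frame_start
    if elapsed < budget:
        time.sleep(budget - elapsed)  # fewer frames rendered while idle
```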
Next up, Radeon Anti-Lag. According to AMD, it has developed a method of reducing the time between when you press a button in a game and when you see the result on screen. To do this, some CPU work is delayed so it happens alongside the GPU's work rather than being completed well in advance.
Honestly, I can't say I observed a difference between Radeon Anti-Lag enabled and disabled. AMD demonstrated the effect working with custom-built latency monitors attached to displays, and I'm willing to believe the company; the monitor I tested on may simply have had slightly higher latency. I'm also at an age when motor reflexes have already begun to decline, and if I'm honest, I was never a very good twitch player to begin with.
In the best case, this feature reduces your total latency by a few milliseconds. If you're good enough to compete in spaces where that matters, it could be worth something. That's not something I feel qualified to comment on.
Anti-Lag is supported in DX11 on all AMD GPUs. Support for DX9 games is a Navi-only feature. DX12 games are not currently supported because of that API's very different implementation requirements.
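For the curious, here's a rough sketch of the general technique AMD describes: when the GPU is the bottleneck, the CPU would normally finish its frame work early, using input that's already stale by the time the frame displays. Delaying the CPU work means input gets sampled later, and therefore fresher. This illustrates the concept, not AMD's driver logic; every name and timing below is hypothetical.

```python
# Sketch of the general anti-lag technique: rather than letting the CPU
# run a full frame ahead of the GPU, delay the start of CPU work so it
# completes just as the GPU becomes free. Sampling input later shortens
# the input-to-display path. All names here are hypothetical placeholders.
import time

def sample_input():
    """Placeholder: poll mouse/keyboard/controller state."""

def simulate_and_record():
    """Placeholder: run game logic and record GPU command buffers."""

def submit_to_gpu():
    """Placeholder: hand the frame's command buffers to the driver."""

def run_frame(cpu_frame_time_s: float, gpu_frame_time_s: float) -> None:
    # When the GPU is the bottleneck, the CPU has slack time each frame.
    slack = gpu_frame_time_s - cpu_frame_time_s
    if slack > 0:
        time.sleep(slack)   # spend the slack BEFORE sampling input...
    sample_input()          # ...so the input driving this frame is fresh
    simulate_and_record()   # CPU work lands right as the GPU frees up
    submit_to_gpu()
```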
Radeon Image Sharpening (RIS) is a feature that combines contrast-adaptive sharpening with GPU upscaling to improve the quality of the base image without paying the penalty of native 4K rendering. The following slides compare RIS enabled and disabled.
RIS is disabled in the slide above.
RIS is enabled in this slide. The effect is quite subtle; you may want to open the two images above in separate tabs, zoom in carefully, and then compare the final product. While there's a clear improvement in image quality (IQ) in the "ON" shot, it's a small one.
Nevertheless, small IQ improvements are generally welcome. RIS was designed by Timothy Lottes, who created FXAA while at Nvidia. Using the feature should have virtually no performance cost (AMD estimates the impact at 1 percent or less). RIS is a Navi-only feature and is currently supported only in DX9 and DX12.
Finally, there is FidelityFX.
FidelityFX is AMD's new addition to GPUOpen, offered to any developer who wants to take advantage of it. Its Contrast Adaptive Sharpening (CAS) can be used on any GPU if developers adopt it.
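Since CAS is published on GPUOpen, the broad strokes of the technique are public: sharpen each pixel in proportion to how little contrast its neighborhood already has, which recovers detail without the halos a fixed sharpening kernel produces. The snippet below is a simplified, CPU-side illustration of that idea, not AMD's actual shader.

```python
# A simplified illustration of contrast-adaptive sharpening: sharpen more
# where local contrast is low, less where it is already high, to avoid
# halos. This is a sketch of the concept, not AMD's published CAS shader.
import numpy as np

def contrast_adaptive_sharpen(img: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """img: 2D float array (single channel, values in [0, 1])."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3]       # 3x3 neighborhood
            contrast = win.max() - win.min()     # local dynamic range
            # Less sharpening where contrast is already high.
            amount = strength * (1.0 - contrast)
            # Unsharp-mask style: push the center away from the local mean.
            center = img[y, x]
            out[y, x] = np.clip(center + amount * (center - win.mean()), 0.0, 1.0)
    return out
```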
Finally, some extra hardware details on Navi that weren't in the previous articles but probably should have been (blame a frenetic briefing schedule and some scrambled note-taking):
AMD plans to keep GCN GPUs on the market to handle HPC workloads. The AMD engineer we spoke to compared GCN to a sword: extremely effective if balanced properly, but relatively tiring to use. RDNA, by contrast, was more of a lightsaber, focused on elegance and economy of motion. GPUs such as the MI50 and MI60 also offer far more memory bandwidth and larger memory pools than any of the Navi cards coming to market.
RDNA should eventually replace GCN in this space as well and correct some of the slow-path anomalies GCN suffers from. Irregular performance with certain texture formats has been fixed, for example, and RDNA has larger caches to prevent pipeline bubbles. Overall performance should be more predictable with RDNA-derived GPUs than with GCN.
There's nothing earth-shattering in these details, but I wanted to include them for the sake of completeness. This concludes our E3 coverage.
AMD announced its new Radeon GPUs at E3 last night. The new Radeon RX 5700 and 5700XT are positioned as responses to Nvidia's RTX 2060 and 2070 family, rather than a frontal assault on the RTX 2080. As previously announced, the Radeon VII will remain on the market positioned against the RTX 2080.
Navi, however, looks like a significant step forward for AMD on several fronts. We'll have a deep architectural dive in the near future, but for now, let's look at the speeds, feeds, and competitive positioning. You can click any slide to open a larger version in a new window.
The Radeon 5700XT is a 40 CU (compute unit) design, with 2,560 stream processors, 9.75 TFLOPS of floating-point performance, and clocks far higher than anything we've seen from AMD before. AMD's new RDNA architecture, which finally replaces GCN, is significantly more efficient than its predecessor, with a projected 1.25x increase in performance per clock. The TDP on the 5700XT is 225W, compared with 295W on the Vega 64. Power is supplied via one 8-pin and one 6-pin connector.
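As a quick sanity check on those numbers: FP32 throughput is two operations (a fused multiply-add) per stream processor per clock, so the quoted 9.75 TFLOPS implies a boost clock of roughly 1.9GHz.

```python
# FP32 throughput = 2 ops (fused multiply-add) x stream processors x clock.
# Working backward from the quoted 9.75 TFLOPS gives the implied boost clock.
stream_processors = 2560
tflops = 9.75
implied_clock_ghz = tflops * 1e12 / (2 * stream_processors) / 1e9
print(f"Implied boost clock: {implied_clock_ghz:.2f} GHz")  # ~1.90 GHz
```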
The Radeon 5700 is a 36 CU design with 2,304 stream processors and the same 8GB RAM pool as the 5700 XT. As expected, there's no sign of HBM on these products. The 5700 carries a 180W TDP and the same 1x 8-pin + 1x 6-pin power configuration.
According to AMD, its new RDNA architecture is a major improvement over GCN, with substantial gains in performance per clock, raw clock speed, and performance per watt.
The 1.25x performance-per-clock improvement doesn't factor in clock speed gains; it comes in addition to them. This is a good time to talk about AMD's newly defined clock scheme, so let's dig in.
The base clock on these cards is roughly equivalent to what you'll see if you run a power-virus workload like FurMark. The "Game Clock" is a conservative estimate of the clock you'll see when running current titles over long periods of time. According to AMD, this is not the GPU's median clock rate over time during gaming; it's actually set a bit lower than expected, to allow for silicon and cooling variation from one system to another. AMD derived its Game Clock values by measuring average GPU clock speeds across 25 different games.
The Boost Clock is an opportunistic clock the GPU will try to hit when possible. Even this value doesn't represent the maximum potential speed (AMD described it as "close to the maximum"). With improvements to the underlying architecture and overall design, GPU clocks are much higher than anything we've seen from AMD before. (We'll discuss this in more detail in the coming days.)
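In code form, the Game Clock methodology AMD describes boils down to something like the sketch below. The game list and safety margin are hypothetical; only the shape of the calculation comes from AMD's description.

```python
# Sketch of the described Game Clock methodology: average the GPU's
# sustained clocks across a suite of games, then set the rated value a
# bit lower to allow for silicon/cooling variation between systems.
# The measurements and margin below are hypothetical.
measured_avg_mhz = [1790, 1825, 1760, 1810, 1775]  # one entry per game tested
suite_average = sum(measured_avg_mhz) / len(measured_avg_mhz)
margin_mhz = 35  # hypothetical safety margin
game_clock = suite_average - margin_mhz
print(f"Rated Game Clock: {game_clock:.0f} MHz")
```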
According to AMD, its power improvements give RDNA 1.5x better performance than an equivalent GCN configuration in power-limited environments. The move to 7nm accounts for just over 20 percent of the total gain, with design and power improvements contributing roughly another 15-18 percent. Most of the improvement comes from the GPU core itself. We'll offer one teaser for the deep dive: RDNA can issue instructions every cycle, whereas GCN takes at least four cycles. The net result of these enhancements is a GPU that's significantly better than the cards it replaces, dramatically improving performance while simultaneously reducing power consumption.
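To illustrate why single-cycle issue matters, consider a chain of dependent instructions. This deliberately simplified comparison ignores wavefront widths and the latency hiding real GPUs do by juggling many waves, but it captures the headline claim.

```python
# Deliberately simplified: assume a chain of N dependent instructions,
# where GCN can issue a given wave's next instruction every 4 cycles and
# RDNA can issue every cycle. Treat this as illustration, not measurement.
n_instructions = 100
gcn_min_cycles = n_instructions * 4   # one issue per 4 cycles
rdna_min_cycles = n_instructions * 1  # single-cycle issue
print(f"GCN: {gcn_min_cycles} cycles minimum, RDNA: {rdna_min_cycles}")
```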
AMD expects the Radeon 5700XT to deliver roughly 1.14x the performance of Vega 64 while consuming 23 percent less power. The rated TDPs on the 5700XT and 5700 are still higher than their Nvidia counterparts', but TDP isn't a direct measure of power consumption; we'll have to test the hardware to see how the GPUs actually compare. The performance improvement per die area is substantial: Vega 64 was a 495mm2 part, while Navi is 251mm2. The RTX 2060 and 2070, for comparison, are 445mm2.
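A back-of-the-envelope calculation using those figures shows the scale of the per-area gain, with the caveat that a good chunk of it comes from the 7nm shrink rather than architecture alone.

```python
# Rough performance-per-area comparison, using AMD's own figures:
# the 5700XT is ~1.14x Vega 64's performance on roughly half the die.
vega64_area_mm2 = 495
navi_area_mm2 = 251
relative_perf = 1.14  # 5700XT vs. Vega 64, per AMD
perf_per_area_gain = relative_perf * (vega64_area_mm2 / navi_area_mm2)
print(f"Perf/area gain vs. Vega 64: {perf_per_area_gain:.2f}x")  # ~2.25x
```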
These gains should put the 5700XT slightly above the GTX 1080 and the RTX 2070 in overall performance. AMD also told us it has read reviewers' feedback on reference cooler noise.
The 5700 and 5700XT don't just use blower-style fans that vent heat out of the system; AMD is also promising to cap them at 43dbA. (It's not clear whether that's the absolute maximum volume, or the maximum volume provided you don't manually set the fan to 100 percent.) This should address one of the recurring complaints about AMD's reference cards: that they're often much louder than the competition. Buyers will also receive a card redeemable for a three-month subscription to Microsoft's Xbox Game Pass for PC service.
We'll have much more to say about Navi and its underlying architecture in the days to come. The GPU appears to take a significant step toward closing AMD's power-consumption gap with Nvidia and appears to hit a better performance-per-watt target than the Radeon VII.
How the pricing of both cards looks will depend on the performance they deliver. The Radeon RX 5700XT will sell for $449, while the RX 5700 is $379. The RX 5700's price is significantly higher than current RTX 2060 cards, which sell for $335 at the low end of the market. The RX 5700XT is a $449 card, compared with the RTX 2070's $500 price point. A $500 50th Anniversary Edition of the card will also be available.
Samsung and AMD have signed an agreement licensing Radeon IP to the mobile giant. This is a major move with important long-term ramifications for Samsung, which is looking to further develop its own SoCs, and it's part of a long-term trend toward increased specialization.
Ten years ago, AMD sold its low-power mobile graphics division, the business behind the Adreno brand, to Qualcomm. At the time, the sale made a reasonable amount of financial sense for the company. Although some analysts criticized AMD for not trying to establish itself in the smartphone market, Intel's own failure in that space demonstrated the wisdom of the approach. In 2009, AMD didn't have the resources to actively pursue smartphone IP development and competitive solutions in the field. Today, the situation is very different, and Samsung isn't paying AMD to design solutions for it; it's licensing the IP stack and doing its own design work. The terms of the agreement haven't been disclosed.
Samsung's decision to license AMD's Radeon technology reflects several trends in the mobile space. In recent years, we've seen more smartphone manufacturers working on customizing their own SoCs. That customization isn't always processor-centric. Some companies continue to rely on ARM for Cortex CPU cores but have built their own Neural Processing Units (NPUs) and/or deployed custom IP in the form of DSP and GPU blocks, as Qualcomm does. Whatever the specifics, the trend among vendors, including companies not generally known for high-end smartphones, is toward more custom semiconductor design work.
Samsung has been pursuing a split strategy for several years. It ships its own Exynos SoCs, based on a custom CPU design, in some international markets, while using Qualcomm SoCs for US products to ensure the best possible experience. According to AnandTech, the current Exynos 9820 is a tri-cluster design, with two M4 cores (Samsung's custom design), two Cortex-A75 cores, and four Cortex-A55 cores. The overall CPU design, however, still lags Qualcomm's Snapdragon and ARM's Cortex-A76 in energy efficiency and overall performance.
The decision to license GPU technology is interesting because the ARM Mali GPU at the heart of phones like the Galaxy S10 holds up fairly well against the Snapdragon 855. While the Snapdragon 855 retains an overall lead, the GPU performance gap is smaller than the relative CPU performance gap. The implication is that Samsung thinks it can further close the gap with Qualcomm, where one exists.
The above graph is from AnandTech's review of the Exynos 9820, and the GPU performance tests show very different characteristics depending on the benchmark. In 3DMark, Android devices dramatically outperform the iPhone; in GFXBench, the opposite is true.
Overall, this deal should help Samsung continue developing its own intellectual property and building custom GPU solutions. It will be a few years before we see the fruits of the agreement, however. AMD can license its patents to Samsung immediately, but building a new GPU on top of newly licensed IP still takes time.
Intel and AMD are extending their GPU partnership with new hardware combining an Intel CPU and a discrete AMD GPU. The two companies previously collaborated on Hades Canyon, a high-end NUC combining an AMD "Vega" GPU (not quite Vega, strictly speaking) with an Intel CPU connected via Intel's EMIB.
The new Islay Canyon NUCs (Next Unit of Computing) are more modest than the hard-hitting Hades Canyon. These tiny systems pair a Radeon RX 540 GPU, to boost overall system capability, with a 15W Intel CPU (Whiskey Lake-U). Intel's official name for the kits is "NUC 8 Mainstream-G Mini PC."
The Islay Canyon NUCs measure 4.6 x 4.4 x 2 inches and can be configured with features such as Intel Optane memory or an Intel Core i7-8565U processor. Mini DisplayPort, HDMI 2.1, three USB 3.1 Gen 2 Type-A ports, a Type-C port, Gigabit Ethernet, and an SDXC card reader are also included.
As for the RX 540, it's a much more modest GPU than the one in Kaby Lake-G, with its integrated HBM. The RX 540 is a mobile GPU with a 512:32:16 configuration (stream processors, texture units, and ROPs, respectively) and 2GB of dedicated GDDR5. The card's performance should be respectable for a low-end, budget GPU. 2GB of RAM is fine for low-end 720p/1080p gaming, and while the GPU won't set any speed records, it should handle a game like League of Legends without any problems.
There's a wide range of NUCs being built under this general "floor plan," if you will, as shown below:
Intel offers these kits in a variety of flavors, ranging from a complete pre-built system to a barebones kit. Which one you choose will have a significant impact on the final price. Finally, the DRAM in these machines is soldered, which could affect their long-term appeal for some users. That said, 8GB of LPDDR3-1866 is appropriate for the kinds of use cases this system is likely to be purchased for. While the PC market does periodically shift toward higher RAM capacities, Windows 10 hasn't changed its minimum RAM requirements. The PC I still use in my own living room is a first-generation Core i7-920 with 8GB of DDR3-1600, and it works just fine. Given that, I wouldn't worry much about the soldered RAM, though some people will choose to buy a system that allows upgrades, simply on principle.
Prices for these new models haven't been announced yet, but they should go on sale soon.
With new information trickling out around the PlayStation 5, it stands to reason that we're also hearing hints about the GPU that powers it. There have been a few major categories of leaks around the chip.
Last year, AdoredTV suggested AMD would aggressively bring a set of GPUs to market to undercut Nvidia's Turing on value. Collectively, those leaks looked like this:
Now, somebody posting to 4channel (the SFW variant of 4chan) claims to work at AMD and to have additional information on the "Radeon RX 3080" (the claim being that AMD will align the Radeon and Ryzen brands around similar model numbers for each family).
According to this leak, Navi has 1MB of additional L2 cache compared with Polaris (3MB total). The L1 cache is now 32KB, up from 16KB. The high-end GPU offers 410GB/s of memory bandwidth on a 256-bit bus, against 256GB/s on a current RX 590.
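If those leaked figures are accurate, a quick back-calculation tells us something about the memory type:

```python
# Memory bandwidth = bus width (in bytes) x per-pin data rate.
# Working backward from the leaked figures:
bus_width_bits = 256
bandwidth_gbs = 410
data_rate_gbps = bandwidth_gbs * 8 / bus_width_bits
print(f"Implied per-pin data rate: {data_rate_gbps:.1f} Gbps")  # ~12.8 Gbps
# ~12.8 Gbps is well beyond GDDR5 (~8 Gbps), which would point to GDDR6.
```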
None of this conflicts with the previous leak, but the new information suggests Navi will target Vega 56 / GTX 1080 performance rather than GTX 1080 / RTX 2070 performance. That's new, and it isn't good news if you were hoping AMD would land a devastating blow against Nvidia's RTX family.
Here's the problem: first, the gap between the Vega 56 and the GTX 1080 is between 10 and 20 percent, depending on which games you compare. The two cards weren't positioned as equivalents; Vega 64 was the closer point of comparison to the GTX 1080. The RTX 2070, meanwhile, is about 8 percent faster than the GTX 1080. A Vega 56-class Navi would therefore be markedly slower than an RTX 2070.
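Compounding those quoted gaps makes the problem clear:

```python
# If the GTX 1080 leads Vega 56 by 10-20 percent and the RTX 2070 leads
# the GTX 1080 by ~8 percent, a Vega 56-class Navi trails the RTX 2070
# by roughly 19-30 percent.
gtx1080_vs_vega56 = (1.10, 1.20)
rtx2070_vs_gtx1080 = 1.08
for gap in gtx1080_vs_vega56:
    print(f"RTX 2070 vs. Vega 56-class Navi: {gap * rtx2070_vs_gtx1080:.2f}x")
```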
While a Navi that merely matches Vega 56 would be disappointing relative to expectations, matching Vega 56's performance while cutting power and price would collectively represent a larger set of improvements than the Radeon VII delivered over Vega 64 earlier this year.
The Radeon VII significantly improved on Vega 64's performance, with only minor changes in power and noise. This hypothetical Navi would cut prices, increase performance, and significantly reduce power (and therefore noise). On balance, that would deliver more total benefits, just not necessarily where people want them most.
If the earlier leak is true, of course, Navi really would be shooting for the stars. A $250 GPU competing with a $500 GPU would blow gaping holes in Nvidia's price structure and product line.
The main reason to think AMD would make a play like this is its own position in the GPU market. AMD may own most of the console space, but it has been virtually driven out of the high-end desktop and laptop markets. Coming out of the gate swinging could put the company back in gamers' minds.
The biggest reason to think it won't? Let's be honest: in the six years since Hawaii launched, AMD's graphics execution hasn't been particularly good. Hawaii was a strong competitor, but its problems weren't solved until third-party cooler designs arrived. Fury X had a pump-whine problem in its water cooler and couldn't quite beat the GTX 980 Ti.
When the Polaris family debuted, it was dogged by problems and concerns about improper power draw through the PCIe connector, thanks to incorrect distribution of the GPU's 12V load. Vega 56 and Vega 64 were available only in limited quantities, at exorbitant prices (the 2017 GPU price boom played its part), and in two different SKUs with two different cooler z-heights. That may sound like inside baseball, but I promise you, the last thing AMD wanted for its niche GPU was to design two niche cooling solutions, only one of which could be used depending on which version of the card you owned. The Radeon VII's commendable performance improvements were overshadowed by its noise and a general lack of new features compared with other cards on the market.
AMD has had some bright spots since then. The Radeon R9 Nano was a well-received niche product, and the HD 7790 cost-effectively improved AMD's low-end performance over the HD 7770. But AMD needs more than a niche product or a single well-received budget card. It needs a new architecture that shows it can compete with Nvidia from top to bottom. Even if it doesn't launch new cards in every segment, it has to demonstrate an architecture that can scale. Any honest review of the past six years shows AMD has struggled to do exactly that. It has earned hundreds of millions of dollars in the console market, and it still has a very strong lock on the budget space where the RX 570 plays, but it has been a long time since AMD introduced a GPU that launched smoothly, with no problems in sight.
That's not to say I think Navi will be a failure. A $250 Vega 56 / GTX 1080 equivalent would represent a significant leap over AMD's current silicon at a fraction of the power. The ray tracing situation is still unknown, because we don't know whether the Navi silicon the PS5 uses in 2020 will be identical to the Navi silicon AMD brings to market in 2019. To date, AMD has been tight-lipped about ray tracing, saying only that it doesn't expect to introduce the feature until it can do so from top to bottom. That could imply it's reserving the capability for a future GPU launch, possibly aligned with the PS5's official debut. There are also rumors of different flavors of Navi with different performance targets, some arriving in 2020. Alternatively, those could belong to Arcturus, Navi's supposed successor.
Gamers calling for AMD to deliver a "Ryzen moment" in gaming performance aren't wrong; the company needs one. Although AMD still matches Nvidia's performance in certain price brackets, it uses nearly twice the power to do it. That's an ugly, unsustainable position. Will Navi be the architecture that changes everything? I hope so, but we've been waiting for a proper follow-up to Hawaii for six years now, and 4chan is not exactly a reliable source of data. Then again, AMD's GPU launches over those six years haven't been so solid, either.
When Google announced its custom Stadia streaming solution, it made clear that it had partnered with AMD to deploy a custom GPU with 10.7 TFLOPS of compute performance and 8GB of HBM2 RAM. There was no mention of which CPU the platform used, but a note in Google's presentation referred to Hyper-Threading Technology (Intel's branding), not SMT (AMD's equivalent term).
PCGamesN confirmed the implication with AMD: Ryzen and Epyc will not power Stadia's CPU side, at least not initially. Of course, Google could still change CPU vendors or introduce new hardware down the line. Google may also have opted for Radeon GPUs, but not Ryzen CPUs, because of broader trends in the gaming market.
Set against Nvidia, AMD's share of the PC gaming market is small; Team Red holds less than 15 percent of it, according to the Steam Hardware Survey. But there's very little difference between a modern PC and a modern gaming console, and AMD's overall market share looks considerably different once you factor in sales of the Xbox One and PS4. If you build your games for multiple platforms, you need to know a little about AMD's GCN. That benefits Google even though Stadia games are developed natively for the platform.
Ryzen doesn't have the same accumulated market share. It's widely accepted that the next-generation Sony and Microsoft consoles will use AMD CPUs, but the Xbox One and PS4 family processors are based on AMD's low-power Jaguar mobile core. Ryzen's overall gaming performance has reached parity with Intel's, but if you want the CPU that has been the absolute gaming standard over the past decade, Intel wins that comparison. Developers are also more likely to be familiar with Intel microarchitectures. None of this means Ryzen is a bad choice for gaming (it isn't), but AMD has a distinct advantage in the GPU space, tied to its overall gaming market share, that it can't exploit on the CPU side.
It's unclear why Google has been reticent about its hardware partners for Stadia's initial launch. The company may not have finalized its partner details, or it may have wanted to avoid a discussion of specs and performance in favor of marketing the idea of Stadia as a service. Some technology companies simply prefer not to disclose hardware specifics; Apple and Microsoft both decline to disclose the model numbers of the processors they ship in their own hardware, as if withholding important information were a benefit. It's also possible Google intends to deploy AMD and Intel hardware as it sees fit and doesn't want to deal with customers who believe performance gaps are somehow tied to whether they landed on an "Intel" or "AMD" instance for their streaming session.
Overall, reactions to Stadia have been mixed. There's a chance it could increase game fidelity by enabling multi-GPU configurations in data center settings, where the heat and noise of stacked cards aren't a problem. That depends, however, on Google's ability to overcome the latency and uneven-performance problems that have hobbled other game streaming services. There are also concerns about what we would give up with Stadia: a game service that essentially locks away all game code would prevent modding and exploration, with serious implications for the preservation of video game history. And even if you firmly believe in streaming, it's not clear that Google will be the company to crack this space open, up against the likes of Sony, Microsoft, and Nvidia, all of whom have more direct experience with the space or with building custom hardware to enable it.