AMD announced its new Radeon GPUs last night at E3. The new Radeon RX 5700 and 5700 XT are positioned as responses to Nvidia's RTX 2060 and 2070, rather than a frontal assault on the RTX 2080. As previously announced, the Radeon VII will remain on the market opposite the RTX 2080.
Navi, however, seems to be a significant step forward for AMD on several fronts. We will have a deep architectural dive in the near future, but for now, let's look at speeds, feeds, and competitive positioning. You can click any slide to open a larger version in a new window.
The Radeon 5700 XT is a 40-CU design with 2,560 stream processors, 9.75 TFLOPS of floating-point performance, and far higher clocks than anything we've seen from AMD before. AMD's new RDNA architecture, which finally replaces GCN, is significantly more efficient than its predecessor, with a projected 1.25x improvement in performance per clock. The TDP on the 5700 XT is 225W, compared with 295W on the Vega 64. Power is delivered via one 8-pin and one 6-pin connector.
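The headline TFLOPS number falls straight out of the stream-processor count and clock. A minimal sketch, assuming the standard 2-FLOPs-per-SP-per-clock (fused multiply-add) convention; the ~1.905 GHz clock here is back-calculated from AMD's 9.75 TFLOPS figure rather than taken from a spec sheet:

```python
# Peak FP32 throughput: 2 FLOPs per stream processor per clock (one FMA).
# The 1.905 GHz clock is an inferred value, not an official AMD spec.

def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    """Peak single-precision throughput in TFLOPS (2 FLOPs/SP/clock)."""
    return stream_processors * 2 * clock_ghz / 1000.0

# Radeon RX 5700 XT: 2,560 stream processors
print(peak_tflops(2560, 1.905))  # ~9.75 TFLOPS, matching AMD's figure
```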
The Radeon 5700 is a 36-CU design with 2,304 stream processors and the same 8GB RAM pool as the 5700 XT. As expected, there is no sign of HBM on these products. The 5700 carries a 180W TDP and the same 1x 8-pin + 1x 6-pin power connector configuration.
According to AMD, its new RDNA architecture is a major improvement over GCN, with substantial gains in performance per clock, raw clock speed, and performance per watt.
The 1.25x performance-per-clock improvement does not include clock speed gains; it multiplies with them. This is a good time to talk about AMD's newly defined clock-rate scheme, so let's dig in.
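The point that per-clock and clock-speed gains compound rather than add can be shown in two lines. The clock gain below is a hypothetical illustration, not an AMD figure:

```python
# IPC and clock gains compound multiplicatively; they don't add.
# clock_gain is a made-up value for illustration only.

ipc_gain = 1.25    # AMD's stated RDNA performance-per-clock improvement
clock_gain = 1.20  # hypothetical clock-speed improvement

combined = ipc_gain * clock_gain
additive = 1 + (ipc_gain - 1) + (clock_gain - 1)

print(combined)  # 1.5x: the correct, multiplicative estimate
print(additive)  # 1.45x: what naive addition would (wrongly) predict
```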
The base clock on these cards is roughly what you'll see if you run a power-virus workload like FurMark. The "Game Clock" is a conservative estimate of the clock you'll see when running current titles over long periods of time. According to AMD, this is not the GPU's median clock rate during gaming; it's actually set a bit lower than expected, to allow for silicon and cooling variation from one system to another. AMD derived its Game Clock values by measuring average GPU clock speeds across 25 different games.
The Boost Clock is an opportunistic clock the GPU will try to hit when possible. Even this value does not represent the maximum potential clock (AMD describes it as "close to the maximum"). Thanks to improvements in the underlying architecture and overall design, GPU clocks are much higher than anything we've seen from AMD before. (We'll discuss this in more detail in the coming days.)
According to AMD, these improvements give RDNA 1.5x better performance in power-limited environments than an equivalent GCN configuration. The shift to 7nm accounts for just over 20 percent of the total gain, with design and power improvements contributing roughly another 15-18 percent. Most of the improvement comes from the GPU core itself. As a teaser for the deep dive: RDNA can issue an instruction every cycle, compared with GCN, which takes at least four cycles per instruction. The net result of these enhancements is a significantly better GPU than the cards it replaces, dramatically improving performance while simultaneously reducing power consumption.
AMD expects the Radeon 5700 XT to deliver roughly 1.14x the performance of Vega 64 while consuming 23 percent less power. The rated TDPs on the 5700 XT and 5700 are still higher than their Nvidia counterparts', but TDP is not a substitute for measured power draw; we'll have to test the hardware to see how the GPUs compare. The improvement in performance per unit of die area is substantial: Vega 64 was a 495mm² part, while Navi is 251mm². The RTX 2060 and 2070, by comparison, are 445mm².
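The perf-per-area gain implied by those figures is easy to work out. A quick sketch using AMD's claimed 1.14x uplift (not a measured result), and note that it mixes a 7nm die against a 14nm one, so it is not a pure architectural comparison:

```python
# Rough performance-per-area comparison from the figures in the text.
# Performance is normalized to Vega 64 = 1.0; the 1.14x is AMD's claim.

vega64_area_mm2 = 495
navi_area_mm2 = 251

vega64_perf = 1.0
navi_perf = 1.14   # AMD's claimed 5700 XT uplift over Vega 64

vega64_perf_per_mm2 = vega64_perf / vega64_area_mm2
navi_perf_per_mm2 = navi_perf / navi_area_mm2

print(round(navi_perf_per_mm2 / vega64_perf_per_mm2, 2))  # ~2.25x better perf/area
```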
These gains should put the 5700 XT slightly ahead of the GTX 1080 and in the neighborhood of the RTX 2070 in overall performance. AMD also told us it has heard reviewers' complaints about reference cooler noise.
The 5700 and 5700 XT not only use blower-style fans that vent heat out of the system, but AMD also promises to cap them at 43 dBA. (It's not clear whether that's an absolute maximum, or the maximum provided you don't manually set the fan to 100 percent.) This should address one of the recurring complaints about AMD's reference cards: that they're often much louder than the competition. New buyers will also receive a redemption code for a three-month subscription to Microsoft's Xbox Game Pass for PC service.
We will have much more to say about Navi and its underlying architecture in the days to come. The GPU seems to represent a significant step toward closing AMD's power-efficiency gap with Nvidia, and appears to hit a higher performance-per-watt target than the Radeon VII.
Both cards are priced aggressively given the performance shown. The Radeon RX 5700 XT will be available for $449, while the RX 5700 is $379. The RX 5700's price is notably higher than current RTX 2060 cards, which sell for as little as $335. The RX 5700 XT is a $450 card against the RTX 2070's $500. A $500 50th Anniversary Edition of the card will also be available.
One of the most interesting releases of the year isn't a modern game at all; it's a new look at a 22-year-old title. Quake II is a game many modern players scarcely know, which only adds to the interest in Nvidia's work refreshing this old title with a massive facelift.
According to Nvidia, the RTX-enabled version of Quake II will run on a GeForce RTX graphics card "or other capable hardware," which probably means this version is an NV exclusive. Still, it's a good way to let people experience ray tracing on older GPUs. Virtually all NV GPUs that support RTX ray tracing (Nvidia has enabled the capability on its Pascal GPU family) should be able to handle Quake II, given the game's absolutely anemic requirements in every other category. We've already seen how games with modest GPU requirements can be enhanced with modern effects, as with the recent mod that added path tracing to Minecraft.
Cylindrical projection mode for a wide-angle field of view on wide screens
Here's Nvidia's explanation of how ray tracing works in Quake II. If you just want to see some gameplay, NV's launch video is also embedded below:
According to Nvidia, at least an RTX 2060 is recommended, but Nvidia has every interest in upselling its own GPU line. We hope the company has made a reasonable effort to ensure the game runs well on Pascal, even if it doesn't emphasize that in its own marketing.
The Quake II chaingun remains my favorite chaingun implementation of any title, and I can't wait to see what NV has done with one of the greatest FPS games ever published. Quake II was id's first attempt to write a game with a real story, and I wouldn't recommend anyone start it expecting a thrilling, nuanced tale of derring-do; it will feel a little primitive by modern gaming standards. But I suspect it will hold up remarkably well, and I can't wait to take it for a test drive.
This version will be limited to the first three levels of the game unless the full version is installed on your system, in which case you'll be able to play the whole thing. Nvidia will also release the source code of its own mod, which should allow compatibility with other QII mods developed over the game's life.
In the months since Nvidia's Turing launch, CEO Jen-Hsun Huang has aggressively positioned the GPU family as a critical success. Sales data (not to mention Nvidia's revenues) tell a murkier story, but the cryptocurrency slump and the drop in data-center shipments make it much harder to understand how Nvidia's shipments changed between Pascal and Turing. Steam Hardware Survey data, however, is available for both periods. We've tracked Turing adoption month by month and compared it against the same period (measured in months since launch) for Pascal.
Until now, none of these comparisons has flattered Turing. While we recognize that the Steam Hardware Survey is not a perfect indicator of GPU market share, it remains the best overall guide to what consumers are buying for themselves. This month, finally, Nvidia has a single data point moving in its favor: RTX 2070 adoption is now faster than GTX 1080 adoption was in 2016, as reported by Steam in both periods. We're far enough from Pascal's launch that GPU availability was strong during the comparison window, and the cryptocurrency market had not yet exploded and pushed GPU prices into the stratosphere.
The slideshow below compares the percentage of Steam users with a given GPU, measured in months after launch. The charts include a 0 percent period to capture the lag between a card's launch and its first appearance in the Steam Hardware Survey. If a GPU launches in May, May counts as the first month. The RTX 2080 and 2070 use an eight-month window to reflect the time since launch, while the RTX 2060 uses a four-month window (it launched in January).
Because Nvidia raised its GPU prices with Turing's launch, Turing GPUs with the same branding (x60, x70, x80) all occupy a more expensive price band than their Pascal predecessors. To account for this, we also compare the cards apples-to-apples on cost.
Here is the complete data table.
As always, take these comparisons with the understanding that they are not (and do not claim to be) absolute metrics. But Nvidia continues to make aggressive claims about Turing's success in the market. From the company's most recent conference call:
Our strategy with RTX was to take the lead and move the world to ray tracing. And at this point, I think it's pretty safe to say that the leadership position we've taken has turned ray tracing into the standard for next-generation games. Almost every gaming platform has to support ray tracing, and some have already announced it; the partnerships we've developed are fantastic. Microsoft's DXR supports ray tracing, Unity supports ray tracing, Epic supports ray tracing, and publishers such as EA have adopted RTX. Film studios have embraced ray tracing as well; Pixar has announced its intention to use RTX to accelerate the rendering of movies.
And Adobe and Autodesk have turned to RTX, which will bring ray tracing to their content-creation tools. And so I think it's fair to say now that ray tracing is the next generation, and it will spread around the world.
It's quite true that various platforms and game engines have announced support for ray tracing, but the idea that the capability has exploded from nowhere to world domination in a few months is misleading at best. At most, we're at the beginning of a gradual technological transition that will see ray tracing adopted as a complement to rasterization. We have no information about what kind of ray tracing the PS5 will actually be capable of. Game engines are adding RT support, but it takes years for engine support to translate into robust game support. Problems like these are why we recommended patience and caution when Turing launched eight months ago.
An ecosystem of announcements is not the same thing as an ecosystem of games you can buy and enjoy right now, and there are not nearly enough RT-compatible titles to qualify as an ecosystem, not by far. By the time there are, Turing will long since have been replaced by something new. And claiming a major victory for pushing ray tracing into corporate and 3D-rendering markets, where it has been used for years, is no great feat. Nvidia and other companies have spent a decade improving ray tracing. It's the availability of real-time ray tracing for games, specifically, that would constitute such a change from the rasterized status quo. Turing adoption has lagged Pascal's almost across the board. Only now, eight full months after launch, are we seeing signs of a Turing SKU outpacing its Pascal equivalent, and so far in just one GPU SKU.
Nvidia's long Turing refresh cycle is now complete. The GTX 1650, the newest member of the family, replaces the GTX 1050 / GTX 1050 Ti.
As expected, Nvidia cut down the TU116 used for the GeForce GTX 1660 and 1660 Ti to create the TU117. The new GPU packs about two-thirds of the ROPs, cores, and memory bandwidth of the GTX 1660. A comparison of the Pascal and Turing family GPUs in this price range is below.
As with other Turing cards, the 1650's label doesn't match its price point; this GPU is more a replacement for the 1050 Ti than the 1050. In the budget category, relatively small absolute price differences add up quickly, as anyone who has ever built to a tight budget can attest. Compared with the 1050 Ti, the GTX 1650 should deliver significantly better performance. The GPU's base clock is 1.15x higher, which should yield an overall performance improvement of at least 1.1x, possibly more. Boost clocks are 1.19x higher, meaning gains could exceed the clock difference even before Turing's core enhancements are taken into account.
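The clock ratios above can be reproduced from the two cards' reference specs. A quick sketch; the MHz figures below are Nvidia's published reference clocks as best we can tell, so treat them as assumptions rather than verified numbers:

```python
# Back-of-the-envelope clock comparison, GTX 1050 Ti vs. GTX 1650.
# Clock figures (MHz) are assumed reference specs, not independently verified.

gtx_1050_ti = {"base": 1290, "boost": 1392}
gtx_1650 = {"base": 1485, "boost": 1665}

base_ratio = gtx_1650["base"] / gtx_1050_ti["base"]
boost_ratio = gtx_1650["boost"] / gtx_1050_ti["boost"]

print(round(base_ratio, 3))   # ~1.151, the 1.15x cited in the text
print(round(boost_ratio, 3))  # ~1.196, the 1.19x cited in the text
```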
However, we can't actually tell you how the GTX 1650 behaves, because Nvidia chose not to sample the GPU to anyone. It's not unusual for Nvidia or AMD to put less emphasis on low-end products, but in this case Nvidia deliberately withheld the GTX 1650 driver to prevent reviewers who obtained cards from third parties from testing the GPU, even with the hardware in hand. That's not how desktop GPU reviews are usually handled. But we have a good idea why Nvidia did it.
On paper, the GTX 1650 should be at least 15 percent faster than the GTX 1050 Ti. We wouldn't blink if actual gains landed in the 1.2x - 1.25x range; 15 percent is the minimum gain we'd expect based on clocks alone.
The problem is that the AMD RX 570 is much more than 1.25x faster than the GTX 1050 Ti. TechSpot ran an exhaustive comparison of the two GPUs and found the RX 570 is, on average, 1.43x faster than the GTX 1050 Ti at 1080p. According to the Steam Hardware Survey, the RX 570 currently has a 0.34 percent market share, against 9.68 percent for the 1050 Ti. That testifies to Nvidia's market power, but not to the competitiveness of its hardware in this price range.
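Chaining those relative-performance figures shows how far ahead the RX 570 would remain even against the new card. A sketch using TechSpot's 1.43x average and this article's estimated 1.15x-1.25x range for the 1650 (both are third-party estimates, not measurements of the 1650 itself):

```python
# If the RX 570 is 1.43x a GTX 1050 Ti, and the GTX 1650 lands at
# 1.15x-1.25x of a 1050 Ti, the RX 570 still leads by roughly 14-24%.

rx570_vs_1050ti = 1.43            # TechSpot's 1080p average
gtx1650_vs_1050ti = (1.15, 1.25)  # this article's estimated range

for gain in gtx1650_vs_1050ti:
    print(round(rx570_vs_1050ti / gain, 2))  # 1.24, then 1.14
```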
The RX 570 will certainly consume much more power than the GTX 1650; we found at the 1660's launch that the RX 590 drew almost twice as much power as the GTX 1660, and the RX 570 could well double the GTX 1650's power draw. But launching a brand-new GPU that loses to your competitor's two-year-old part is not a good look, and that's probably why Nvidia withheld the GTX 1650 driver until today, ensuring that even if hardware sites got hold of a GPU, they wouldn't be able to test it.
Given this positioning, we recommend waiting for reviews before pulling the trigger on a GTX 1650. We expect it to be a stronger card than the 1050 Ti, but we don't want to speculate on its performance relative to the RX 570.
With new information swirling around the PlayStation 5, it goes without saying that we're also hearing hints about the GPU that powers it. There have been a few major categories of leaks around the chip.
Last year, AdoredTV suggested AMD would aggressively bring a set of GPUs to market to undercut Nvidia's Turing on value. Collectively, those leaks looked like this:
Now, somebody posting to 4channel (the SFW variant of 4chan) claims to work at AMD and offers additional information on the "Radeon RX 3080" (the leaker believes AMD will unify the Radeon and Ryzen brands with similar model numbers in each family).
According to this leak, Navi has 1MB of additional L2 cache compared with Polaris (3MB total). The L1 is now 32KB, up from 16KB. The high-end GPU offers 410GB/s of memory bandwidth on a 256-bit bus, against 256GB/s on the current RX 590.
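Those bandwidth figures imply particular memory data rates. A quick sketch of the standard bandwidth formula; the 12.8 Gbps and 8 Gbps data rates are inferred from the leak's numbers, not part of the leak itself:

```python
# Memory bandwidth (GB/s) = (bus width in bits / 8 bytes) * effective data rate.
# The data-rate values are back-calculated from the leaked bandwidth figures.

def bandwidth_gbps(bus_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gbps(256, 12.8))  # 409.6 GB/s, the leak's ~410 GB/s (GDDR6-class)
print(bandwidth_gbps(256, 8.0))   # 256.0 GB/s, matching the RX 590's 8 Gbps GDDR5
```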
None of this conflicts with the earlier leak, but the following information suggests Navi will target Vega 56 / GTX 1080 performance rather than GTX 1080 / RTX 2070 performance, and that's new. It's also bad news if you were hoping AMD would land a devastating blow on Nvidia's RTX family.
Here's the problem: the gap between the Vega 56 and the GTX 1080 is 10 to 20 percent, depending on the games you compare. The two are not positioned equivalently; the Vega 64 was the closer comparison point for the GTX 1080. The RTX 2070, meanwhile, is about 8 percent faster than the GTX 1080. A Navi that matches the Vega 56 is therefore much slower than one that matches the RTX 2070.
While a Navi that merely matched the Vega 56 would disappoint relative to expectations, matching Vega 56 performance while cutting power and price would constitute a collectively larger set of improvements than the Radeon VII delivered over Vega 64 earlier this year.
The Radeon VII significantly improved performance over Vega 64, with minor changes in power and noise. This hypothetical Navi would cut prices, increase performance, and significantly reduce power (and therefore noise). On balance, that's arguably the larger set of gains; it's just not necessarily where people want them most.
If the earlier leak is true, of course, Navi really would be shooting for the stars. A $250 GPU competing with a $500 GPU would blow gaping holes in Nvidia's pricing and product line.
The main reason to think AMD would make a play like this is its own position in the GPU market. AMD may own most of the console space, but it has been virtually driven out of the high-end desktop and laptop markets. Coming out swinging could put it back in the conversation.
The biggest reason to think it won't? Let's be honest: in the six years since Hawaii launched, AMD's GPU execution hasn't been strong. Hawaii was a great competitor, but its problems weren't solved until third-party cooler designs arrived. Fury X had a pump-whine problem in its water cooler and couldn't quite beat the GTX 980 Ti.
When the Polaris family launched, it faced problems and concerns over improper power draw through the PCIe connector due to incorrect 12V power distribution on the GPU. Vega 56 and Vega 64 were available only in limited quantities, at exorbitant prices (they launched into the GPU price boom of 2017), and in two different SKUs with two different cooler z-heights. That may sound like inside baseball, but I promise you the last thing AMD wanted for its niche GPU was to design two niche cooling solutions, only one of which can be used depending on the case you own. The Radeon VII's commendable performance improvements were overshadowed by its noise and a general lack of new features compared with other cards on the market.
AMD has had some bright spots since. The Radeon Nano was a well-received niche product, and the HD 7790 improved AMD's low-end performance over the HD 7770 in a cost-effective way. But AMD needs more than a niche product or a single well-received budget card. It needs a new architecture that shows it can compete with Nvidia from top to bottom. Even if it doesn't launch new cards in every segment, it must demonstrate an architecture that can scale. Any honest review of the past six years shows AMD has struggled to do exactly that. It has earned hundreds of millions of dollars in the console market, and it still holds a strong lock on the budget segment where the RX 570 plays, but it has been a long time since AMD introduced a GPU family that launched without problems or caveats.
That's not to say I think Navi is a failure. A $250 Vega 56 / GTX 1080 equivalent would represent a significant leap over current AMD silicon at a fraction of the power. The ray tracing situation is still unknown, because we don't know whether the Navi silicon AMD brings to market in 2019 is identical to the Navi silicon the PS5 will use in 2020. To date, AMD has been very quiet on ray tracing, except to say it doesn't expect to introduce the feature until it can do so from top to bottom. That could imply it's holding the capability for a future GPU launch, possibly aligned with the PS5's official debut. There are also rumors of different Navi flavors with different performance targets, some arriving in 2020. Alternatively, those could belong to Arcturus, Navi's supposed successor.
Gamers calling on AMD to deliver a "Ryzen moment" in graphics performance weren't wrong; the company needs one. Although AMD still matches Nvidia's performance in certain price bands, it does so at nearly twice the power. That's an ugly, unsustainable position. Will Navi be the architecture that changes everything? I hope so, but we've been waiting six years for a proper follow-up to Hawaii, and 4chan is not exactly a reliable source of data. Then again, AMD's GPU execution over those six years hasn't been so solid either.
When Nvidia launched Turing, it split the GeForce brand into two segments. GTX cards, even those based on Turing-class GPUs, would not include the new specialized hardware that accelerates Nvidia's ray tracing. RTX cards, which support Microsoft DirectX Raytracing with additional hardware capabilities, would.
Last month, the company reversed course: DXR/RTX capabilities would be unlocked on Nvidia GTX GPUs, including last-generation Pascal cards, via a driver update. But Nvidia's early messaging on DXR had emphasized that ray tracing carried a ruinous performance cost, heavy enough that only specialized cards like Turing could handle it in the first place. The new WHQL 425.31 driver enables DXR on GTX cards.
Ray tracing does carry a heavy performance cost, but Pascal test results with the feature enabled suggest at least the 1080 Ti can turn it on from time to time. Nvidia has published its own test results in three games: Metro Exodus, Battlefield V, and Shadow of the Tomb Raider.
At 1920×1080, the GeForce GTX 1080 Ti maintains frame rates above 50fps in two of the three titles when quality is set below "Ultra." In the third game, Metro Exodus, only the 1080 Ti can crack 30fps in this mode.
The relatively strong results for the 1080 and even the 1070, however, suggest GTX gamers should at least be able to sample ray-traced games. Even the GTX 1060 can exceed 30fps in Battlefield V. For independent results, PC Gamer has confirmed its own numbers are broadly in line with Nvidia's.
Note that these results don't include frame times, which typically spiked on Turing GPUs during DXR workloads. As a result, Pascal may not deliver frame pacing as smooth as a Turing GPU's at a similar frame rate. Without frame-time data, we can't draw a conclusion either way.
It's not clear whether Nvidia's decision to open support to GTX cards has implications for AMD's ability to support these features. That two cards support a common codebase doesn't mean their implementations of a feature perform equivalently without careful tuning. All DXR tuning to date was likely done only on Nvidia GPUs, so how AMD cards would fare in these workloads, or with these features, remains unknown.