AMD announced its new Radeon GPUs last night at E3. The new Radeon RX 5700 and 5700 XT are positioned as responses to Nvidia's RTX 2060 and 2070 family rather than a frontal assault on the RTX 2080. As previously announced, the Radeon VII will remain on the market opposite the RTX 2080.
Navi, however, appears to be a significant step forward for AMD on several fronts. We will have a deep architectural dive in the near future, but for now, let's look at the speeds, feeds, and competitive positioning. You can click any slide to open a larger version in a new window.
The Radeon 5700 XT is a 40 compute unit design with 2,560 stream processors, 9.75 TFLOPS of floating point performance, and clock speeds far higher than anything we've seen from AMD before. AMD's new RDNA architecture, which finally replaces GCN, is significantly more efficient than its predecessor, with a claimed 1.25x improvement in performance per clock. The TDP on the 5700 XT is 225W, compared with 295W on the Vega 64. Power is supplied via one 8-pin and one 6-pin connector.
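As a quick sanity check on the quoted specs, peak FP32 throughput for a GPU follows the standard formula of stream processors times two FLOPs per clock (one fused multiply-add) times clock speed. A minimal sketch, using only the figures quoted above:

```python
# Sanity-check the quoted specs: peak FP32 throughput is
# stream processors x 2 FLOPs per clock (one fused multiply-add) x clock.
stream_processors = 2560
peak_tflops = 9.75

# Clock speed implied by the quoted 9.75 TFLOPS figure, in GHz.
implied_clock_ghz = peak_tflops * 1e12 / (stream_processors * 2) / 1e9
print(f"Implied clock: {implied_clock_ghz:.3f} GHz")  # ~1.904 GHz
```

That implied clock of roughly 1.9 GHz is consistent with the boost-clock territory AMD is quoting for these cards.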
The Radeon 5700 is a 36 compute unit design with 2,304 stream processors and the same 8GB RAM pool as the 5700 XT. As expected, there is no sign of HBM on these products. The 5700 carries a 180W TDP and the same 1x 8-pin plus 1x 6-pin power delivery.
According to AMD, its new RDNA architecture is a major improvement over GCN, with substantial gains in performance per clock, raw clock speed, and performance per watt.
The 1.25x performance-per-clock improvement does not include clock speed gains; it is additive with them. This is also a good time to talk about AMD's newly defined clock scheme, so let's dig in.
The base clock on these cards is equivalent to what you would see running a power-virus workload like FurMark. The "Game Clock" is a conservative estimate of the clock you will see when running current titles over long periods of time. According to AMD, this is not the GPU's median clock rate over time during gaming; it is deliberately set a bit lower than expected clocks to allow for variation in silicon and cooling from one system to another. AMD derived its Game Clock values by measuring average GPU clock speeds across 25 different games.
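A minimal sketch of how a figure like that could be derived. This is our illustration, not AMD's actual methodology, and every number in it is a hypothetical placeholder:

```python
# Sketch (our illustration, not AMD's actual method): average the mean
# GPU clock across a suite of games, then shave off a small margin for
# silicon and cooling variation. All numbers are hypothetical.
game_avg_clocks_mhz = [1790, 1810, 1755, 1830, 1760]

mean_clock = sum(game_avg_clocks_mhz) / len(game_avg_clocks_mhz)
rated_game_clock = mean_clock * 0.98   # assumed 2% conservative margin
print(f"Mean across games: {mean_clock:.0f} MHz")
print(f"Rated Game Clock: {rated_game_clock:.0f} MHz")
```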
The Boost Clock is an opportunistic clock the GPU will attempt to hit when possible. Even this value does not represent the maximum potential speed (AMD described it as "close to the maximum"). Thanks to improvements in the underlying architecture and overall design, GPU clocks are much higher than anything we've seen from AMD before. (We will discuss this in more detail in the coming days.)
According to AMD, the combined improvements give RDNA 1.5x better performance than an equivalent GCN configuration in power-limited environments. The shift to 7nm accounts for just over 20 percent of the total gain, with design and power improvements contributing roughly 15 to 18 percent. Most of the improvement comes from the redesigned GPU core itself. As a teaser for the deep dive: RDNA can issue instructions every cycle, whereas GCN required at least four cycles. The net result of these enhancements is a GPU that is significantly better than the cards it replaces, dramatically improving performance while simultaneously reducing power consumption.
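One way to read that breakdown, under the assumption (ours, not AMD's) that the quoted percentages describe shares of the total gain rather than independent multiplicative factors:

```python
# RDNA's claimed 1.5x perf/W gain over GCN, split by contribution.
# Assumption (ours): the quoted percentages are shares of the total
# gain, not independent multiplicative factors.
process_share = 0.20      # "just over 20%" from the 7nm shift
design_share = 0.17       # midpoint of the quoted 15-18% range

# Whatever remains is attributed to the redesigned GPU core.
core_share = 1.0 - process_share - design_share
print(f"GPU-core share of the gain: {core_share:.0%}")  # ~63%
```

Under that reading, roughly two-thirds of the gain comes from the core redesign, which squares with AMD's statement that most of the improvement is architectural.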
AMD expects the Radeon 5700 XT to deliver roughly 1.14x the performance of Vega 64 while drawing 23 percent less power. The rated TDPs on the 5700 XT and 5700 are still higher than their Nvidia counterparts, but TDP is not a substitute for measured power draw; we will have to test the hardware to see how the GPUs actually compare. The performance-per-area improvement is substantial: Vega 64 was a 495mm² part, while Navi is 251mm². The RTX 2060 and 2070, by comparison, are 445mm².
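Those claims can be cross-checked against the perf/W figure quoted earlier. A minimal sketch, using only the numbers in the text:

```python
# Cross-check AMD's claims: 5700 XT ~1.14x Vega 64 performance while
# drawing 23% less power. The implied perf/W gain should line up with
# the ~1.5x RDNA-vs-GCN figure quoted earlier.
relative_perf = 1.14
relative_power = 1.0 - 0.23            # 77% of Vega 64's power

perf_per_watt_gain = relative_perf / relative_power
print(f"Implied perf/W gain: {perf_per_watt_gain:.2f}x")     # ~1.48x

# Performance per die area: Navi is 251 mm^2 vs. Vega 64's 495 mm^2.
perf_per_area_gain = relative_perf * (495 / 251)
print(f"Implied perf/mm^2 gain: {perf_per_area_gain:.2f}x")  # ~2.25x
```

The implied ~1.48x perf/W gain lands right on AMD's ~1.5x claim, so the two figures are at least internally consistent.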
These gains should put the 5700 XT slightly ahead of the GTX 1080 and the RTX 2070 in overall performance. AMD also told us it has heard the complaints about reference cooler noise.
The 5700 and 5700 XT not only use blower-style fans that vent heat out of the system, but AMD also promises to cap them at 43 dBA. (It is unclear whether that is an absolute maximum or the maximum as long as you don't manually set the fan to 100 percent.) This should address one of the recurring complaints about AMD's reference cards: that they are often much louder than the competition. New buyers will also receive a redeemable code for a three-month subscription to the Microsoft Xbox Game Pass for PC service.
We will have much more to say about Navi and its underlying architecture in the days to come. The GPU appears to take a significant step toward closing AMD's power-consumption gap with Nvidia, and appears to hit a better performance-per-watt target than Radeon VII.
How attractive these cards are at their prices will depend on the performance they deliver. The Radeon RX 5700 XT will be available for $449, while the RX 5700 is $379. The RX 5700's price is significantly higher than current RTX 2060 cards, which sell for $335 at the bottom of the market. The RX 5700 XT is a $449 card against the RTX 2070's $500 price point. A $500 50th Anniversary Edition of the card will also be available.
If all you read is the top-line news from shows like Computex, the PC market seems to be doing well. DRAM prices have fallen. AMD's new 7nm and Intel's new 10nm processors are poised to deliver additional gains. But under the hood, semiconductor companies have taken a collective beating.
Global chip sales dropped to $101.2 billion in the first quarter of 2019, down from $116.2 billion in the first quarter of 2018. According to IHS Markit, that is the largest year-over-year decline since the depths of the Great Recession. Samsung, whose sales fell 34 percent from last year, suffered the largest losses, driven by the NAND and DRAM markets; prices for both components have dropped considerably. Memory chips, IHS noted, accounted for most of the fall. Strip out their impact and sales declined just 4.4 percent. But the "catastrophic fall" was also driven by other factors, including weakening demand in major markets.
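For context, the headline figures quoted above work out to a decline of roughly 13 percent year over year:

```python
# Year-over-year decline in global chip sales (IHS Markit figures as
# quoted above, in billions of dollars).
q1_2018 = 116.2
q1_2019 = 101.2

decline_pct = (q1_2018 - q1_2019) / q1_2018 * 100
print(f"Y/Y decline: {decline_pct:.1f}%")  # ~12.9%
```

Set against the 4.4 percent decline with memory excluded, that gap shows just how much of the damage was concentrated in NAND and DRAM.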
It is not difficult to see which companies are exposed to the NAND and DRAM markets by consulting these charts. Intel reclaimed first place at the top of the market thanks to Samsung's massive decline. While everyone's revenue fell, Nvidia was the only other non-memory company to take such a large hit, thanks to an RTX inventory glut, a sharp decline in data center sales, and a tough comparison against a quarter when cryptocurrency sales were still buoying the market. Nvidia's year-over-year comparisons will improve in the second quarter and beyond, once we move past the period in which it benefited from inflated cryptocurrency sales.
The global market for computing chips fell 16.7 percent in the first quarter of 2019. Interestingly, IHS claims Nvidia's sales dropped in part because of direct graphics competition from AMD in the data center. This is the first time we have heard that claim. AMD has not said much about its sales volumes or data center performance, and its overall position in the data center and the AI/ML markets it wants to penetrate has generally been considered weak. We have looked into this topic in response to questions about AMD GPU performance in AI and ML workloads. The truth is, it is hard to find people working with AMD cards. There are many reasons for that, including the relative maturity and compatibility of OpenCL versus CUDA.
At the moment, most major industry players are still forecasting a recovery at the end of the year. We should find out soon enough whether that is true, because leading economic indicators will begin to appear before the official reports do. With worries about Brexit and the US-China trade war potentially raging into Halloween, the semiconductor market's trajectory for the rest of 2019 remains uncertain.
In fact, the semiconductor market faces plenty of uncertainty even at a more granular level. Qualcomm has been declared a massive monopolist, and its entire revenue-generation model could be permanently changed as a result. Nvidia is trying to convince consumers to pay a significant premium for ray tracing support. AMD's new 7nm CPUs and GPUs are expected to bolster its efforts to regain market share. Questions hang over Intel's 10nm efforts and its upcoming Ice Lake processors. Questions also remain about the impact of the US sales ban on Huawei on the broader semiconductor market, and about how China will respond if one of its flagship companies is seriously damaged. How these trends and issues evolve will shape how the rest of 2019 plays out.
One of the most interesting releases of the year is not a modern game at all; it is a new look at a 22-year-old title. Quake II probably isn't a game many modern players even know, which adds to the interest in the work Nvidia has done to give this old game a massive facelift.
According to Nvidia, the RTX-enabled version of Quake II will run on a GeForce RTX graphics card "or other capable hardware," which probably means this version is an NV exclusive. It could, however, be a good way to let people experience ray tracing on older GPUs. Virtually any NV GPU that supports RTX ray tracing (and Nvidia has enabled the capability on its Pascal GPU family) should be able to handle Quake II, given the game's absolutely anemic requirements in every other category. We have already seen how games with modest GPU requirements can be enhanced with such effects, with a recent mod that added path tracing to Minecraft.
Cylindrical projection mode for a wide-angle field of view on wide screens
The following is an explanation of how the ray tracing works in Quake II. If you would rather just see some gameplay footage, Nvidia's launch video is also embedded below:
According to Nvidia, an RTX 2060 or better is recommended, but Nvidia has every incentive to upsell its own GPU line. We hope the company has made a reasonable effort to ensure the game runs well on Pascal, even if it doesn't emphasize that in its own marketing.
The Quake II chaingun remains my favorite chaingun implementation in any title, and I can't wait to see what NV has done with one of the greatest FPS games ever published. Quake II was id's first attempt to write a game with a real story, and I wouldn't recommend going into it expecting a thrilling, nuanced tale of derring-do; it will feel a little primitive by modern game standards. But I suspect it will hold up remarkably well, and I can't wait to take it for a test drive.
This release will be limited to the first three levels of the game unless the full version is installed on your system, in which case you will be able to play the full game. Nvidia will also release the source code for its mod, which should allow compatibility with other Quake II mods developed over the game's lifetime.
While most mobile users are looking for the lightest laptop possible, those with heavy computing requirements for applications such as scientific work, modeling, media processing, and AI want as much performance as possible on the road. With this week's announcement at Computex, Nvidia is giving them access to its Turing architecture GPUs, along with their RT cores for ray tracing and tensor cores for AI.
There are several new Quadro mobile GPUs. Three are based on Turing with RTX: the 5000, 4000, and 3000. Two are Turing without RTX cores: the T2000 and T1000. Whether it is worth shelling out extra money for RTX will depend largely on whether your favorite applications plan to use it for ray tracing or AI inference. The two low-end models, the P620 and P520, are updates to the similarly numbered current Pascal architecture models, the P600 and P500.
The 5000 will support up to 16GB of GDDR6, like the current P5200, which makes it more capable than many desktop GPUs. Those hoping to balance performance and battery life will be happy to know it consumes only 110 watts, compared with 150 watts for the P5200, despite 500 additional CUDA cores, 384 tensor cores, and 48 RT cores. Most other models offer somewhat smaller upgrades than the flagship and have power requirements similar to the current versions.
At the press briefing on the new GPUs, Nvidia presented compelling comparisons against its previous generation and against the Vega 20 in a MacBook Pro. For example, Lightroom's new Enhance Details capability should run about 4x as fast as on a Core i7 system with a Vega 20. Similarly, ray-traced rendering in Maya should be almost 10 times faster. Nvidia also showed how RTX RT cores bring photorealistic architectural renderings to life, filling in realistic shadows and deepening colors in Enscape compared with the same scene rendered without RTX. Using the RT cores should also be more efficient than running similar workloads on the more general-purpose CUDA cores.
Nvidia has worked closely with high-resolution video camera maker RED on real-time mobile workflows for 6K, and even complete workflows for 8K video, using Adobe Premiere Pro 2019 and DaVinci Resolve 16 in addition to RED-specific tools. Although most content today is limited to 4K or 1080p, many videographers shoot in 6K or 8K to give themselves post-processing flexibility for panning or zooming. And distribution of higher-resolution content is becoming more widespread.
Although Nvidia's innovative RTX capabilities have been slow to gain momentum, the company continues to add to the list of applications that will use them. For example, Solidworks will be among the first to use Nvidia's AI-based denoising for ray tracing. So far it is only a demo, but the companies have announced it will ship in Solidworks later this year. Nvidia says the new feature, running on its RT cores, will speed up the generation of noise-free renders by as much as 10x.
We anticipate the first mobile workstations based on the new GPUs will be introduced this week, with announcements continuing through the rest of the quarter from the usual vendors, including HP, MSI, Dell, and Lenovo. We will have to wait for the first workstations to get a real sense of the performance improvement, because with laptops, thermal throttling can quickly become the biggest limit on graphics and compute performance.
In the months following Nvidia's Turing launch, CEO Jen-Hsun Huang has aggressively positioned the GPU family as a critical success. Sales data would tell us how true that is (to say nothing of Nvidia's revenues), but the cryptocurrency sales slump and the drop in data center shipments make it much harder to determine how Nvidia's shipments have shifted between Pascal and Turing. Steam Hardware Survey data, however, is available for both periods. We have tracked month-by-month changes in Turing adoption and compared them against the same period, measured in months since launch, for Pascal.
Until now, none of these comparisons have been good for Turing. While we acknowledge that the Steam Hardware Survey is not a perfect indicator of GPU market share, it remains the best overall guide to what consumers are buying for themselves. This month, finally, Nvidia has a single data point moving in its favor: RTX 2070 adoption is now faster than GTX 1080 adoption was in 2016, as reported by Steam in both periods. We are far enough from Pascal's launch that GPU availability was solid during the comparison window, and the cryptocurrency market had not yet exploded and pushed GPU prices into the stratosphere.
The slideshow below compares the percentage of Steam users with a given GPU, measured in months following its launch. The charts include a 0 percent period to indicate the lag between when a card launched and when it actually appeared in the Steam Hardware Survey. If a GPU launched in May, May counts as the first month. The RTX 2080 and 2070 use an eight-month window to reflect the time elapsed since launch, while the RTX 2060 uses a four-month window (it launched in January).
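The months-since-launch alignment described above can be sketched as follows. The share figures and the `align_by_launch` helper are our hypothetical illustration, not actual Steam data or methodology:

```python
# Sketch of the months-since-launch alignment described above. Share
# figures below are hypothetical placeholders, not real Steam data.

def align_by_launch(shares, launch_month, first_survey_month):
    """Pad with 0% for the months between launch and the card's first
    appearance in the survey; the launch month counts as month 1."""
    lag = first_survey_month - launch_month
    return [0.0] * lag + shares

# Hypothetical: one card first appears in the survey two months after
# launch, the other after one month.
card_a = align_by_launch([0.3, 0.6, 1.0], launch_month=1, first_survey_month=3)
card_b = align_by_launch([0.2, 0.5, 0.9, 1.2], launch_month=1, first_survey_month=2)

print(card_a)  # [0.0, 0.0, 0.3, 0.6, 1.0]
print(card_b)  # [0.0, 0.2, 0.5, 0.9, 1.2]
```

Padding the series this way keeps the comparison honest: a card that took longer to show up in the survey is not artificially credited with a faster ramp.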
Since Nvidia chose to raise its GPU prices with this launch, Turing GPUs with the same branding tier (x60, x70, x80) all sit roughly one price band above their Pascal predecessors. To account for this, we also compare the cards on an apples-to-apples cost basis.
Here is the complete data table.
As always, take these comparisons with the understanding that they are not (and do not claim to be) absolute metrics. But Nvidia continues to make aggressive claims about Turing's success in the market. From the company's most recent conference call:
Our strategy with RTX was to take the leadership position and move the world to ray tracing. And at this point, I think it's fairly safe to say that the leadership position we took has turned into a movement that has made ray tracing the standard for next-generation games. Almost every gaming platform is going to have to support ray tracing, and some have already announced it. The partnerships we have developed are fantastic: Microsoft's DXR supports ray tracing, Unity supports ray tracing, Epic supports ray tracing, and publishers such as EA have adopted RTX. Film studios support ray tracing as well; Pixar has announced its intention to use RTX to accelerate movie rendering.
And Adobe and Autodesk have turned to RTX, which will bring ray tracing to their content-creation tools. And so I think it's now fair to say that ray tracing is the next generation, and it's going to spread around the world.
It is quite true that various platforms and game engines have announced support for ray tracing, but the idea that the capability has gone from nowhere to world domination in a few months is misleading at best. At most, we are at the beginning of a gradual technological transition that will see ray tracing adopted as a complement to rasterization. We have no information about what kind of ray tracing the PS5 will actually be capable of. Game engines are adding RT support, but it takes years for engine support to translate into robust game support. Problems of exactly this kind are why we recommended patience and caution when Turing launched eight months ago.
An ecosystem of announcements is not the same thing as an ecosystem of games you can buy and enjoy right now, and there are not enough RT-capable titles to qualify as an ecosystem, not by a long shot. By the time there are, Turing may long since have been replaced by something newer. And claiming a major victory for pushing ray tracing into enterprise and 3D rendering markets, where it has been used for years, is no great feat; Nvidia and other companies have spent a decade improving ray tracing there. It is the availability of real-time ray tracing for games, specifically, that would constitute such a break from the current rasterized status quo. Turing adoption has trailed Pascal almost across the board. Only now, eight full months after launch, are we seeing signs of a Turing SKU outpacing its Pascal equivalent, and so far in just one GPU SKU.