One of the new features built into DirectX 12 is support for variable rate shading (VRS), also known as coarse pixel shading. The idea behind variable rate shading is simple: in the vast majority of 3D games, the player doesn't pay equal attention to everything on the screen. As far as the GPU is concerned, however, every pixel on the screen is normally shaded at the same rate. VRS makes it possible to spread the shading work that would normally be done for a single pixel across a larger block of pixels; Intel demonstrated the feature during its Architecture Day last year using 2×2 and 4×4 pixel blocks.
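To give a concrete sense of what this looks like from the developer side, here is a minimal, hypothetical sketch (not drawn from Microsoft's post) of how a D3D12 renderer might request 2×2 coarse shading for low-importance geometry and then restore full-rate shading. The function name and surrounding setup are our own invention; the D3D12 calls themselves (CheckFeatureSupport with D3D12_FEATURE_D3D12_OPTIONS6 and RSSetShadingRate on ID3D12GraphicsCommandList5) are the actual VRS entry points.

```cpp
// Minimal sketch (not from the article): asking DirectX 12 to shade part of a
// frame at one invocation per 2x2 pixel block. Assumes `device` and
// `commandList` were created elsewhere and that the SDK/OS is recent enough
// to expose ID3D12GraphicsCommandList5 (Windows 10 1903+).
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

bool DrawBackgroundWithCoarseShading(ID3D12Device* device,
                                     ID3D12GraphicsCommandList* commandList)
{
    // Check whether the GPU/driver exposes VRS at all (Tier 1 or Tier 2).
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))) ||
        options6.VariableShadingRateTier ==
            D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
    {
        return false; // Fall back to ordinary 1x1 per-pixel shading.
    }

    ComPtr<ID3D12GraphicsCommandList5> vrsList;
    if (FAILED(commandList->QueryInterface(IID_PPV_ARGS(&vrsList))))
        return false;

    // Shade low-detail background geometry at one invocation per 2x2 block.
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
    };
    vrsList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);
    // ... issue background draw calls here ...

    // Restore full-rate shading for detail-critical geometry.
    vrsList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    return true;
}
```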
In a blog post explaining the subject, Microsoft writes:
VRS allows developers to selectively reduce the shading rate in areas of the frame where it won't affect visual quality, letting them gain extra performance in their games. This is really exciting, because that extra performance means increased frame rates, and lower-spec hardware will be able to run better games than ever before.
VRS also allows developers to do the opposite: use increased shading only in areas where it matters most, which means even better visual quality in games.
VRS is the latest in a long line of techniques designed to help developers focus GPU power where it's needed most. It's the kind of technique that will become more and more important as Moore's law slows and it gets harder to extract additional performance from GPUs through process node advances alone. 3DMark has recently added a new benchmark to show the impact of VRS.
First, here's a comparison with the feature enabled versus disabled.
VRS disabled. Image provided by UL. Click to enlarge.
VRS enabled. Image provided by UL. Click to enlarge.
There is also a video of the effect in action, which gives you an idea of how it looks in motion.
As far as the performance impact goes, Hot Hardware recently took the feature for a spin on Intel's 10th-generation CPUs with Gen 11 integrated graphics. The performance improvement from enabling the feature was roughly 40 percent.
These gains aren't unique to Intel. HH has tested several Nvidia GPUs as well and seen significant gains on those cards, too. Unfortunately, VRS is currently limited to Nvidia and Intel. AMD does not support the feature and may not be able to enable it on current versions of Navi.
It always takes time for support for features like this to build, so the lack of an option at launch isn't necessarily a critical problem. At the same time, though, features that reduce the GPU cost of various effects tend to be popular with developers. They can help games run on low-power hardware and in form factors they otherwise couldn't support. All of rasterization is basically a bag of tricks for modeling the real world without having to simulate it outright, and choosing where to spend resources to optimize performance is exactly the kind of efficiency improvement developers love. For now, support is limited to a handful of architectures – Turing and Intel's Gen 11 integrated graphics – but that will change over time.
VRS support was added to Wolfenstein 2: The New Colossus via an update and shipped in Wolfenstein: Youngblood. Firaxis has also demonstrated the effect in Civilization VI, which implies that support could come to that title at some point. The new VRS Performance Test is a free update for 3DMark Advanced and Professional Editions if you own those versions, but it is not included in the free Basic Edition.
Note: An earlier version of this story stated that VRS was not currently supported by any game. That claim was based on data from another site that was accurate when written but is no longer accurate following the support update for Wolfenstein 2 and the feature's shipment in Youngblood. ET regrets the error.
The first image in this article is the VRS-enabled screenshot provided by UL. Did you notice? It's fun to check either way.
In February, I wrote about how AMD and Nvidia had collectively launched the least-loved high-end GPU refresh cycle in the history of the gaming industry. With AMD's Navi-based 5700 and 5700 XT on the market and Nvidia's response in the form of the RTX 2060 Super and 2070 Super, it makes sense to revisit that conclusion. How have things changed a little more than six months later?
As it turns out, things have improved quite a bit – if you're buying toward the top of the market. Before getting into the details of what has changed, let me clarify some terms. Historically, GPU price brackets have looked like this:
Budget: $150 or less
Mid-range: $150 – $300
High-end: $300 – $500
Ultra-high-end: $500+
When Nvidia introduced the RTX family, prices rose considerably. Instead of the GTX 1070 at around $370 and the GTX 1080 at $500 – $550, the RTX 2070 was a $500 GPU, the RTX 2080 was $700, and the 2080 Ti effectively sold for $1,100 – $1,200 ($1,000 technically, but nobody ever actually found one at that price, as far as I know).
A publication like ours has two basic options: keep our own price bands and slot the new cards into them, or shift our price bands upward to match the manufacturers'. Take the latter approach, and AMD's Navi graphics cards become "mid-range" cards despite their $350 and $400 price tags. It's also how you end up with articles referring to the iPhone XR as "entry-level" or "budget" at $750, as if Apple hadn't just killed the only pseudo-budget device it offered, the $350 iPhone SE.
Adjusting price brackets to reflect what companies are selling isn't wrong, so long as it matches what customers are actually buying. Nvidia's next quarterly figures should provide further confirmation here, but the available data suggests Turing sales were well behind Pascal's at launch and may not have recovered since. If Nvidia genuinely believed it had established ray tracing as a feature players were willing to pay for, it would not have cut prices on the RTX 2060, 2070, and 2080.
As far as ExtremeTech is concerned, at least for the moment, the Navi 5700 and 5700 XT cards are high-end cards, as are the RTX 2060, 2060 Super, 2070 and 2070 Super. The RTX 2080, 2080 Super and 2080 Ti belong to their own distinct category of ultra-high-end devices.
We recently measured long-term performance evolution across a range of GPUs, but we can put that dataset to a different use here. Keep in mind that in the graphs below, the GeForce RTX 2080 (non-Super) delivers nearly the same performance as the RTX 2070 Super (the 2070S typically lands between 95 and 105 percent of RTX 2080 performance).
Comparing the RTX 2070S / 2080 against the GTX 1080, we find that minimum frame rates are 1.18x higher at 1080p, 1.28x higher at 1440p, and 1.4x higher at 4K. Average frame rates across our full suite of games are 1.3x higher at 1080p, 1.4x higher at 1440p, and 1.44x higher at 4K.
I don't have the same depth of data for the GTX 1070 versus the RTX 2060 Super, but we know the 2060S delivers roughly a 1.15x improvement as well, since it performs almost identically to the original RTX 2070. Its new $400 price puts it closer to the original GTX 1070's price point than to the OG 1080's.
As for AMD, the 5700 and 5700 XT effectively replace Vega 56 and 64. The slideshow below contains the results of our RX 5700 and 5700 XT tests. The Radeon RX 5700 matches the Vega 64 in almost every test, but costs $ 350 instead of $ 500. It consumes 74% of the power of Vega 64 while outperforming the RTX 2060.
As upgrades for current Vega 56 and Vega 64 owners, the best case is the jump from Vega 56 to the RX 5700 XT. The gains in that scenario are real, but I'm fairly sure they aren't as large as the improvements Turing offers over Pascal at Turing's new prices. Vega 56 was typically 1.08x – 1.12x slower than Vega 64, while the 5700 XT's lead over Vega 64 varies significantly depending on the game. In some titles, the two GPUs are effectively tied.
AMD players with older cards, or Nvidia players looking to switch sides, are the most likely buyers for the RX 5700 and RX 5700 XT, and the performance these cards offer makes them a potentially compelling upgrade for those groups.
AMD's new launches have restored a better, more consumer-friendly balance to the upper end of the GPU market. The ultra-high-end remains less friendly. The RTX 2080 Super offers the smallest performance improvement of any of the "Super" cards and doesn't do a very good job of justifying its $200 premium over the RTX 2070 Super. The Radeon VII and the RTX 2080 Super are only justifiable if you play at 4K, and honestly, they aren't all that compelling even then.
AMD has yet to announce plans for the midrange market, but the company needs new cards to refresh that space as well. Hopefully it won't be long before we have more efficient, more powerful chips ready to replace the RX 570, 580, and 590.
As for whether Navi or Turing is the better upgrade path, it depends a bit on what you want: a little more speed (relative to the competition), or features such as ray tracing? Some users may not think even these gains are enough, which I understand. But we can at least say that there are now performance-per-dollar gains over the previous generation. Six months ago, that wasn't true.
One of the most troubling data points of the past few years, according to the Steam Hardware Survey, has been AMD's total inability to gain market share. We have always cautioned readers that the SHS might not be accurate because of problems we observed in the data set, but it was never clear exactly what those problems were. Other analytics firms have reported market share gains for AMD, but the Steam Hardware Survey – the best data set the public has access to – has never shown a real gain for AMD.
A recent Hot Hardware interview with Scott Herkelman clears this point up. I wrote about that interview in a separate story yesterday, but this point is specific enough to deserve its own breakout.
According to Herkelman, the problem with the SHS goes back to the discontinuity of August 2017. The problem was introduced by a bug in the Steam Hardware Survey, but the systemic undercounting of AMD systems is believed to persist to this day.
The Steam survey, according to Herkelman, isn't intended to measure hardware vendors' market share. It's meant to tell developers what kinds of systems are out there in the market. When the Steam Hardware Survey went haywire in 2017, it was because Steam counted each individual login at an iCafe as another instance of that machine's system configuration. Imagine having 10 of your best friends over to play Steam games on your PC, each of them logging into their own account – leading to 10 copies of your system configuration being uploaded to the platform and counted as separate machines.
That's essentially what happened with Steam. And according to AMD, while Valve has made some corrections to its data, it has never been particularly motivated to ensure its numbers track actual market share. AMD, meanwhile, is significantly underrepresented in iCafe gaming.
"They've changed their algorithm a bit, but they really are not motivated to change that," Herkelman said, "because the purpose of their data is not market share. The purpose of their data is to show general trends to game developers … they certainly do not follow our real share …. you can see the same thing really happening in our CPU share. She is still under-represented, it's the same exact curve and everything is related to iCafes. "
To reinforce his argument, Herkelman shared images like this one, showing how AMD processor adoption rates changed dramatically when PUBG took off in China, then changed again after Valve updated its algorithms. In both cases, AMD's measured share came out lower as a result of the update.
The problem, of course, is that Herkelman's comments don't change the fact that the Steam Hardware Survey remains our best source of specific data on the kind of hardware the gaming community actually uses. Even when AMD and Nvidia publish market data, they rarely release information about specific products or price points.
The one thing that makes no sense in all of this is why Valve doesn't care about inaccuracies in its own data set. The purpose of the SHS may not be to present accurate market share data, but it's hardly better to hand developers inaccurate data. If developers think more players own GTX 1060 and 1050 Ti cards than actually do (those are the top two GPUs in the survey, at 16.01 percent and 10.63 percent of the market, respectively), they'll draw the wrong conclusions about which cards to target for future development.
The only conclusion we can draw is that Valve doesn't believe the remaining inaccuracy is large enough to affect what developers actually do. Clearly, AMD felt strongly enough about the issue to publicly call out the problems with using the SHS for market share estimates. The impact this could have on Nvidia's numbers is also unclear – adoption rates for some of those GPUs may be skewed by the same errors.
We will sometimes continue to reference the SHS because there is little practical alternative; it's the only publicly available data set of its kind. But this could explain why AMD's overall processor market share has risen in other reports while remaining relatively static on Steam. If Chinese iCafe installations are growing faster than other segments of gaming, and AMD is underrepresented in that market, the company won't appear to be gaining much share in either CPUs or GPUs. We have mostly used the SHS to compare Turing's generational adoption against Pascal's, which should be less affected. But if AMD's adoption figures are wrong, at least some Nvidia SKU figures will be wrong as well.
It's no secret that high-end GPU prices have fallen recently, thanks to AMD's launch of the RX 5700 and RX 5700 XT. AMD has now stirred the pot a little, claiming it deliberately baited Nvidia into cutting prices, only to pull the rug out from under it and cut prices even further.
The claim comes from Hot Hardware's interview/podcast with Scott Herkelman, AMD's vice president of Radeon. In it, Herkelman explains how carefully AMD planned its move, assessing the RTX cards in terms of Nvidia's clock speeds, die sizes, revenue targets, expected margins, and more. AMD's initial prices for the RX 5700 and 5700 XT were $379 and $449, but after Nvidia unveiled its Super family, the company cut them to $350 and $400. According to AMD, that was always the plan.
Herkelman describes how AMD carefully analyzed the RTX family of GPUs, including their prices, die sizes, and how much margin headroom Nvidia had left. AMD set its initial prices for the RX series expecting Nvidia to undercut them, leaving itself room to respond with price cuts of its own:
The prices we originally posted – we waited to see what they posted, then we made the appropriate move: not only to undercut their Super series, but to block their 2060 and 2070 series as well. Because we knew they were having slower success, and we wanted to do a double jebait – not only block their Super strategy, but also slow down the 2060s and 2070s.
First, let me say that everything AMD has said about analyzing Turing's margin headroom, its pricing, and other market factors seems entirely plausible to me. We perform similar analyses ourselves, and AMD is in a far better position than we are to understand certain aspects of Nvidia's manufacturing situation. Nvidia has maintained very high margins on its GPUs; the company's gross margin last year was roughly 60 percent. When it raised prices with Turing, we argued Nvidia was doing so in part because it faced no real competition from AMD.
Similarly, it makes perfect sense for AMD to bring a part to market that gives it an advantage. Navi's die sizes are much smaller than Turing's. The RX 5700 and 5700 XT are 251mm², while the RTX 2060 and RTX 2070 are 445mm². The RTX 2070S, 2080, and 2080S are even larger at 545mm², though AMD isn't competing directly against the RTX 2070S.
Without information on wafer yields and costs, however, we can't directly compare what Nvidia and AMD likely pay for their finished chips. AMD has a decided advantage in die size, but it is also shouldering a new process node and presumably pays at least somewhat more per wafer. How all of this shakes out in the end is uncertain. But AMD clearly felt it had a usable advantage.
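To put the die-size gap in perspective, here's a back-of-the-envelope sketch using a common dies-per-wafer approximation. It deliberately ignores yield, defect density, and per-wafer pricing – exactly the unknowns noted above – so the outputs are illustrative only, not a claim about either company's actual costs.

```cpp
// Rough, illustrative candidate-dies-per-300mm-wafer estimate using the
// standard approximation (gross area term minus an edge-loss term).
// Die areas are the figures cited above; yield and wafer cost are ignored.
#include <cmath>
#include <cstdio>

int DiesPerWafer(double waferDiameterMm, double dieAreaMm2)
{
    const double pi = 3.14159265358979;
    double grossDies = pi * waferDiameterMm * waferDiameterMm / (4.0 * dieAreaMm2);
    double edgeLoss  = pi * waferDiameterMm / std::sqrt(2.0 * dieAreaMm2);
    return static_cast<int>(grossDies - edgeLoss);
}

int main()
{
    // Navi 10 vs. the RTX 2060/2070 die vs. the RTX 2070S/2080/2080S die.
    std::printf("251 mm^2 die: ~%d candidates per wafer\n", DiesPerWafer(300.0, 251.0)); // ~239
    std::printf("445 mm^2 die: ~%d candidates per wafer\n", DiesPerWafer(300.0, 445.0)); // ~127
    std::printf("545 mm^2 die: ~%d candidates per wafer\n", DiesPerWafer(300.0, 545.0)); // ~101
}
```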
Nevertheless, once Nvidia announced its "Super" family, AMD was always going to have to cut its prices. At $379 and $449, the RX 5700 and 5700 XT would have lined up against the RTX 2060S and RTX 2070S rather than the RTX 2060 and RTX 2060S – a tougher matchup for both cards, at higher prices. It would also have come close to repeating the Radeon VII's debut, in which AMD matched the RTX 2080 on performance and price but not on new features, and did not receive a warm welcome. Yes, Navi does include new features, and we still intend to cover them in more detail, but there's nothing revolutionary enough in the list to have changed that equation.
At their original prices, the 5700 and 5700 XT would have been positioned less favorably against the RTX 2060S and 2070S than they are today. AMD made the decision to position its GPUs more advantageously by cutting prices. Do I think the company planned for this possibility? Absolutely. Do I think AMD would have kept its prices higher if the 5700 and 5700 XT had been faster cards? Yes – I can hardly argue otherwise. I spent six months writing articles about how AMD wasn't going to give its CPUs away if it achieved performance parity with Intel, simply because some fanboys thought that would be a good idea. There's no reason to think the company wants to improve its CPU margins but is happy to sell GPUs for a fraction of what it could charge. Our slideshow, with the results of our RX 5700 XT and 5700 reviews compared against the RTX 2060, 2070, and 2080 (the 2080 standing in for the 2070S), is below:
If Nvidia hadn't cut prices with its Super cards, I very much doubt AMD would have walked back its own, higher prices. That's not to say AMD didn't have a plan in place ahead of time – but it was a fairly predictable, straightforward plan.
The biggest and most important takeaway here is that companies absolutely will raise prices when there's no competition. Nvidia didn't magically find a way to reduce Turing's costs the same month AMD launched new GPUs. It raised prices with Turing in part because there was no competition from AMD to stop it.
As soon as AMD re-entered the market with a competitive part, GPU prices came back down. Had AMD been able to field competitive parts last year, Nvidia might never have been able to raise prices in the first place. If Nvidia hadn't been focused on squeezing gamers like a Juicero bag (and hadn't mistaken crypto-driven demand for new gaming sales), it might not have raised prices at all. Whether AMD's framing of a fairly prosaic price cut as a masterful jebait of Team Green really holds up is debatable, but high-end GPUs are undeniably cheaper today than they were. That's a win for everyone, no matter whose hardware you prefer.
One area where reviews sometimes fall short is in giving readers insight into how component performance evolves over time. While the annual launch of new hardware provides periodic opportunities to revisit how older components stack up, launch coverage focuses on the CPU or GPU being launched, not the previous generation of cards. Title-specific coverage, meanwhile, is usually measured and written at the time a game or product is reviewed. Combine these two trends, and it can be harder than it should be for players to understand how performance has evolved over time and which GPUs hold their value better than others.
With this in mind, we went back through the datasets we collected during our recent reviews of the AMD Radeon RX 5700 and Radeon RX 5700 XT, as well as the Gigabyte Aorus RTX 2080 Xtreme 8GB. We refreshed all of our GPU datasets in late June / early July 2019, which makes this a good time to revisit how Pascal, Turing, and GCN performance has evolved over the past nine months.
We're looking for two trends. First, players have been concerned about the impact of the Meltdown and Spectre patches on game performance. Second, there's a perception in some circles that Nvidia GPUs lose performance faster than their AMD counterparts, either because of how Nvidia GPUs are designed or because the company deliberately hobbles older cards to make newer GPUs look better by comparison.
If I'm honest, I have never believed the more sinister version of that argument. Nvidia and AMD pursued somewhat different optimization strategies in the pre-DX12 era, and it is reasonable to assume that Nvidia focuses its optimization efforts on newer GPUs rather than older ones. That isn't unique to Nvidia, though. Now that AMD has brought RDNA to market, it may also have to decide how to prioritize its time when optimizing across different architectures. There's a difference between saying Nvidia may focus more on optimizing for new cards and saying Nvidia deliberately handicaps old GPUs. Either way, the goal here is to measure how performance has evolved over time across the same set of titles. We'll see where the results lead.
All of our tests were run on an Asus Prime Z370-A motherboard with 32GB of DDR4-3200 and an Intel Core i7-8086K processor. The Nvidia GPUs were tested in September 2018 using the 411.63 Turing launch driver, while the June tests used Nvidia's 430.86 driver. The AMD Radeon RX Vega 64 and Radeon VII used the Adrenalin 19.5.2 driver. A 1TB Samsung 970 EVO was used for storage. The September 2018 tests ran on Windows 10 1803, while the June 2019 tests ran on Windows 10 1903. All Meltdown, Spectre, and related patches were left in their default states.
Although the comparison points are September 2018 and June 2019, the window is obviously shorter in the Radeon VII's case (the Radeon VII didn't launch until February). For that card, June performance is compared against performance at launch.
Two games showed a performance decline on both Radeon and GeForce hardware: Ashes of the Singularity: Escalation and Warhammer II. Both showed declines in all APIs, though AotS: Escalation lost more performance. We theorize this is a result of the Spectre-related protections. No other game saw a performance decline, and the drops in these two titles weren't large enough to change the overall trend across our suite of games.
We measured performance in Ashes of the Singularity: Escalation, Deus Ex: Mankind Divided, Hitman, Metro Last Light Redux, Middle-earth: Shadow of War, Rise of the Tomb Raider, Total War: Warhammer II, Shadow of the Tomb Raider, Assassin's Creed: Origins, and Far Cry 5. The performance shown for each GPU at each resolution reflects the geometric mean of our results. We used a geometric mean rather than an arithmetic mean to account for the fact that minimum frame rates can vary dramatically from game to game – Hitman, for example, regularly returns minimum frame rates between 4 and 12fps for every GPU.
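To illustrate the difference this choice makes, here's a short, self-contained sketch – the frame-rate numbers below are invented for illustration and are not our test data – showing how a single Hitman-style low minimum drags a geometric mean down proportionally while barely denting an arithmetic mean.

```cpp
// Geometric vs. arithmetic mean of per-game minimum frame rates.
// The input values are hypothetical; only the averaging method mirrors
// how our per-GPU figures are computed.
#include <cmath>
#include <cstdio>
#include <vector>

double GeometricMean(const std::vector<double>& values)
{
    // Average the logarithms, then exponentiate: equivalent to the n-th root
    // of the product, but numerically safer for long lists of values.
    double logSum = 0.0;
    for (double v : values)
        logSum += std::log(v);
    return std::exp(logSum / values.size());
}

int main()
{
    // Hypothetical minimum frame rates across a game suite; the 6fps outlier
    // plays the role of a Hitman-style result.
    std::vector<double> minimumFps = { 6.0, 45.0, 52.0, 60.0, 71.0, 88.0 };

    double arithmetic = 0.0;
    for (double v : minimumFps)
        arithmetic += v;
    arithmetic /= minimumFps.size();

    std::printf("Arithmetic mean: %.1f fps\n", arithmetic);                  // ~53.7
    std::printf("Geometric mean:  %.1f fps\n", GeometricMean(minimumFps));   // ~41.7
}
```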
Our September coverage used a standard GeForce RTX 2080, while June 2019 data is based on the Gigabyte Aorus RTX 2080 Xtreme, which has slightly higher clock rates. This may have had a slight impact on performance (1 to 2%), but the difference is not large enough to cause a problem.
The slideshow below contains our results, graphed by resolution and by minimum and average frame rates.
Minimum frame rate improvements at 1080p and 1440p were strongest for the Vega 64, RTX 2080, and GTX 1080 Ti. Average frame rate gains are smaller across the board, but that isn't necessarily surprising. Vega 64 and Radeon VII are both based on GCN, and GCN has been AMD's primary architecture for years – long enough to be well optimized at this point. The RTX 2080 records the most consistent gains across all resolutions, probably due either to its small clock bump or to the fact that Turing is the newest architecture and still had the most performance left on the table. Even the GTX 1080 picks up a few frames at 1080p.
There is nothing here to indicate that Nvidia has taken steps to hamstring Pascal's performance or to make its older 2016 GPUs less attractive relative to newer ones. There's no Pascal performance regression or frame rate problem in any game that doesn't also affect every other GPU (which is why we think the performance declines in Ashes and Warhammer II are CPU-related rather than GPU-related). Vega 64's performance has arguably improved the most, but Vega 64 has also been on the market for less time than the Pascal family. Even with those improvements, the GTX 1080 only matches or falls slightly behind across all resolutions, in both minimum and average frame rates. The relative positioning of these AMD and Nvidia cards has changed very little.
The upshot of these results is good news no matter which GPU you own. Both AMD's Vega cards and Nvidia's Pascal cards continue to perform as expected, while Turing's slightly larger improvements match what we'd expect from a comparatively newer architecture. Obviously, the details of how a GPU ages are specific to each architecture, and since this comparison doesn't include older Maxwell- or Kepler-era cards, we can't speak to how those GPUs have fared. But Pascal, Nvidia's previous-generation architecture, appears to be aging well, and AMD has improved Vega's performance as well.
Whenever a major manufacturer makes a node transition, it has to choose which foundry partner to work with. These days, that comes down to one of two (possibly three) choices: TSMC, Samsung, and potentially Intel, which still notionally operates a foundry business but hasn't made any significant customer announcements in some time. For the most part, it's a two-horse race between TSMC and Samsung.
TSMC has done well at 7nm, locking up most of the big players and early customers. The two companies pursued different strategies at 7nm: Samsung chose to hold its node back until its extreme ultraviolet (EUV) lithography was ready, while TSMC introduced 7nm as a conventional-lithography node and plans to move EUV into mass manufacturing next year.
Samsung has been a little light on customers, though we know IBM will build chips with the foundry over the long term. At a press conference in Seoul this week, Nvidia Korea chief Yoo Eung-joon confirmed that the two companies are collaborating on future GPU designs.
"It is significant that Samsung Electronics' 7-nanometer process is being used in the manufacture of our next-generation graphics processor," Yoo said. "Until recently, Samsung was working very hard to find partners for cooperation in the foundry."
The exact terms of the agreement haven't been revealed, nor have the specific parts that will be built at Samsung, but it's possible Nvidia will go all-in with its new partner. Designs for one foundry can't be easily or simply ported to another, which means building the same chips at two leading-edge foundries effectively doubles the design work. Nvidia could choose to use both companies, but relying on Samsung alone would simplify some aspects of the design process. Samsung has presumably priced its 7nm capacity very competitively in hopes of winning the business.
As for when we could see 7nm Nvidia chips, nobody knows. AMD will launch its 7nm Navi chips this Sunday, and Nvidia has already responded by adjusting prices on its existing RTX cards. But we also know Navi will make the jump to "Big Navi" next year, and that's the point at which Nvidia will probably want a 7nm answer of its own. If AMD's 7nm push proves effective, Nvidia won't want to meet it with an aging Turing family. If AMD continues to lag in overall competitiveness, Nvidia can afford to take its time with its next launch.
Either way, we don't think Turing will hold the market for as long as Pascal did. Pascal holds the record for the longest-serving GPU architecture in the consumer space, having dominated from May 2016 to September 2018. We have no firm idea when Nvidia's 7nm parts will launch, but 12 to 14 months from now is a pretty reasonable bet.