There is a strange rumor floating around that AMD has killed, or intends to kill, its reference RX 5700 and RX 5700 XT GPU designs now that AIB custom cards are on the market. It started with the French site Cowcotland, which ran a story titled:
The title translates to a claim that AMD's reference 5700 and 5700 XT GPUs reached EOL status just five weeks after launch. This is not true. According to AMD, the goal here is not to compete with its AIB partners. "We expect availability of the Radeon RX 5700 Series graphics cards to remain strong in the market, including the many custom designs starting to arrive from our AIB partners," the company said. "Consistent with standard practice, once the inventory of AMD reference cards has sold through, AMD will continue to support new partner designs with the Radeon RX 5700 Series reference design kit."
AMD provides reference designs to AIBs that want to bring cards to market quickly without designing their own coolers or PCBs. The first boards available are usually based on these reference products. The gap between reference-card availability and AIB shipments can be quite short or stretch to a few weeks. Some fans are unhappy that five weeks have passed without AIB designs arriving, though the same thing has happened with Nvidia launches. AMD is not destroying its reference cards, and they will continue to be manufactured in the future.
The enthusiast community is not particularly happy about the wait for custom-cooled cards, or about the fact that the 5700 and 5700 XT are louder than the equivalent Nvidia GPUs. The hope is that dual- and triple-fan axial coolers will deliver better acoustics than AMD's blower-style reference designs. That is usually a very good bet.
After testing the 5700, 5700 XT, Vega 64, and Radeon VII, along with an associated mix of RTX 2060, 2070, 2080, and 2080 Ti parts (both Nvidia-built and not), I will honestly say the blower-versus-open-air-cooler debate can be a bit overblown. Thermally, there is an obvious difference between the two solutions (blowers exhaust hot air out of the case, while open-air coolers simply move it around inside the chassis). What that difference means for your system depends heavily on its configuration. Open-air coolers can offer better performance in spacious cases with good airflow, while blowers deliver more consistent results. The relative noise of the two solutions depends on the specific cooler design. A blower may be louder than an open-air cooler, or vice versa. The 5700 XT (a blower) is much quieter than the Vega 64 (another blower). The Vega 64 and the Radeon VII (an open-air design) have very similar noise profiles.
One interesting aspect of the Navi reviews, however, is the degree to which noise measurements diverge between review sites. Anandtech, for example, reports that the 5700 XT is a 54 dB(A) solution, compared with 61 dB(A) for the Radeon Vega 64.
That 54 / 61 dB(A) split corresponds most closely to my own subjective experience with the Radeon Vega 64, Radeon VII, 5700 XT, and associated Nvidia GPUs. The reason I say this is that, to my ear, the 5700 XT is much better than the Vega 64 or Radeon VII, both of which recall the bad old days of loud, power-hungry GPUs like the R9 290X.
Other reviewers, however, report very different results:
Guru3D reports that the Vega 64 and Radeon 5700 XT are identical in dB(A) terms and that the Radeon VII is much louder. Since the distance to the target obviously affects noise measurements, the fact that Anandtech and Guru3D record different absolute levels does not concern me. What is more interesting is that one set of results shows the Vega 64 and 5700 XT as comparable, while the other does not.
TechPowerUp shows a third pattern, with the 5700 XT and 5700 scoring identically and the Radeon VII measuring quieter than the Vega 64. Three well-regarded technical review sites, three different sets of results. From my own subjective experience, the one that "feels" most correct is Anandtech's, but a number of factors will affect noise measurements, particularly relative background noise levels, open-bench versus closed-case testing, distance from the target, and the equipment used to perform the test. Individual GPU-to-GPU variation may also be at work here.
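To put the distance factor in perspective: for an idealized point source in a free field, sound pressure level falls off with distance per the inverse-square law, about 6 dB per doubling of distance. A GPU cooler inside a case is not an ideal point source and test environments vary, so this is only a rough first-order sketch (the function and figures are illustrative, not any site's actual methodology):

```python
import math

def spl_at_distance(spl_ref_db, d_ref_m, d_m):
    """Estimate sound pressure level (dB) at distance d_m, given a
    reference reading spl_ref_db taken at d_ref_m from a point source."""
    # Inverse-square law: level drops by 20*log10(d / d_ref) dB.
    return spl_ref_db - 20.0 * math.log10(d_m / d_ref_m)

# A 54 dB(A) reading taken at 0.5 m corresponds to roughly 48 dB(A) at 1 m,
# which alone could explain large absolute gaps between review sites.
print(round(spl_at_distance(54.0, 0.5, 1.0), 1))  # → 48.0
```

Microphone distance alone can therefore swing absolute readings by more than the gaps between the cards being compared, which is why relative rankings matter more than absolute dB figures.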
In my opinion, the 5700 and 5700 XT land firmly on the "quiet" side of the "Is this GPU quiet enough to be used or not?" question. They are not as quiet as the RTX 2060 or 2070 cards we tested for the same review, but they are vastly quieter than the Radeon VII or the Vega 64. I have been known to wear earplugs when testing those two cards on an open bench to avoid hearing damage, though the fact that I already have hearing damage in my left ear makes me paranoid about harming it further. I used a Vega 64 in my own system and did not like how loud it was for gaming without headphones. The Radeon 5700 XT does not cause the same problem.
Radeon AIB cards have often been quieter than reference models, and that is likely to remain the case. Whether these cards offer reasonable value for money is something we will check when they arrive on the market in larger quantities. Reference card designs will continue to exist alongside these new cards.
Nvidia has launched a trio of new GPUs in its high-end product families, replacing or augmenting the existing RTX 2060, 2070, and 2080 with a new family of "Super" cards: the RTX 2060 Super, the RTX 2070 Super, and the (forthcoming) RTX 2080 Super. These new cards are still built on the same 12nm process and offer the same features as their predecessors. What they offer is significantly improved performance at the same MSRP as the cards they replace. The RTX 2060S and RTX 2070S are what Nvidia should have launched last year. The RTX 2080S will not actually be tested until July 23.
This chart shows how the improvements break down across the cards. The RTX 2060 Super is, for all intents and purposes, an RTX 2070. According to reviews, the RTX 2070S delivers about 96% of the performance of an RTX 2080. These new GPUs carry the same prices as their predecessors, which means Nvidia has effectively walked back part of the price increase it inflicted on the market when it launched the Turing family.
The straight-line performance increase for the RTX 2060S and 2070S over the original RTX 2060 and 2070 appears to be between 1.1x and 1.2x, with the average in both cases around 1.15x. These GPUs are typically 15% faster than their predecessors at the same price. The market had repaid Nvidia's earlier price hikes with low adoption rates. As we have previously explained, Turing's adoption rate was significantly slower than Pascal's at every price and product level we could reasonably compare, practically across the board. This remains true today. Nine months after launch, the most-adopted RTX GPU is the RTX 2070, with 1.1% of the market.
In February 2017 (the equivalent comparison point), the GTX 1060 held 4.08% of the market, followed by the GTX 1070 (3%) and the GTX 1080 (1.41%). The GTX 1050 Ti held another 1.04% and the GTX 1050, 0.52%. The top five Pascal cards accounted for 10.05% of the market.
Compare that with Turing today and we see very different numbers. The RTX 2070 is the most-adopted card, at 1.1%, followed by the RTX 2060 (0.85%), RTX 2080 (0.75%), RTX 2080 Ti (0.42%), and GTX 1660 Ti (0.4%). Turing's top five cards collectively hold 3.43% of the market.
According to Chris Stobing at PCMag, the RTX 2070 Super "wins our Editors' Choice award as the $499 high-end card that the GeForce RTX 2080 should have been; its stellar performance at this price makes up for the wait, and does so without equal."
Some design changes have been made to account for the increased performance. The RTX 2070S now features an 8-pin + 6-pin power connection rather than a single 8-pin connector, and TDPs have risen in some cases to cope with higher power consumption. The RTX 2070S uses the same TU104 GPU as the RTX 2080 / 2080S, while the RTX 2060S is based on a more fully enabled version of the same TU106 core the RTX 2060 used.
Rise of the Tomb Raider shows the RTX 2070 and RTX 2070S more or less tied at 1080p, but the faster GPU pulls away at higher resolutions. The gains here are large enough to represent a reset of Nvidia's overall product stack. The RTX 2070S is effectively an RTX 2080 priced at $500 instead of $700. The RTX 2060S is an RTX 2070 at $400 instead of $500. When the RTX 2080S arrives, it will likely improve on the RTX 2080 by 8 to 10 percent, alongside its improved overall specifications.
This is what Nvidia should have launched last September. The effective price cut is large enough to change the overall value proposition of the cards. We will examine the question of ray tracing support and the overall value of these cards relative to AMD's upcoming GPUs in the coming weeks, but one thing is clear: AMD should probably adjust its own launch.
I know people are reluctant to trust manufacturer benchmarks, but in this case we would expect AMD's projections to be optimistic, and we can assume the company put its best foot forward. Excluding the 22% and 15% outliers, AMD projected that the 5700 XT would be 2.6% faster than the RTX 2070 while costing $50 less, at $449. Even if you include the outliers, the 5700 XT is only 5.8% ahead of the RTX 2070. The RTX 2070S, meanwhile, appears to offer a steady improvement of about 1.15x over the RTX 2070.
That straight-line comparison suggests the 2070S will be a tough matchup for the 5700 XT. The RX 5700 is in a slightly different position.
The RX 5700 averaged 8.8% faster than the RTX 2060 if we throw out the 22% outliers, and 10% faster if we do not. The fact that Nvidia's GPU also carries a price premium makes this a friendlier comparison point for AMD. With the 5700 at $379 and the 2060S at $400, the roughly 1.15x performance gain Nvidia is claiming for that card still leaves AMD a little more room to maneuver.
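The with-and-without-outliers averaging AMD used in these footnotes is straightforward to reproduce. A minimal sketch; the per-game delta values below are illustrative placeholders, not AMD's actual data:

```python
def mean_delta(deltas_pct, exclude=()):
    """Average a list of per-game performance deltas (in percent),
    optionally dropping specified outlier values first."""
    kept = [d for d in deltas_pct if d not in exclude]
    return sum(kept) / len(kept)

# Hypothetical deltas: a single 22% outlier pulls the average up noticeably.
deltas = [4.0, 6.0, 8.0, 10.0, 22.0]
print(round(mean_delta(deltas), 1))                   # → 10.0 (outlier kept)
print(round(mean_delta(deltas, exclude=(22.0,)), 1))  # → 7.0 (outlier dropped)
```

This is why quoting both figures matters: one unusually AMD-friendly title can shift the headline average by several points.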
AMD may or may not respond with price adjustments before launch, but the 2070S looks like stiffened competition, and these new Turing cards are a significant improvement over the previous models. All of this sets up a very interesting comparison in a few days.
We covered AMD's Ryzen and Navi announcements at E3 throughout the week, but we still have one aspect of the situation left to discuss. We covered Navi and its RDNA architecture, but we have not discussed the software improvements AMD plans to offer with its upcoming GPUs. Some of these gains will also be available on GCN cards.
Let's talk about some features and improvements.
First, there are the quality-of-life improvements in AMD's Radeon software. With Navi, the driver will automatically switch your TV into its low-latency game mode, if the display supports one. You will be able to save your settings to separate files and re-import them if you need to reinstall the driver from scratch or reinstall your entire operating system. Improvements have also been made to how WattMan reports its results.
The AMD Link streaming application now supports streaming to TVs, including Apple TV and Android TV devices. Wireless VR streaming is now supported as well. These improvements are not tied to any specific GPU.
Radeon Chill is AMD's technology for reducing GPU power consumption during games. The software can now set frame rate limits on 60Hz displays, reducing the number of frames rendered when you are not actively controlling your character because you are AFK.
AMD's footnote on Radeon Chill is worth reading. Under the right circumstances, it can significantly reduce GPU power consumption, although this comes at the cost of frame rate, and the size of the gain varies from title to title. Any GPU that could previously use Radeon Chill can take advantage of these improvements.
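AMD has not published Chill's internal algorithm, but the behavior described above, dropping the frame cap once input goes quiet, can be sketched conceptually. Everything below (function name, cap values, idle timeout) is an illustrative assumption, not AMD's implementation:

```python
import time

def chill_frame_cap(last_input_time, now, active_cap=144, idle_cap=60,
                    idle_after_s=2.0):
    """Conceptual Radeon-Chill-style limiter: render at the full frame cap
    while the player is providing input, then fall back to a lower cap
    once no input has arrived for idle_after_s seconds (e.g. AFK)."""
    idle = (now - last_input_time) >= idle_after_s
    return idle_cap if idle else active_cap

now = time.monotonic()
print(chill_frame_cap(now - 0.1, now))  # recent input → 144
print(chill_frame_cap(now - 5.0, now))  # idle / AFK   → 60
```

Rendering fewer frames while the scene is effectively static is where the power savings come from, which is also why the gain varies so much from title to title.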
Next, there is Radeon Anti-Lag. According to AMD, the company has devised a method of reducing the time between when you press a button in a game and when you see the result on screen. To do this, some CPU work is delayed so that it completes just in time for the GPU to consume it, rather than finishing well in advance.
Honestly, I cannot say I observed a difference between running with Radeon Anti-Lag enabled and disabled. AMD demonstrated the effect working with custom-built latency monitors attached to displays, and I am inclined to believe the company; the monitor I tested on may simply have had slightly higher latency. I am also at an age when motor reflexes have already begun to decline, and if I am honest, I was never a very good twitch gamer to begin with.
In the best case, this feature shaves a few milliseconds off your total input latency. If you are good enough to compete in esports, that could be worth something. It is not something I feel qualified to comment on.
Anti-Lag is supported in DX11 on all AMD GPUs. Support for DX9 games is a Navi-only feature. DX12 games are not currently supported because of that API's very different implementation requirements.
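The driver's actual mechanism is not public, but the idea AMD describes, starting CPU-side frame work later so it finishes just before the GPU needs it (and therefore samples fresher input), can be sketched. All names and numbers here are illustrative assumptions, not AMD's code:

```python
def cpu_start_delay(gpu_frame_ms, cpu_frame_ms, margin_ms=0.5):
    """Conceptual Anti-Lag-style scheduling: rather than letting the CPU
    run ahead and queue frames, delay the start of CPU work so that it
    completes just before the GPU is ready for it."""
    # Only delay when the GPU is the bottleneck; never delay into a stall.
    return max(0.0, gpu_frame_ms - cpu_frame_ms - margin_ms)

# GPU-bound: GPU needs 16 ms/frame, CPU only 6 ms, so CPU work can start
# ~9.5 ms later and sample input that much closer to display time.
print(cpu_start_delay(16.0, 6.0))   # → 9.5
# CPU-bound: no headroom, so no delay is inserted.
print(cpu_start_delay(10.0, 12.0))  # → 0.0
```

Note that the benefit only exists in GPU-bound scenarios; when the CPU is the bottleneck there is no queued frame to eliminate, which is consistent with the modest gains I saw in practice.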
Radeon Image Sharpening is a feature that combines contrast-adaptive sharpening with GPU upscaling techniques to improve image quality without paying the penalty of rendering natively at 4K. The following slides compare RIS enabled and disabled.
RIS is disabled in the slide above.
RIS is enabled in this slide. The effect is quite subtle. You may want to open the two images above in separate tabs, zoom in carefully, and then compare the final product. Although there is a clear improvement in image quality (IQ) in the "ON" picture, it is a small one.
Nevertheless, small IQ improvements are generally welcome. RIS was designed by Timothy Lottes, who created FXAA while at Nvidia. Using the feature should have essentially no performance cost (the impact is estimated at 1% or less). RIS is a Navi-only feature and is only supported in DX12 and DX9.
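AMD's actual sharpening runs as a GPU shader over 2-D pixel neighborhoods; the toy 1-D version below only illustrates the core "contrast-adaptive" idea, sharpening less where local contrast is already high so edges are enhanced without halos. It is a conceptual sketch under that assumption, not AMD's algorithm:

```python
def cas_1d(pixels, max_amount=0.4):
    """Toy 1-D contrast-adaptive sharpen: boost local detail, scaling the
    boost down where the local min/max spread (contrast) is already large.
    Expects values normalized to the 0..1 range."""
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        lo = min(pixels[i - 1], pixels[i], pixels[i + 1])
        hi = max(pixels[i - 1], pixels[i], pixels[i + 1])
        amount = max_amount * (1.0 - (hi - lo))  # back off on high contrast
        detail = pixels[i] - (pixels[i - 1] + pixels[i + 1]) / 2.0
        out[i] = min(1.0, max(0.0, pixels[i] + amount * detail))
    return out

# A soft edge gets slightly steeper; flat regions are left untouched.
print([round(v, 3) for v in cas_1d([0.2, 0.2, 0.4, 0.6, 0.6])])
# → [0.2, 0.168, 0.4, 0.632, 0.6]
```

The adaptivity is the key design choice: a fixed-strength unsharp mask would over-sharpen already-crisp edges, which is exactly the artifact this approach is meant to avoid.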
Finally, there is FidelityFX.
FidelityFX is AMD's new addition to GPUOpen, and it is available to any developer who wishes to take advantage of it. Its contrast-adaptive sharpening can be used on any GPU if developers adopt it.
Finally, here are some additional hardware details on Navi that were not in the previous articles but probably should have been (blame a frenetic briefing schedule and some scrambled note-taking):
AMD plans to keep GCN GPUs on the market to handle HPC workloads. The AMD engineer we spoke with compared GCN to an extremely effective sword, well balanced but relatively tiring to use, while RDNA is more of a lightsaber, focused on elegance and economy of motion. GPUs such as the MI50 and MI60 also offer far greater memory bandwidth and larger memory pools than any of the Navi cards coming to market.
RDNA should eventually replace GCN in this space and correct some of the slow-path anomalies GCN suffers from. Irregular performance with certain texture formats has been fixed, for example, and RDNA has larger caches to prevent pipeline bubbles. Overall performance should be more predictable with RDNA-derived GPUs than with GCN.
None of this is major news, but I thought I would include it for the sake of completeness. This concludes our E3 coverage.