The next generation of consoles is less than 18 months away, and Microsoft is starting to share a little more about its priorities for the next generation of Xbox consoles. Playability, load times, and backward compatibility for controllers and software are Redmond's top priorities for the launch of Xbox Next.
"I think the area we really want to focus on next generation is the frame rate and game playability," Spencer said. said Gamespot:
Making sure the games load incredibly fast and that games run at the best possible frame rate. We're also the Windows company, so we see the work that's done (on the PC side) by developers. People love 60-frames-per-second games. So making sure that game design works at 4K 60 (fps) is critical.
What's interesting is that this generation, we really focused on 4K visuals, and on how we bring in movies through 4K Blu-ray and video streaming. With Xbox One X, letting games run with 4K visuals delivered really important visual improvements; next generation, playability is probably the bigger focus for us. How fast do games load? Do I feel like I can get into the game as fast as possible? And while it's playing, how does it feel? Does this game look and feel as good as any other game I've seen? This is our goal."
That's more or less what ET predicted earlier this year. 60fps is a much more realistic target for Xbox Next than the 240fps rumors circulating. Despite various vague claims that Xbox Next will support 8K, Spencer makes no mention of it as a resolution target for games. There is no chance the 2020 console will have a GPU powerful enough to drive that resolution, and we're happy to see the company focusing on other aspects of the gaming experience.
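A rough pixel-throughput comparison (our arithmetic, not Microsoft's) shows why 8K is off the table:

\[
3840 \times 2160 \times 60 \approx 4.98 \times 10^{8}\ \text{pixels/s}
\qquad
7680 \times 4320 \times 60 \approx 1.99 \times 10^{9}\ \text{pixels/s}
\]

8K60 requires shading four times as many pixels per second as 4K60, before any per-pixel quality improvements. That kind of headroom simply won't exist in a 2020 console GPU.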
According to Microsoft, backward compatibility is a key pillar of the Xbox's evolution. Xbox One, Xbox 360, and OG Xbox games will all continue to be supported on Xbox Next, Spencer told GameSpot. The company promised that this backward compatibility commitment also applies to controllers, stating: "So, really, the products that you've bought from us, whether it's games or the controllers that you're using, we want to make sure those are future compatible with the highest-fidelity version of our console, which at that point will obviously be the one we just launched."
Historically, there have been a handful of games that specifically targeted 60 frames per second on consoles, but it was an unusual frame-rate target. The Xbox One X and PS4 Pro expanded the list of titles offering that rate by encouraging developers to release updates for new and existing games, adding new resolution options or allowing play at higher frame rates than the base title supported. Actually moving the console industry to a 60fps target would be quite a feat.
There is reason to think the two console manufacturers could pull it off. The Xbox Next and PlayStation 5 will both hit higher levels of performance than the existing Xbox One X and PS4 Pro. The use of Ryzen CPUs and an RDNA-derived graphics processor in both platforms guarantees a major step up in console performance, but the perceived visual-quality improvement from one console generation to the next has shrunk each cycle. Rather than simply chasing new levels of detail, Spencer wants developers to focus on consistency and load times, two areas where major generational gains can be delivered, including through the adoption of SSDs.
A major question is how the 1080p/4K split will be handled. Spencer refers to a 4K/60fps target, but 1080p still represents a high percentage of TVs sold, and the installed base for the older standard is huge. The easiest way for Microsoft to handle a 1080p output limit is to render internally at 4K and then output at 1080p. This effectively applies supersampled AA to the entire image and dramatically improves image quality compared with standard 1080p rendering. With the PS4 Pro and Xbox One X, Microsoft and Sony gave developers many ways to leverage the added power of the new consoles to enhance the base experience, and we expect a similar approach here. One of the benefits of pairing a powerful GPU with a low-resolution display is that you can enable secondary features like AA without worrying about the performance impact. We hope Microsoft brings some of this flexibility to its Xbox Next design.
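For readers curious what that downsampling step amounts to, here's a minimal sketch in Python/NumPy. It assumes a simple 2x2 box filter; an actual console scaler would run on the GPU and might use a better filter.

```python
import numpy as np

def downsample_4k_to_1080p(frame_4k: np.ndarray) -> np.ndarray:
    """Box-filter a 3840x2160 frame down to 1920x1080.

    Averaging each 2x2 block of source pixels is what makes
    4K-internal rendering behave like 4x ordered-grid
    supersampled AA on a 1080p display.
    """
    h, w, c = frame_4k.shape  # expects (2160, 3840, 3)
    blocks = frame_4k.reshape(h // 2, 2, w // 2, 2, c)
    return blocks.mean(axis=(1, 3)).astype(frame_4k.dtype)

# A stand-in "rendered" 4K frame becomes a 1080p output frame.
frame = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
print(downsample_4k_to_1080p(frame).shape)  # (1080, 1920, 3)
```

Each output pixel averages four rendered samples, which is why the result looks dramatically cleaner than native 1080p rendering.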
The PC gamer in me can't help but notice that the already blurry line between consoles and PCs will get even thinner next cycle. Consoles have offered backward compatibility before, but it often came with qualifiers tied to your hardware revision and was limited to a single previous platform. Microsoft will not only support Xbox One games on Xbox Next; it will continue supporting Xbox 360 and OG Xbox titles, as well as Xbox One peripherals. This is exactly the kind of backward compatibility we've come to expect when moving from one PC to the next, and it's nice to see consoles catching up after a few decades.
The flip side, of course, is that the console-versus-PC debate gets blurrier every generation. At this point, you might as well just ask "controller or keyboard?" (Keyboard, natch.) Functionally, at the hardware level, it's all PC gaming.
There's a strange rumor going around that AMD has killed, or intends to kill, its reference RX 5700 and RX 5700 XT GPU designs. Eleven custom AIB cards are on the market. The rumor started with the French site Cowcotland, which ran this headline:
The translated headline claims that AMD's reference GPUs for the 5700 and 5700 XT reached EOL status just five weeks after launch. This is not true. According to AMD, the reference cards were never intended to compete with AIB partner designs. "We expect the Radeon RX 5700 series graphics card offering to remain strong in the market, with many custom models now starting to arrive from our AIB partners," AMD said. "In line with usual practice, once the inventory of AMD reference cards has sold through, AMD will continue to support new partner designs with the Radeon RX 5700 series reference design kit."
AMD provides reference designs for AIBs that want to get cards to market quickly without designing their own coolers or boards. The first wave of cards is usually based on these reference products. The gap between reference card availability and AIB shipments can be quite short, or it can run to a few weeks. Some enthusiasts are unhappy that AIB designs still aren't available five weeks after launch, though the same thing has happened with Nvidia launches. AMD is not killing its reference cards, and the reference design will remain available to partners going forward.
The enthusiast community isn't particularly happy about the delayed arrival of AIB cards, or about the fact that the reference 5700 and 5700 XT are louder than equivalent Nvidia GPUs. The hope is that dual- and triple-fan coolers will offer better acoustics than AMD's default reference designs. That's usually a very safe bet.
After testing the 5700, 5700 XT, Vega 64, and Radeon VII, plus an associated mix of RTX 2060, 2070, 2080, and 2080 Ti parts (both Nvidia-built and otherwise), I'll honestly say the blower-versus-open-air-cooler battle can be a bit overblown. Thermally, there's an obvious difference between the two solutions (blowers exhaust hot air out of the case, while open-air coolers simply move it around inside the chassis). What that difference means for your system depends a great deal on its configuration. Open-air coolers can offer better performance in spacious cases with good airflow, while blowers provide more consistent results. The relative noise of the two solutions depends on the specific cooler design; a blower may be louder than an open-air cooler, or vice versa. The 5700 XT (a blower) is much quieter than the Vega 64 (another blower), while the Vega 64 and Radeon VII (an open-air design) have very similar noise profiles.
One interesting aspect of the Navi reviews, however, is how much the noise measurements diverge from one review site to another. Anandtech, for example, reports the 5700 XT as a 54 dB(A) solution, compared with 61 dB(A) for the Radeon Vega 64.
That 54/61 dB(A) split corresponds most closely to my own subjective experience with the Radeon Vega 64, Radeon VII, 5700 XT, and associated Nvidia GPUs. I say that because, to my ear, the 5700 XT is far better than the Vega 64 or Radeon VII, both of which recall the bad old days of loud GPUs like the R9 290X.
Other reviewers, however, report very different results:
Guru3D reports that the Vega 64 and Radeon 5700 XT are identical in dB(A) terms and that the Radeon VII is much louder. Since the distance to the target obviously affects noise measurements, the fact that Anandtech and Guru3D measure different absolute levels doesn't bother me. What's more interesting is that one review shows the Vega 64 and 5700 XT as comparable, while the other does not.
TechPowerUp shows a third distribution, with the 5700 XT and 5700 scoring identically and the Radeon VII coming in below the Vega 64. Three well-regarded technical review sites, three different results. From my own subjective experience, the one that "sounds" most correct is Anandtech's, but a number of factors affect noise measurements, including relative background noise levels, open-bench versus closed-case testing, distance from the target, and the equipment used to perform the test. Individual GPU-to-GPU variation may also be at work here.
In my opinion, the 5700 and 5700 XT land firmly on the "quiet" side of the "Is this GPU quiet enough to use or not?" question. They aren't as quiet as the RTX 2060 or 2070 we tested for the same review, but they're vastly quieter than the Radeon VII or the Vega 64. I've been known to wear earplugs when testing those two cards in an open case to avoid hearing damage, though the fact that I already have hearing damage in my left ear makes me paranoid about injuring it further. I've used a Vega 64 in my own system, and I didn't like how loud it was when gaming without headphones. The Radeon 5700 XT doesn't cause the same problem.
Radeon AIB cards have often been quieter than the reference models, so it's likely that will continue to be the case. We'll be checking whether those cards offer reasonable value for the money when they arrive on the market in larger quantities. Reference card designs will continue to exist alongside these new cards.
Back in February, I wrote about how AMD and Nvidia had collectively launched the least-popular high-end GPU refresh cycle in the history of the gaming industry. With AMD's Navi 5700 and 5700 XT now launched and Nvidia's riposte, the RTX 2060 Super and 2070 Super, on the market, it makes sense to revisit that conclusion. How have things changed a bit more than six months later?
Quite a lot, actually, as long as you aren't buying at the very top of the market. Before reviewing the details of the changes, let me clarify some terms. Historically, GPU price ranges have looked like this:
Budget: $150 or less
Midrange: $150-$300
High-end: $300-$500
Ultra-high-end: $500+
When Nvidia introduced the RTX family, prices moved up considerably. Instead of a GTX 1070 at around $370 and a GTX 1080 between $500 and $550, the RTX 2070 was a $500 GPU, the RTX 2080 was $700, and the 2080 Ti actually sold for between $1,100 and $1,200 ($1,000 technically, but nobody ever found one at that price, as far as I know).
A publication like ours has two basic options: keep our own price bands and slot the new cards into them, or shift our price bands upward to match the manufacturers'. If you take the latter approach, AMD's Navi cards are now "midrange" GPUs, despite their $350 and $400 price tags. It's also how you end up with articles referring to the $750 iPhone XR as "entry-level" or "budget," as if Apple hadn't just killed the only pseudo-budget device it offered, the $350 iPhone SE.
Adjusting price brackets to reflect what businesses are selling isn't wrong, so long as it matches what customers are actually buying. Nvidia's next quarterly figures should provide further confirmation here, but the available data suggests that Turing sales ran well behind Pascal's at launch and may not have recovered since. If Nvidia truly believed it had established ray tracing as a feature gamers were willing to pay for, the RTX 2060, 2070, and 2080 wouldn't have received effective price cuts.
As far as ExtremeTech is concerned, at least for now, the Navi 5700 and 5700 XT are high-end cards, as are the RTX 2060, 2060 Super, 2070, and 2070 Super. The RTX 2080, 2080 Super, and 2080 Ti occupy their own distinct ultra-high-end category.
We recently measured long-term performance evolution across a range of GPUs, and we can put that dataset to a different use here. Keep in mind that in the graphs below, the GeForce RTX 2080 (non-Super) delivers nearly the same performance as the RTX 2070 Super. (The 2070S typically lands between 95 and 105 percent of RTX 2080 performance.)
Comparing the RTX 2070S/2080 with the GTX 1080, minimum frame rates are 1.18x higher at 1080p, 1.28x higher at 1440p, and 1.4x higher at 4K. Average frame rates across our entire game suite are 1.3x higher at 1080p, 1.4x higher at 1440p, and 1.44x higher at 4K.
I don't have the same depth of data for the GTX 1070 versus the RTX 2060 Super, but we know the 2060S also improves performance by about 1.15x, since it performs almost identically to the original RTX 2070. The new GPU's $400 price puts it closer to the original GTX 1070 than to the OG 1080.
As for AMD, the 5700 and 5700 XT effectively replace Vega 56 and 64. The slideshow below contains the results of our RX 5700 and 5700 XT tests. The Radeon RX 5700 matches the Vega 64 in almost every test, but costs $350 instead of $500. It consumes 74 percent of the power of Vega 64 while outperforming the RTX 2060.
As upgrades for current Vega 56 and Vega 64 owners, the best case is the jump from Vega 56 to the RX 5700 XT. I'd have to estimate the gains in that scenario, but I'm fairly confident they aren't as large as the Pascal-to-Turing improvements at Turing's refreshed prices. Vega 56 was typically 1.08x to 1.12x slower than Vega 64, but the 5700 XT's lead over Vega 64 varies significantly depending on the game. In some cases, the two GPUs are tied.
AMD gamers with older cards, or Nvidia gamers looking to switch sides, are the likeliest customers for the RX 5700 and RX 5700 XT, and the performance these cards offer makes them a potentially interesting upgrade for those buyers.
AMD's new launches have restored a better, more consumer-friendly balance at the upper end of the GPU market. The ultra-high-end market remains less friendly. The RTX 2080 Super offers the smallest performance improvement of any "Super" card and doesn't do a very good job of justifying its $200 price premium over the RTX 2070 Super. The Radeon VII and the RTX 2080 Super are only justifiable if you game in 4K, and honestly, they're not all that compelling even then.
AMD hasn't yet announced plans for the midrange market, but the company presumably has cards in the works to refresh that space as well. Let's hope it won't be long before much more efficient and powerful chips are ready to replace the RX 570, 580, and 590.
As for whether Navi or Turing is the better upgrade path, that depends somewhat on what you want: a bit more speed (relative to the competition), or features such as ray tracing? Some users may not think even these gains are enough, which I understand. But we can at least say there are genuine performance-per-dollar gains over the previous generation. Six months ago, that wasn't true.
We've covered AMD's Ryzen and Navi announcements from E3 throughout the week, but there's one aspect of the story left to discuss. We've talked about Navi and its RDNA architecture, but not about the software improvements AMD plans to ship alongside its next GPUs. Some of these gains will be available on GCN cards as well.
Let's talk about some features and improvements.
First, there are the quality-of-life gains coming to AMD's Radeon software. With Navi, the driver automatically switches your TV into low-latency game mode, if the display supports one. You'll be able to save your settings to separate files and reimport them if you need to reinstall the driver from scratch or reinstall your entire operating system. Some improvements have also been made to how WattMan reports its results.
The AMD Link streaming application now supports streaming to TVs, including Apple TV and Android TV. Wireless VR streaming is now supported as well. These improvements aren't tied to any specific GPU.
Radeon Chill is AMD's technology for reducing GPU power consumption during games. The software can now set frame-rate limits on 60Hz displays, cutting the number of frames rendered when you aren't actively controlling your character because you're AFK.
AMD's footnote on Radeon Chill is worth reading. Under the right circumstances, the feature can significantly reduce GPU power consumption, though it does affect frame rate, and the size of the total gain varies from title to title. Any GPU that previously supported Radeon Chill can take advantage of these improvements.
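To make the mechanism concrete, here's a minimal sketch of an activity-aware frame limiter in Python. This is our illustration of the general idea, not AMD's driver code, and the specific caps and idle timeout are assumptions:

```python
import time

# Illustrative caps; Radeon Chill's real heuristics are more
# sophisticated and live in the driver. These numbers are assumptions.
ACTIVE_FPS = 60
IDLE_FPS = 30
IDLE_AFTER_SECONDS = 2.0

def frame_cap(last_input_time: float, now: float) -> float:
    """Pick the FPS cap based on how recently the player gave input."""
    idle = (now - last_input_time) > IDLE_AFTER_SECONDS
    return IDLE_FPS if idle else ACTIVE_FPS

def pace_frame(frame_start: float, fps_cap: float) -> None:
    """Sleep so the frame takes at least 1 / fps_cap seconds."""
    budget = 1.0 / fps_cap
    elapsed = time.perf_counter() - frame_start
    if elapsed < budget:
        time.sleep(budget - elapsed)
```

In a real game loop, you'd call pace_frame() once per frame with the cap returned by frame_cap(); the power savings come from the GPU sitting idle during the enforced sleep.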
Next, there's Radeon Anti-Lag. According to AMD, the company has devised a method of reducing the time between pressing a button in a game and seeing the result on screen. To do this, some CPU work is delayed so that it happens alongside the GPU's work rather than being completed well in advance.
Honestly, I can't say I observed a difference between having Radeon Anti-Lag enabled and disabled. AMD has demonstrated the effect working with custom-built latency monitors attached to displays, and I believe the company; the monitor I tested with may simply have had slightly too much latency of its own. I'm also at an age when motor reflexes have already begun to decline, and if I'm honest, I was never a very good twitch gamer to begin with.
At best, this feature shaves a few milliseconds off your total latency. If you're good enough to compete at that level, it could be worth something; that's not a judgment I feel qualified to make.
Anti-Lag is supported in DX11 on all AMD GPUs, while support for DX9 games is Navi-only. DX12 games aren't currently supported because of that API's very different implementation requirements.
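The principle is easier to see in code. Below is a minimal sketch of capping the CPU's render-ahead with a semaphore; real implementations live in the driver, and MAX_FRAMES_IN_FLIGHT, sample_input, simulate, and submit_to_gpu are all hypothetical placeholders:

```python
import threading

# Cap how far the CPU may run ahead of the GPU so input is sampled
# closer to display time. Real Anti-Lag lives in the driver; this is
# illustrative only.
MAX_FRAMES_IN_FLIGHT = 1
gpu_slots = threading.Semaphore(MAX_FRAMES_IN_FLIGHT)

def cpu_frame(sample_input, simulate, submit_to_gpu):
    gpu_slots.acquire()                # wait until the GPU drains a frame,
    state = simulate(sample_input())   # so input is sampled as late as possible
    submit_to_gpu(state)

def on_gpu_frame_complete():
    gpu_slots.release()                # free a slot when the GPU finishes
```

With a deeper queue, input sampled on frame N might not be displayed until frame N+2 or N+3; keeping the queue shallow means input is sampled closer to the moment its frame actually appears.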
Radeon Image Sharpening is a feature that combines contrast-adaptive sharpening with GPU upscaling to improve image quality without paying the full penalty of native 4K rendering. The slides below compare RIS enabled and disabled.
RIS is disabled in the slide above.
RIS is enabled in this slide. The effect is quite subtle; you may want to open the two images above in separate tabs, zoom in carefully, and compare the final product. There's a clear improvement in image quality in the "ON" picture, but it's a small one.
Still, small IQ improvements are generally welcome. RIS was designed by Timothy Lottes, who created FXAA while at Nvidia. Using the feature should have little to no performance impact (AMD estimates 1 percent or less). RIS is a Navi-only feature and is supported only in DX9 and DX12.
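For the curious, here's a toy version of contrast-adaptive sharpening in Python/NumPy. The real RIS/CAS implementation is a GPU shader with a different formulation; this sketch only demonstrates the core idea, which is sharpening less where local contrast is already high:

```python
import numpy as np

def adaptive_sharpen(img: np.ndarray, max_amount: float = 0.5) -> np.ndarray:
    """Toy contrast-adaptive sharpen for a grayscale float image in [0, 1].

    Sharpens less where local contrast is already high, which is the
    core trick that avoids halos and noise amplification.
    """
    p = np.pad(img, 1, mode="edge")
    # Cross-shaped 4-neighbor average around each pixel.
    neigh = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    local_contrast = np.abs(img - neigh)
    amount = max_amount * (1.0 - np.clip(local_contrast * 4.0, 0.0, 1.0))
    return np.clip(img + amount * (img - neigh), 0.0, 1.0)
```

Scaling the sharpening strength down on already-contrasty pixels is what lets this class of filter run after upscaling without producing the ringing artifacts of a naive unsharp mask.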
Finally, there is FidelityFX.
FidelityFX is AMD's new addition to GPUOpen, offered to any developer who wants to take advantage of it. Its contrast-adaptive sharpening can run on any GPU if developers choose to implement it.
Finally, some extra hardware details on Navi that weren't in our previous articles but probably should have been (blame a frenetic briefing schedule and some scrambled note-taking):
AMD plans to keep GCN GPUs on the market to handle HPC workloads. The AMD engineer we spoke with compared GCN to a sword that's extremely effective when properly balanced but relatively tiring to use, while RDNA is more of a lightsaber, focused on elegance and economy of motion. GPUs such as the MI50 and MI60 also offer far more memory bandwidth and larger memory pools than any of the Navi cards coming to market.
RDNA should eventually replace GCN in this space and correct some of the slow-path anomalies GCN suffers from. Irregular performance with certain texture formats has been fixed, for example, and RDNA has larger caches to prevent pipeline bubbles. Overall performance should be more predictable with RDNA-derived GPUs than with GCN.
There's nothing major in these details, but I wanted to include them for completeness. This concludes our E3 coverage.
AMD announced its new Radeon GPUs at E3 last night. The new Radeon RX 5700 and 5700 XT are positioned as answers to Nvidia's RTX 2060 and 2070 families rather than a frontal assault on the RTX 2080. As previously announced, the Radeon VII will remain on the market against the RTX 2080.
Navi, however, looks like a significant step forward for AMD on several fronts. We'll have a deep architectural dive in the near future, but for now, let's look at the speeds, feeds, and competitive positioning. You can click any of the slides to open a larger version in a new window.
The Radeon 5700 XT is a 40 compute unit (CU) design, with 2,560 stream processors, 9.75 TFLOPS of floating-point performance, and clocks far above anything we've seen from AMD before. AMD's new RDNA architecture, which finally replaces GCN, is significantly more efficient than its predecessor, with a projected 1.25x increase in performance per clock. The TDP on the 5700 XT is 225W, compared with 295W on the Vega 64. Power is supplied via one 8-pin and one 6-pin connector.
The Radeon 5700 is a 36 CU design with 2,304 stream processors and the same 8GB RAM pool as the 5700 XT. As expected, there's no sign of HBM on these products. The 5700 carries a 180W TDP and the same 1x 8-pin + 1x 6-pin power configuration.
According to AMD, its new RDNA architecture is a major improvement over GCN, with substantial gains in per-clock performance, raw clock speed, and performance per watt.
The 1.25x per-clock performance improvement doesn't account for clock speed gains; it stacks on top of them. This is a good time to discuss AMD's newly defined clock scheme, so let's dig in.
The base clock on these cards is equivalent to what you'd see when running a power-virus workload like FurMark. The "Game Clock" is a conservative estimate of the clocks you'll see when running current titles over extended periods. According to AMD, this isn't the GPU's median clock over time while gaming; it's actually set a bit lower than that, to allow for silicon and cooling variation from one system to another. AMD derived its Game Clock values by measuring average GPU clock speeds across 25 different games.
The Boost Clock is an opportunistic clock that the GPU will try to hit when possible. Even this value does not represent the maximum potential speed (AMD has described it as "close to the maximum"). With improvements to the underlying architecture and overall design, GPU clocks are much higher than anything we've seen before from AMD. (We will talk about this in more detail in the coming days.)
According to AMD, power improvements give RDNA 1.5x better performance in power-limited environments than an equivalent GCN configuration. The move to 7nm accounts for just over 20 percent of the total gain, with design and power improvements contributing roughly another 15 to 18 percent. Most of the improvement comes from the GPU core itself. One teaser for the deep dive: RDNA can issue instructions every cycle, whereas GCN takes at least four cycles. The net result of these enhancements is a GPU significantly better than the cards it replaces, dramatically improving performance while simultaneously cutting power consumption.
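A quick worked example of that issue-rate claim, assuming the publicly described SIMD organizations (GCN runs 64-thread wavefronts on 16-lane SIMDs, while RDNA runs 32-thread waves on 32-lane SIMDs):

\[
\text{GCN: } \frac{64\ \text{threads}}{16\ \text{lanes}} = 4\ \text{cycles per instruction}
\qquad
\text{RDNA: } \frac{32\ \text{threads}}{32\ \text{lanes}} = 1\ \text{cycle}
\]

At equal clocks, a chain of dependent instructions finishes issuing in a quarter of the cycles, which helps explain the per-clock gains AMD is claiming.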
AMD expects the Radeon 5700 XT to deliver roughly 1.14x the performance of Vega 64 while consuming 23 percent less power. The rated TDPs on the 5700 XT and 5700 are still higher than those of their Nvidia counterparts, but TDP isn't a substitute for measured power draw; we'll have to test the hardware to see how the GPUs compare. The performance improvement per unit of die area is substantial: Vega 64 was a 495mm2 part, while Navi is a 251mm2 part. The RTX 2060 and 2070, for comparison, are 445mm2.
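Using AMD's own figures, a back-of-the-envelope estimate of the per-area gain (ours, not AMD's):

\[
\frac{\text{perf/area (Navi)}}{\text{perf/area (Vega 64)}} = 1.14 \times \frac{495}{251} \approx 2.25
\]

That's roughly 2.25x the performance per square millimeter, with the caveat that the 1.14x figure is AMD's own and the move to 7nm makes area comparisons against 14nm Vega imperfect.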
These gains should put the 5700 XT slightly ahead of the GTX 1080 and the RTX 2070 in overall performance. AMD also told us it has heard the feedback, ours included, about reference cooler noise.
The 5700 and 5700 XT not only use blower-style fans that exhaust heat out of the system; AMD is also promising to cap them at 43 dBA. (It's not clear whether that's an absolute maximum or the maximum provided you don't manually set the fan to 100 percent.) That should address one of the recurring complaints about AMD's reference cards, namely that they're often much louder than the competition. New buyers will also receive a voucher for a three-month subscription to Microsoft's Xbox Game Pass for PC service.
We'll have much more to say about Navi and its underlying architecture in the days ahead. The GPU appears to represent a significant step toward closing AMD's power-efficiency gap with Nvidia, and it seems to hit a higher performance-per-watt target than the Radeon VII.
Both cards are priced competitively, given the performance shown. The Radeon RX 5700 XT will sell for $449, while the RX 5700 is $379. The RX 5700's price is noticeably higher than current RTX 2060 cards, which sell for $335 at the low end of the market. The RX 5700 XT, at $450, compares with the RTX 2070's $500 price. A $500 50th Anniversary Edition of the card will also be available.
Samsung and AMD have signed an agreement licensing Radeon IP to the mobile giant. This is a major move with important long-term ramifications for Samsung, which is looking to further develop its own SoCs, and it's part of a long-term trend toward increased specialization.
Ten years ago, AMD sold its low-power mobile graphics business to Qualcomm, where it became the Adreno brand. At the time, the deal made reasonable financial sense for the company. Although some analysts criticized AMD for not trying to establish itself in the smartphone market, Intel's own flameout in that space proved the wisdom of the approach. In 2009, AMD didn't have the resources to actively pursue smartphone IP development and competitive solutions in the field. Today, the situation is very different, and Samsung isn't paying AMD to design solutions for it; it's licensing the IP stack to build its own. The terms of the agreement have not been disclosed.
Samsung's decision to license AMD's Radeon technology reflects several trends in the mobile space. In recent years, we've seen more smartphone manufacturers working to customize their own SoCs. That customization isn't always CPU-centric: some companies continue to rely on ARM for Cortex CPU cores but have built their own Neural Processing Units (NPUs) and/or supplied custom IP in the form of DSP and GPU blocks, as Qualcomm does. Whatever the specifics, the trend among vendors, including companies not generally known for high-end smartphones, is toward incorporating more custom semiconductor design work.
Samsung has pursued a split strategy for several years. It ships its own Exynos SoCs, based on a custom CPU design, in some international markets while using Qualcomm SoCs for US products to ensure the best possible experience. According to Anandtech, the current Exynos 9820 is a tri-cluster design, with two M4 cores (Samsung's custom design), two Cortex-A75 cores, and four Cortex-A55 cores. The overall CPU design, however, still lags Qualcomm's Snapdragon and ARM's Cortex-A76 in energy efficiency and overall performance.
The decision to license GPU technology is interesting because the ARM Mali GPU core in devices like the Galaxy S10 actually holds up fairly well in games against the Snapdragon 855. Although a gap remains in the Snapdragon 855's favor overall, the GPU performance difference is smaller than the relative CPU performance difference. The implication is that Samsung thinks it can further close the gap with Qualcomm where one exists.
The graph above is from Anandtech's review of the Exynos 9820, but GPU performance tests show very different characteristics depending on the benchmark. In 3DMark, Android devices dramatically outperform the iPhone; in GFXBench, the opposite is true.
Overall, this deal should help Samsung continue developing its own intellectual property and building custom GPU solutions. It will be a few years, however, before we see the fruits of the agreement. AMD can license its IP to Samsung immediately, but building a new GPU on new IP always takes time.