AMD saw its share of the graphics market increase in the second quarter of 2019, with total shipments surpassing Nvidia's for the first time in five years. At the same time, Nvidia maintains a stranglehold on the desktop add-in board market, with roughly two-thirds of total share. And while AMD's gains are significant, it's also important to understand why they didn't translate into a blowout second quarter on the financial side.
First, let's talk about the total graphics market. There are three players here: Intel, AMD, and Nvidia. Since this report covers the entire graphics space, and two-thirds of systems ship without a separate graphics processor, AMD and Nvidia are minority players in this market. AMD, however, has one advantage: like Intel, it builds CPUs with integrated graphics. Nvidia does not. We must therefore recognize that the total market includes companies offering very different product ranges:
Intel: Integrated only (until next year). It builds no discrete GPUs, but accounts for the majority of total shipments.
AMD: Integrated GPUs and discrete cards, but with little presence in high-end gaming laptops.
Nvidia: No integrated solutions. Discrete GPUs only.
According to JPR, AMD's shipments increased by 9.8 percent, Intel's dropped 1.4 percent, and Nvidia's were essentially flat, up 0.04 percent. This recalls reports released earlier this year suggesting that AMD would take market share from Intel because of the latter's CPU shortage. In addition to its overall report, JPR also publishes a separate document on the desktop add-in board (AIB) market. That report covers only the discrete GPU space shared by Nvidia and AMD (Intel will compete here when Xe launches next year). Here again, AMD saw significant growth, with a 10 percent increase in its market share.
If you follow the financial reports, you may remember that AMD's results for the second quarter of 2019 were reasonable, but not spectacular. Both companies recorded year-on-year declines in sales. Nvidia's fiscal Q2 2020 results, released a few weeks ago, show gaming revenue down 27 percent year-on-year. AMD doesn't break out GPU and CPU sales separately, grouping them into its Computing and Graphics segment, but that segment's revenue was also down on an annual basis:
During the first half of the year, AMD was thought to be gaining market share at Intel's expense, but the general view was that these gains came at the lower end of the market. For example, AMD launched its first Chromebooks with old Carrizo APUs. This explains the growth in unit shipments in the total GPU space, as well as why the company didn't see a huge profit from its gains. The growth in the AIB market can be explained by sales of GPUs like the RX 570. That card has always been an excellent value; Nvidia reportedly didn't bother sending out review units of the GTX 1650 because the RX 570 is significantly faster, according to several reviews. But GPU sales have dropped overall. According to JPR, AIB sales fell 16.6 percent quarter-on-quarter and 39.7 percent year-on-year.
This explains why AMD's strong market-share gains did not translate into improved Computing and Graphics revenue. The company earns less from low-end sales than from high-end cards, and its market-share improvements were overshadowed by a significant year-over-year drop in AIB sales, likely due to the lingering crypto hangover and a weak enthusiast market in the second quarter.
The third quarter should be a much bigger quarter for both companies. Not only does the market typically improve seasonally, but Nvidia and AMD have both introduced lower prices and new products. AMD's Navi family delivers the excellent Radeon RX 5700 and 5700 XT, both faster than Nvidia's refreshed RTX 2060 and RTX 2070 (now the RTX 2060 Super and RTX 2070 Super, respectively). Nvidia, on the other hand, offers ray tracing and variable rate shading, two features used in very few games today but which may become more popular in the future. AMD lacks these features.
Both companies have adopted opposing strategies to increase their respective market share. It will be interesting to see whether consumers respond to their distinct value propositions.
One of the new features built into DirectX 12 is support for variable rate shading, also known as coarse shading. The idea behind variable rate shading is simple: in the vast majority of 3D games, the player does not pay equal attention to everything on screen. As far as the GPU is concerned, however, every pixel is typically shaded at the same rate. VRS/CGS makes it possible to redistribute shader work from individual pixels across larger groups of pixels; Intel demonstrated the feature at its Architecture Day last year, showing 2×2 and 4×4 block grids.
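To make the idea concrete, here is a minimal sketch of coarse shading in plain Python. It is illustrative only, not any vendor's API: a stand-in "shader" is invoked once per block instead of once per pixel, and the result is broadcast to every pixel the block covers. At a 2×2 shading rate, shader invocations drop by 4x.

```python
def shade(x, y):
    # Stand-in for an expensive per-pixel shader (hypothetical function).
    return (x * 31 + y * 17) % 256

def render(width, height, rate=1):
    """Shade an image; rate=1 is per-pixel, rate=2 shades 2x2 blocks."""
    image = [[0] * width for _ in range(height)]
    invocations = 0
    for by in range(0, height, rate):
        for bx in range(0, width, rate):
            color = shade(bx, by)            # one invocation per block
            invocations += 1
            for y in range(by, min(by + rate, height)):
                for x in range(bx, min(bx + rate, width)):
                    image[y][x] = color      # broadcast to covered pixels
    return image, invocations

full, n_full = render(8, 8, rate=1)      # 64 shader invocations
coarse, n_coarse = render(8, 8, rate=2)  # 16 invocations: 4x less shading work
```

The output image has the same dimensions either way; the coarse version simply has less spatial detail, which is why the technique targets regions the player is unlikely to scrutinize.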
In a blog post explaining the subject, Microsoft writes:
VRS allows developers to selectively reduce the shading rate in areas of the frame that do not affect visual quality, letting them gain extra performance in their games. This is really exciting, because that extra performance means higher framerates and lets lower-spec hardware run games better than ever before.
VRS also allows developers to do the opposite: apply increased shading rates only in the areas where it matters most, meaning even better visual quality in games.
VRS is the latest in a long line of tricks designed to help developers focus GPU power where it's needed most. This kind of technique will become more and more important as Moore's law slows and it becomes harder to extract additional performance from process node advances. 3DMark recently added a new benchmark to show the impact of VRS.
First, here is a comparison of the feature enabled versus disabled.
VRS disabled. Image provided by UL. Click to enlarge.
VRS enabled. Image provided by UL. Click to enlarge.
There is also a video of the effect in action, which gives you an idea of how it looks in motion.
As for the performance impact, Hot Hardware recently took the feature for a spin on the graphics built into Intel's 10th-generation CPUs. The performance improvement from enabling the feature was about 40 percent.
These gains are not unique to Intel. HH also tested several Nvidia GPUs and saw significant gains on those cards as well. Unfortunately, VRS is currently limited to Nvidia and Intel. AMD does not support the feature and may not be able to enable it on current Navi parts.
It always takes time to build support for features like this, so a lack of adoption early on isn't necessarily a critical problem. At the same time, features that stretch a GPU's compute power by reducing the cost of various effects tend to be popular with developers. They can help games run on low-power hardware and in form factors they otherwise couldn't support. All of rasterization is basically a bag of tricks for modeling the real world without having to simulate one, and choosing where to spend resources to optimize performance is exactly the kind of efficiency lever developers love. For now, support is limited to a few architectures, Turing and Intel's integrated Gen 11 graphics, but that will change over time.
VRS support was patched into Wolfenstein II: The New Colossus and shipped in Wolfenstein: Youngblood. Firaxis has also demonstrated the effect in Civilization VI, which implies support could come to that title at some point. The new VRS feature test is a free update for 3DMark Advanced or Professional Edition if you own those versions, but it is not included in the free Basic Edition.
Note: An earlier version of this story stated that VRS was not currently supported by any game. That claim was based on data from another site that was accurate when written, but is no longer accurate after support was patched into Wolfenstein II and shipped in Youngblood. ET regrets the error.
The featured image for this article is the VRS-enabled screenshot provided by UL. Did you notice? It's fun to check either way.
Most of our GPU coverage approaches the topic from a consumer and game-benchmarking perspective, but I promised at the Radeon VII's launch to look at the compute side of performance. With the 5700 XT having debuted recently, we had the opportunity to revisit the topic with a new AMD GPU architecture and compare RDNA against GCN.
In fact, the overall compute picture is at an interesting juncture. AMD has said it wants to become a more serious player in enterprise compute environments, but also that GCN will continue to exist alongside RDNA in this space. The Radeon VII is a consumer variant of AMD's MI50 accelerator, with half-rate FP64 support. If you know you need double-precision FP64 compute, the Radeon VII fills a slot no other GPU in this comparison does.
The Radeon VII has the most RAM bandwidth and is the only GPU in this comparison to offer meaningful double-precision performance. But while these GPUs have relatively similar paper specifications, there are big differences in performance, and the numbers don't always break down the way you might expect.
One of AMD's main talking points for the 5700 XT is how Navi represents a fundamentally new GPU architecture. The 5700 XT proved moderately faster than the Vega 64 in our consumer-side tests, but we also wanted to check the compute situation. Keep in mind, though, that the 5700 XT's newness works against us here. Some applications may need updates to take full advantage of its capabilities.
Our test results contain data from both Blender 2.80 and the Blender standalone benchmark, 1.0beta2 (published August 2018). Blender 2.80 is a major release of the application, with a number of important changes. The standalone benchmark is not compatible with the Nvidia RTX family, which must be tested with the newer software. We initially tested with the Blender 2.80 beta, then the final version dropped, so we threw out the beta results and retested.
There are significant performance differences between the Blender 1.0beta2 benchmark and version 2.80, and one scene, Classroom, does not render correctly in the new version, so it was removed from our 2.80 comparisons. Blender allows the user to specify a tile size in pixels, which controls how much of the scene is processed simultaneously. The Python code for the Blender 1.0beta2 benchmark indicates that the test uses a 512×512 (X/Y coordinate) tile size for GPUs and 16×16 for CPUs. However, most of the scene files included in the test default to a 32×32 tile size if loaded into Blender 2.80.
We tested Blender 2.80 in two different modes. First, we tested all compatible scenes using each scene's default tile size as loaded: 16×16 for Barbershop_Interior and 32×32 for all the other scenes. We then tested the same renders with a 512×512 tile size. Until now, the rule of thumb was that larger tile sizes were good for GPUs while smaller sizes were good for CPUs. That seems to have changed somewhat with Blender 2.80. AMD and Nvidia GPUs react very differently to larger tile sizes, with AMD GPUs speeding up at higher tile sizes and Nvidia GPUs losing performance.
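As a rough illustration of what the tile setting controls (a plain-Python sketch, not Blender code): the tile size determines how many chunks of work the renderer dispatches to cover a frame. Larger tiles mean far fewer, bigger dispatches, which tends to suit wide GPUs; smaller tiles mean many cheap dispatches, which historically suited CPUs.

```python
from math import ceil

def tile_count(width, height, tile):
    """Number of tiles the renderer must dispatch to cover one frame."""
    return ceil(width / tile) * ceil(height / tile)

# A 1920x1080 frame at the two tile sizes discussed above:
tiles_32 = tile_count(1920, 1080, 32)    # 60 * 34 = 2040 tiles
tiles_512 = tile_count(1920, 1080, 512)  # 4 * 3 = 12 tiles
```

In Blender 2.80 itself, the setting lives in the render properties under Performance, and is exposed to Python as `scene.render.tile_x` / `scene.render.tile_y`.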
Since the scene files we tested were created in an older version of Blender, it's possible this affects our overall results. We worked extensively with AMD for several weeks to explore aspects of Blender's performance on GCN GPUs. GCN, Pascal, Turing, and RDNA all scale differently when moving from 32×32 to 512×512, with Turing losing less performance than Pascal and RDNA gaining more performance than GCN in most cases.
All of our GPUs benefited substantially from not using a 16×16 tile size for Barbershop_Interior. Although the test defaults to 16×16 for that scene, GPU rendering performs poorly at that tile size.
Sorting out the differing results between the Blender 1.0beta2 benchmark, the Blender 2.80 beta, and the final 2.80 release held up this review for several weeks, and we moved through several AMD driver releases while working on it. All of our Blender 2.80 results were gathered with Adrenalin 2019 Edition 19.8.1.
All GPUs were tested on an Intel Core i7-8086K system using an Asus Prime Z370-A motherboard. The Vega 64, Radeon RX 5700 XT, and Radeon VII were all tested with Adrenalin 2019 Edition 19.7.2 (7/16/2019) for everything except Blender 2.80. All Blender 2.80 tests were performed with 19.8.1 rather than 19.7.2. The Nvidia GeForce GTX 1080 and Gigabyte Aorus RTX 2080 were both tested with Nvidia's 431.60 Game Ready driver (7/23/2019).
CompuBench 2.0 runs GPUs through a series of tests measuring various aspects of compute performance. Kishonti, CompuBench's developer, doesn't seem to offer much detail on how the tests were designed. Level set simulation likely refers to the use of level sets for analyzing surfaces and shapes. Catmull-Clark subdivision is a technique for creating smooth surfaces. N-body simulations model systems of particles in motion under the influence of forces such as gravity. TV-L1 optical flow is an implementation of an optical flow estimation method used in computer vision.
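For readers unfamiliar with the N-body workload, here is a minimal sketch of one gravity timestep in plain Python. It shows the O(n²) all-pairs force summation that the GPU test parallelizes at much larger scale; the units, masses, and softening constant here are arbitrary, illustrative choices, not CompuBench's actual parameters.

```python
G = 1.0  # gravitational constant in arbitrary units

def step(bodies, dt):
    """Advance (x, y, vx, vy, m) particles one timestep under mutual gravity."""
    forces = []
    for i, (xi, yi, _, _, mi) in enumerate(bodies):
        fx = fy = 0.0
        for j, (xj, yj, _, _, mj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r2 = dx * dx + dy * dy + 1e-9   # softening term avoids divide-by-zero
            inv_r3 = r2 ** -1.5
            fx += G * mi * mj * dx * inv_r3  # force on i points toward j
            fy += G * mi * mj * dy * inv_r3
        forces.append((fx, fy))
    out = []
    for (x, y, vx, vy, m), (fx, fy) in zip(bodies, forces):
        vx += fx / m * dt
        vy += fy / m * dt
        out.append((x + vx * dt, y + vy * dt, vx, vy, m))
    return out

# Two equal masses at rest attract each other symmetrically:
bodies = [(-1.0, 0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0, 1.0)]
bodies = step(bodies, 0.01)
```

Every pair interaction is independent, which is why this workload maps so naturally onto thousands of GPU threads.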
SPEC Workstation 3.1 contains many of the same workloads as SPECViewPerf, but adds GPU compute workloads, which we break out separately. A complete breakdown of the workstation test and its application suite can be found here. SPEC Workstation 3.1 was run in its native 4K test mode. Although this series of tests was not submitted to SPEC for official publication, our SPEC Workstation 3.1 tests complied with the organization's published testing rules, which can be found here.
We've put together two sets of results: a group of synthetic tests created with SiSoft Sandra that examine various aspects of these chips, including processing power, memory latency, and internal characteristics, and a broader test suite touching on compute and rendering performance in a range of applications. Since the SiSoft Sandra 2020 tests are all unique to that application, we broke them out into their own slideshow.
The Gigabyte Aorus RTX 2080's results should be read as equivalent to an RTX 2070S. The two GPUs perform nearly identically in consumer workloads and should match in workstation tasks as well.
SiSoft Sandra is a general-purpose system information utility and a comprehensive benchmark suite. Although synthetic, it is probably the most thorough synthetic evaluation utility available, and its developer, Adrian Silasi, has spent decades refining and improving it, adding new features and tests as CPUs and GPUs evolve.
Our SiSoft Sandra-specific results are presented below. Some of our OpenCL results with the 5700 XT are a little odd, but according to Adrian, he has not yet been able to optimize his code for the 5700 XT. Consider these results preliminary, interesting but perhaps not yet indicative, for this GPU.
Our SiSoft Sandra 2020 benchmarks largely point in the same direction. If you need double-precision floating point, the Radeon VII is a compute monster. It isn't clear how many buyers fall into that category, but in certain areas, such as imaging and high-precision workloads, the Radeon VII shines.
The RDNA-based Radeon 5700 XT is no slouch in these tests either, though we are in contact with Silasi regarding problems we encountered during testing. Improved support may change some of these results in the coming months.
Now that we've covered Sandra's performance, let's move on to the rest of our benchmark suite. Our other results are included in the slideshow below:
What do these results tell us? Quite a few interesting things. First of all, RDNA is downright impressive. Keep in mind that we tested this GPU in professional and compute-oriented applications, none of which have been updated or patched for it. There are clear signs this affected our benchmark results, including tests that wouldn't run or ran slowly. Despite that, the 5700 XT impresses.
The Radeon VII impresses as well, but in different ways than the 5700 XT. SiSoft Sandra 2020 shows the advantage this card brings to double-precision workloads, where it performs far better than anything else on the market. Artificial intelligence and machine learning have grabbed more of the spotlight lately, but if you work in a field where double-precision GPU compute is essential, the Radeon VII packs considerable firepower. SiSoft Sandra includes tests that can run in D3D11 rather than OpenCL, but since OpenCL is CUDA's main competitor, I chose to stick with OpenCL in every case except the memory latency tests, which showed lower latency for all GPUs under D3D than under OpenCL.
AMD has already announced its intention to keep GCN in the compute market, with Navi oriented toward consumers, but there is no indication it intends to keep evolving GCN on a trajectory distinct from RDNA. The most likely explanation is that GCN simply won't be replaced at the top of the compute market until Big Navi is ready in 2020. Based on what we've seen, there's plenty to be excited about. There are already applications in which RDNA is significantly faster than the Radeon VII, despite the large gaps between the cards in double-precision capability, RAM bandwidth, and memory capacity.
Blender 2.80 presents an interesting set of comparisons between RDNA, GCN, and CUDA. Tile size has a huge impact on GPU performance, but whether that impact is positive or negative depends on the vendor and architectural family of the GPU you use. Pascal and Turing generally fared better with smaller tiles; the 512×512 tile size was better in aggregate for all GPUs, but only because the improvement in Barbershop_Interior's render time outweighed the regressions in every other scene for Turing and Pascal. The RTX 2080 was the fastest GPU in our Blender benchmarks, but the 5700 XT delivers excellent performance for its price.
I don't want to make sweeping statements about Blender 2.80's settings; I am not a 3D rendering expert. These test results suggest that Blender now works best with larger tile settings on AMD GPUs, while smaller tile settings may yield better results on Nvidia GPUs. In the past, both AMD and Nvidia GPUs preferred larger tile sizes. The pattern could also be tied to the specific scenes in question. If you use Blender, I suggest experimenting with different scenes and tile sizes.
In the end, these results suggest that GPU performance is more variable in some of these professional markets than you might expect from gaming. There are specific tests in which the 5700 XT is significantly faster than the RTX 2080 or the Radeon VII, and others in which it falls clearly behind them. Immature OpenCL drivers may explain some of this, but we see flashes of brilliance in these performance figures. The Radeon VII's double-precision performance puts it in a class of its own, but the Radeon RX 5700 XT is a much cheaper and quieter card. Depending on the applications you target, AMD's new $400 GPU might be the best choice on the market. In other scenarios, the Radeon VII and RTX 2080 each make a claim to being the fastest card available.
The featured image is the final render of the Benchmark_Pavilion scene included in the Blender 1.0beta2 standalone benchmark.
When Microsoft launched Windows 10, its position on DirectX 12 was clear: Windows 10 would be the only operating system supporting the company's latest API. For years, the company maintained that position. Then, earlier this year, Microsoft announced that one game, World of Warcraft, would be allowed to use the DX12 API on Windows 7.
The reason for this dispensation? Probably China. World of Warcraft has always had a large Chinese player base, and Blizzard's decision to add DX12 support to WoW was an important step for both the developer and the API. Now Microsoft has announced an extension of the program. In a short blog post pointing to a set of API documentation, Microsoft writes:
We received warm feedback from the gaming community, and we continued to work with several game studios to further evaluate this work. To better support game developers at larger scales, we are publishing the following resources to allow game developers to run their DirectX 12 games on Windows 7.
The developer guidance document, which walks through porting a DX12 game to Windows 7, contains useful information about how difficult it is to get games running on the older operating system and what the differences are. Microsoft states:
We only ported the D3D12 runtime to Windows 7. Therefore, the differences in the graphics kernel between Windows 10 and Windows 7 still require some game code changes, mainly around the presentation code path, the use of monitored fences, and the management of memory residency (detailed below). Early adopters reported that it took between a few days and two weeks of work to get their D3D12 games ready to run on Windows 7, though the actual engineering work required for your game may vary.
There are technical differences between DX12 on Windows 7 and DX12 on Windows 10. DirectML (Direct Machine Learning) is not supported on Windows 7, but every other feature implemented through the Windows 10 October 2018 Update is supported. There are differences in how the APIs are used (D3D12 on Windows 7 uses different present APIs), and some fence usage patterns are also unsupported.
However, there are limitations to be aware of. Only 64-bit Windows 7 with SP1 installed is supported. There's no PIX or D3D12 debug layer on Windows 7, no shared surfaces or cross-API interop, no SLI/LDA support, no D3D12 video, and no WARP support. According to Microsoft, "HDR support is orthogonal to D3D12 and requires DXGI/Kernel/DWM functionality that exists in Windows 10 but not Windows 7." This seems to imply that HDR content can work on Windows 7, but it may be up to the developer to implement it correctly.
It's honestly a little surprising to see. Windows 7 is scheduled for retirement in just a few months. The implication is that Microsoft is taking this step to serve players still on Windows 7, but the Steam Hardware Survey suggests they're a distinct minority. According to the SHS, Windows 10 holds a 71.57 percent share, while 64-bit Windows 7 indexes at 20.4 percent. What's interesting is that the SHS skews much more heavily toward Windows 10 than generic OS surveys do.
StatCounter data puts Windows 10 at 58.63 percent of the market in July, compared with 31.22 percent for Windows 7. This suggests gamers tend to update their systems faster than the mass market, which makes sense. But from what we've read, Windows 7 players may be concentrated in China, where it remains the most popular OS: 49.46 percent of Chinese gamers use Windows 7, versus just 41.13 percent gaming on Windows 10. Even if Chinese players do eventually migrate to Windows 10, and it isn't clear they are, that country's share of the user base makes supporting them all the more important.
We don't know how Microsoft intends to handle the broader support question, but this may be the company's way of offering a degree of backward compatibility without committing to anything equivalent on the security front. Microsoft wants its entire customer base on Windows 10. It's surprising to see the company deploy DX12 in the opposite direction, but we'd be stunned if it granted Windows 7 a reprieve and kept publishing patches for it.
MS may also hope to encourage developers to embrace DX12 more broadly. Three years after their debuts, neither DX12 nor Vulkan has done much to revolutionize APIs or gaming. Developers do use the APIs, but we've seen relatively few wring something unique out of them. The need to support older hardware and a wide range of users, along with the fact that these APIs require developers to get much closer to the underlying hardware, seems to weigh heavily on their overall adoption.
The high-performance GPU industry has been a two-horse race for nearly two decades. After the collapse of 3dfx, no new company emerged to seriously challenge the ATI/Nvidia duopoly. While Intel holds a substantial share of the overall GPU market, its integrated graphics business focuses only on 2D, video, and basic 3D gaming. Intel's upcoming Xe architecture, scheduled for 2020, will be its first serious attempt to break into the consumer market. Now there's word of a potential fourth player in the field, though it may target a more specialized niche.
According to THG, Jingjia Micro is a civil-military integration company that has so far focused primarily on GPU development for the military market. The company started by building the first Chinese GPU, the JM5400, on a 65nm process. The JM5400's success allowed the company to expand and migrate to newer manufacturing nodes; its next products, the JM7000 and 7200, were built on 28nm. Jingjia Micro now wants to extend its reach and target GTX 1050 and GTX 1080 levels of performance with a pair of new chips, the JM9231 and JM9271.
A post at cnBeta has additional information. The JM7200 is currently said to offer performance equivalent to the GeForce GT 640, although with a much lower power envelope: 10W, versus Nvidia's specified 50W for that card. We'd want that claim independently verified. The OEM variant of the GT 640 was a 40nm Fermi-based part, but that chip carried a 65W TDP. The 50W variant was a Kepler-derived part built on 28nm, the same process node Jingjia Micro uses. The JM part is also said to carry 4GB of RAM, while the 50W GT 640 shipped with just 1GB of GDDR5.
The JM9231 and JM9271 are expected to be the first fully programmable GPUs Jingjia Micro has developed; references suggest the older JM5400 and JM7200 families are based on fixed-function rendering pipelines. Those limitations wouldn't fly with modern Windows APIs, but the company started as a military GPU provider, and those applications obviously have very different product and API certification requirements.
The new JM parts obviously won't chase Nvidia's or AMD's highest-end cards, but even high-end performance from 2016-2017 would let them fight for the midrange and budget markets. Delivering the software stack and developer support would obviously be crucial for any gaming play, and it isn't clear whether the JM9231 or JM9271 includes performance improvements or ideas we've never seen from established vendors. Such breakthroughs are rare, but not unknown: PowerVR once tried to establish itself as a third PC graphics player with the Kyro and Kyro II, which captured market share as a unique solution with better memory-bandwidth efficiency than ATI or Nvidia.
The use of HBM memory in a product of this type is rather interesting, as is the comparatively low memory bandwidth (by HBM standards). If these products don't support modern APIs, they may be intended exclusively for military use, though in that case referencing the GTX 1080 would be a bit strange. Whatever the case, China is clearly mounting more aggressive competition in high-performance silicon. A few more years, and we could see products from vendors we've never heard of challenging established alternatives from AMD, Nvidia, and (if its Xe launch goes well) Intel.
The next generation of consoles will arrive in less than 18 months, and Microsoft is starting to share a bit more about its priorities for the next Xbox. Playability, load times, and backward compatibility for controllers and software are Redmond's top priorities for the Xbox Next launch.
"I think the area we really want to focus on next generation is frame rate and playability of the games," Spencer told GameSpot:
Making sure the games load incredibly quickly and run at the highest possible frame rate. We're also the Windows company, so we see the work being done for the PC and what developers are doing there. People love games at 60 frames per second, so making sure games can run at 4K 60fps is essential to game design.
What's interesting is that this generation, we really focused on 4K visuals and on how we brought 4K Blu-ray movies and video streaming to the console. With the Xbox One X, letting games run with 4K visuals delivered really important visual improvements. Next generation, though, playability is probably the bigger focus. How fast do games load? Do I feel I can get into the game as fast as possible, and how does it feel while I'm playing? Does this game feel different from other games I've played? That's our focus."
That's more or less what ET predicted earlier this year. 60fps is a far more realistic target for the Xbox Next than the 240fps figure that had been rumored. And despite various vague claims that the Xbox Next will support 8K, Spencer makes no mention of it as a gaming resolution target. There is no chance the 2020 console will have a GPU powerful enough to drive that resolution, so we're happy to see the company focus on other aspects of the experience.
According to Microsoft, backward compatibility is a key pillar of the Xbox's evolution. Xbox One, Xbox 360, and original Xbox games will all continue to be supported on the Xbox Next, Spencer told GameSpot. The company promised that this backward compatibility commitment also applies to controllers, stating: "So, really, the products you've bought from us, whether it's the games or the controllers you're using, we want to make sure those are future compatible with the highest-fidelity version of our console, which at that time will obviously be the one we just launched."
Historically, a handful of games have specifically targeted 60 frames per second on consoles, but it was an unusual frame-rate target. The Xbox One X and PS4 Pro expanded the list of titles running at that rate by encouraging developers to ship updates for new and existing games that added resolution options or allowed play at higher frame rates than the base title supported. Actually moving the game industry (back) to a 60fps target would be quite a feat.
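The arithmetic behind that feat is simple but unforgiving: the frame-time budget halves when the frame-rate target doubles. A quick sketch:

```python
def frame_budget_ms(fps):
    """Milliseconds available to simulate and render a single frame."""
    return 1000.0 / fps

# Doubling the target framerate halves the per-frame budget:
budget_30 = frame_budget_ms(30)  # ~33.3 ms per frame
budget_60 = frame_budget_ms(60)  # ~16.7 ms per frame
```

Everything a game does each frame, including simulation, AI, and rendering, has to fit inside that window, which is why 60fps is a design commitment rather than just a hardware spec.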
There is reason to think the two console makers could pull it off. The Xbox Next and PlayStation 5 will both reach higher performance levels than the existing Xbox One X and PS4 Pro. Using Ryzen CPUs and an RDNA-derived GPU in both platforms guarantees a leap in console performance, but the perceived improvement in visual quality from one console generation to the next has shrunk each cycle. Rather than simply chasing new levels of detail, Spencer wants developers to focus on consistency and load times, two areas where major generational gains can still be had, in part through the adoption of SSDs.
A major question is how the 1080p/4K split will be handled. Spencer refers to a 4K/60fps target, but 1080p still accounts for a high percentage of TVs sold, and the installed base for the older standard is huge. The easiest way for Microsoft to handle 1080p output is to render internally at 4K, then downsample to 1080p. This effectively applies supersampled anti-aliasing to the whole image and dramatically improves image quality compared with native 1080p rendering. With the PS4 Pro and Xbox One X, Microsoft and Sony gave developers many ways to leverage the added power of the new consoles to enhance the base experience, and we expect a similar approach here. One benefit of pairing a powerful GPU with a low-resolution display is that you can enable secondary features like AA without worrying about the performance impact. We hope Microsoft brings some of this flexibility to its Xbox Next design.
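Since 4K is exactly twice 1080p in each dimension, the simplest form of that supersampling is a box filter: average each 2×2 block of the 4K render into one 1080p pixel. A minimal illustrative sketch (real consoles would do this on the GPU, and may use fancier filters):

```python
def downsample_2x(image):
    """Average 2x2 blocks of a 2H x 2W image into an H x W image."""
    h, w = len(image) // 2, len(image[0]) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            block = (image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
                     image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1])
            out[y][x] = block / 4.0  # box filter: mean of the four samples
    return out

# A hard edge in the high-resolution render becomes a smoothed gradient:
edge = [[0.0, 0.0, 1.0, 1.0],
        [0.0, 0.0, 1.0, 1.0],
        [0.0, 1.0, 1.0, 1.0],
        [0.0, 1.0, 1.0, 1.0]]
small = downsample_2x(edge)  # [[0.0, 1.0], [0.5, 1.0]]
```

The 0.5 value in the output is the anti-aliasing at work: four samples per output pixel soften edges that native 1080p rendering would leave jagged.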
The PC player in me can't help noticing that the already blurry line between consoles and PCs will get even thinner next cycle. Consoles have offered backward compatibility before, but often with qualifiers tied to your hardware revision and limited to a single previous platform. Microsoft won't just support Xbox One games on the Xbox Next; it will continue to support Xbox 360 and original Xbox titles as well, along with Xbox One accessories. This is exactly the kind of backward compatibility we've long taken for granted when upgrading from one PC to the next, and it's nice to see consoles catching up after a few decades.
The flip side, of course, is that the console-versus-PC debate gets harder to frame every generation. At this point, you might as well just ask "controller or keyboard?" (Keyboard, natch.) Functionally, at the hardware level, it's all PC gaming.