New rumors are circulating that Intel may seek Samsung's assistance at 14nm, although there are also reasons to doubt them. If true, it suggests that Santa Clara will remain stuck at 14nm for a significant amount of time on at least some product lines, notwithstanding recent discussion of Ice Lake.
According to SEDaily (via Google Translate), Intel and Samsung are in the final stages of negotiating for additional capacity. Intel is said to have chosen to work with Samsung rather than TSMC because of concerns over the improved competitiveness of Huawei and AMD. TSMC stated that it believed it could continue to manufacture chips for Huawei, and that stance allegedly prompted Intel to prefer Samsung as a partner, given the possibility that new retaliatory measures could be taken against companies that do business with Huawei.
I don't want to go so far as to say the report is wrong, but the timeline seems extremely compressed. Negotiations over foundry capacity between two large companies aren't hammered out in a weekend, and the US government's total blockade of Huawei is still quite new. In addition, punishing TSMC for believing it could continue to manufacture SoCs for Huawei would be, in some respects, excessive. Huawei faces enormous problems bringing its products to market for reasons that have nothing to do with its ability to source SoCs. Even with perfect foundry support, its manufacturing supply chain is existentially threatened, to say nothing of its access to software and support tools.
As for the idea that Intel would refuse to use the same foundry as its main competitor, AMD: it's plausible Intel would be sensitive to the optics of going, hat in hand, to the same company that supplies its rival. A partnership with Samsung – whose 14nm node is generally in excellent shape and has been used for AMD hardware at GlobalFoundries since GF licensed it years ago – is a little less direct.
The biggest reason to look askance at this rumor is that it suggests Intel would launch its upcoming 14nm competitor, "Rocket Lake," on Samsung silicon. In the past, Intel has (reportedly) signed agreements with TSMC to produce Atom processors or chipsets. Building big cores at a rival foundry would be a major change. That's one of the reasons I don't want to put much weight on this rumor, but there is a way to make sense of it.
One of the difficulties of standing up a new process in an existing plant is the disruption to ongoing production. If you want to replace 14nm capacity with 7nm capacity, you may have to take lines offline to perform the upgrades. Intel has historically run its production lines at high utilization, and we know that demand for 14nm has been extremely strong; just last year, Intel announced additional funding to boost 14nm production. At the same time, the long 10nm delay has clogged Intel's fabs. The company expects a relatively fast switchover to 7nm (with production scheduled for 2021), which means it needs a fairly rapid volume ramp at a time when 14nm demand may still be keeping its fabs busy.
If this rumor is true, it may be true in the sense that Intel has reached an agreement with Samsung to move certain products out of its own factories while it aggressively upgrades those factories. The company undoubtedly wants to restore the process-leadership narrative it enjoyed for the 20 years before its 10nm slip, and it might prefer to race to 7nm by leaning on a competitor's production capacity rather than going it alone.
SEDaily suggests another reason Intel and Samsung could strike this type of agreement: pricing. From the story:
Samsung's foundry recently announced that it had undercut TSMC's unit price by 60 percent for some companies. Samsung has offered a complete mask set less expensive than the "multi-layer mask" (MLM) approach introduced to reduce low-volume production costs. A mask is a kind of film used to print a circuit onto a wafer.
While the dramatic cost reductions we've heard about were at 7nm, it's quite possible that Samsung and Intel could reach a 14nm agreement as well. Samsung Foundry is probably hungry for customers, and building for Intel would be a prestigious win. Intel (again, assuming this rumor is accurate) would obviously want a good price for the products, and might find Samsung more acceptable than TSMC – or might simply be worried about more prosaic questions of parts availability.
At present, Intel has offered only limited guidance on its 10nm and 7nm roadmaps. The company has said that 10nm++ and 7nm will overlap in 2021 and that 7nm will debut with a GPU. Ice Lake notebook shipments are expected to begin in June, with volume shipments by the end of the year. No timeline has been given for desktop parts, and leaked roadmaps (which may not be accurate) indicate that 14nm will hang on, on the desktop, into 2020. With AMD launching 7nm in a few weeks, the pressure is on Intel.
Updated (18/06/2019): There is reason to believe that if such an agreement is concluded – and nothing has yet been publicly announced – it could cover the same type of adjacent products Intel has sometimes farmed out to partners before. That kind of allocation is the sort of maneuver we'd expect from Intel as it tries to maximize in-house manufacturing of its highest-margin parts with limited fab space.
AMD announced its new Radeon GPUs at E3 last night. The new Radeon RX 5700 and 5700 XT are positioned as answers to Nvidia's RTX 2060 and 2070, rather than a frontal assault on the RTX 2080. As previously announced, the Radeon VII will remain on the market against the RTX 2080.
Navi, however, looks like a significant step forward for AMD on several fronts. We'll have a deep architectural dive in the near future, but for now, let's look at the speeds, feeds, and competitive positioning. You can click any slide to open a larger version in a new window.
The Radeon 5700 XT is a 40 CU (compute unit) design, with 2,560 stream processors, 9.75 TFLOPS of floating-point performance, and clocks far higher than anything we've seen from AMD before. AMD's new RDNA architecture, which finally replaces GCN, is significantly more efficient than its predecessor, with a projected 1.25x increase in performance per clock. The TDP on the 5700 XT is 225W, compared with 295W on the Vega 64. Power is delivered via one 8-pin and one 6-pin connector.
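As a rough sanity check, a GPU's peak single-precision throughput is stream processors × 2 ops per clock (one fused multiply-add) × clock speed. A minimal sketch working backward from the figures above (the derived clock is our arithmetic, not an AMD-quoted number):

```python
# Peak FP32 throughput = stream processors * 2 ops/clock (FMA) * clock speed.
# Working backward from AMD's quoted 9.75 TFLOPS for the Radeon 5700 XT:
STREAM_PROCESSORS = 2560
PEAK_TFLOPS = 9.75

implied_clock_ghz = PEAK_TFLOPS * 1e12 / (STREAM_PROCESSORS * 2) / 1e9
print(f"Implied peak clock: {implied_clock_ghz:.2f} GHz")  # ~1.90 GHz
```

That lands right around 1.9GHz, consistent with the "clocks far higher than anything we've seen from AMD before" framing.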
The Radeon 5700 is a 36 CU design with 2,304 stream processors and the same 8GB RAM pool as the 5700 XT. As expected, there is no sign of HBM on these products. The 5700 carries a 180W TDP and the same 1x 8-pin + 1x 6-pin power configuration.
According to AMD, its new RDNA architecture is a major improvement over GCN, with substantial gains in performance per clock, raw clock speed, and performance per watt.
The 1.25x per-clock performance improvement does not include clock speed gains – it stacks on top of them. This is a good moment to discuss AMD's newly defined clock scheme, so let's talk about it.
The base clock on these cards is roughly what you'll see if you run a power-virus workload like FurMark. The "Game Clock" is a conservative estimate of the clock you'll see when running typical titles over long periods. According to AMD, this is not the GPU's median clock rate over time while gaming – it's actually set a bit lower than that, to allow for silicon and cooling variation from one system to another. AMD derived its Game Clock values by measuring average GPU clock speeds across 25 different games.
The Boost Clock is an opportunistic clock the GPU will try to hit when possible. Even this value doesn't represent the maximum potential speed (AMD describes it as "close to the maximum"). Thanks to improvements in the underlying architecture and overall design, GPU clocks are much higher than anything we've seen from AMD before. (We'll cover this in more detail in the coming days.)
According to AMD, these improvements give RDNA 1.5x better performance in power-limited environments than an equivalent GCN configuration. The shift to 7nm accounts for just over 20 percent of the total gain, with design and power improvements contributing roughly another 15-18 percent. Most of the improvement comes from the GPU core itself. One teaser for the deep dive: RDNA can issue instructions every cycle, compared with GCN, which takes at least four cycles. The net result of these enhancements is a significantly better GPU than the cards it replaces, dramatically improving performance while simultaneously reducing power consumption.
AMD expects the Radeon 5700 XT to deliver roughly 1.14x the performance of Vega 64 while consuming 23 percent less power. Rated TDPs on the 5700 XT and 5700 are still higher than their Nvidia counterparts, but TDP is not a substitute for measured power draw; we'll have to test the hardware to see how the GPUs compare. The performance-per-area improvement is substantial: Vega 64 was a 495mm² part, while Navi is 251mm². The RTX 2060 and 2070, by comparison, are 445mm².
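To put a number on that per-area gain, here is a quick back-of-the-envelope sketch using only the die sizes and the 1.14x performance figure above (Vega 64 is normalized to 1.0; the ratio is illustrative, not an AMD-published figure):

```python
# Relative performance per square millimeter, normalizing Vega 64 to 1.0.
vega64 = {"perf": 1.0, "area_mm2": 495}
navi_5700xt = {"perf": 1.14, "area_mm2": 251}

perf_per_mm2_vega = vega64["perf"] / vega64["area_mm2"]
perf_per_mm2_navi = navi_5700xt["perf"] / navi_5700xt["area_mm2"]
print(f"Navi delivers {perf_per_mm2_navi / perf_per_mm2_vega:.2f}x "
      "the performance per mm^2 of Vega 64")
```

In other words, Navi packs well over twice the performance into each square millimeter – which is where the 7nm shrink plus the RDNA redesign really shows up.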
These gains should put the 5700 XT slightly above the GTX 1080 and the RTX 2070 in overall performance. AMD also told us it has heard reviewers' feedback regarding reference cooler noise.
The 5700 and 5700 XT not only use blower fans that exhaust heat out of the system, but AMD also promises to cap them at 43 dBA. (It isn't clear whether that's the absolute maximum volume, or the maximum provided you don't manually set the fan to 100 percent.) This should address one of the recurring complaints about AMD's reference cards: that they are often much louder than the competition. New buyers will also receive a card redeemable for a three-month subscription to Microsoft's Xbox Game Pass for PC service.
We'll have much more to say about Navi and its underlying architecture in the days to come. The GPU appears to represent a significant step toward closing AMD's power-efficiency gap with Nvidia, and it seems to hit a higher performance-per-watt target than the Radeon VII.
Whether the pricing of both cards makes sense will depend on the performance they deliver. The Radeon RX 5700 XT will be available for $449, while the RX 5700 is $379. The RX 5700 is priced noticeably above current RTX 2060 cards, which sell for as little as $335, while the RX 5700 XT is a $450 card against the RTX 2070's $500. A $500 50th Anniversary edition of the card will also be available.
Microsoft announced the Xbox Next at E3 yesterday and, while we still don't know what the next-generation system will be called, we finally know what kind of specifications it will offer.
First of all, let's talk about availability. Microsoft said it plans to launch its next-generation console for the 2020 holiday season, implying a November-December window. That would roughly align with the Xbox One, which debuted on November 22, 2013. It also means the Xbox One will not have lasted as long as its predecessor, although it did get a major mid-cycle upgrade in the form of the Xbox One X. The Xbox 360 lasted eight years, from November 22, 2005 (that date again) until 2013; the Xbox One/X, by contrast, will have had a seven-year run. Microsoft's E3 reveal trailer for the platform is embedded below:
As for features, we know the console, like Sony's, will include an SSD, with storage performance claimed to be up to 40 times higher than current systems. That's not implausible for SSDs, which offer access times typically measured at 0.1ms or less, compared with the 10-12ms access times of hard drives.
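The arithmetic behind that claim is simple: a hard drive's seek-dominated access time dwarfs NAND latency. A minimal sketch using the typical figures above (illustrative numbers, not console specs):

```python
# Typical random-access latencies, in milliseconds (illustrative figures).
HDD_ACCESS_MS = 10.0   # mechanical seek + rotational delay
SSD_ACCESS_MS = 0.1    # NAND flash read latency

speedup = HDD_ACCESS_MS / SSD_ACCESS_MS
print(f"SSD random access is ~{speedup:.0f}x faster than HDD")
```

With raw access latency roughly 100x better, a ~40x real-world loading improvement is plausible once other bottlenecks (decompression, CPU work) eat into the gain.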
If all goes well, that means the inevitable Mass Effect remake will feature fewer pointless elevator rides. The other specs are about what you'd expect. Zen 2 and Navi are both confirmed, as is ray tracing (with no word yet, however, on which ray tracing capabilities or effects will be specifically supported).
Supposedly, the platform will support both 120fps output and 8K resolution, but we don't expect 8K to be a playable target resolution for traditional AAA titles. Technically the console may be capable of outputting at that resolution, but so are modern PCs – if you're playing a fairly old game with supersampling enabled. Similarly, 120fps gaming may be possible, but console games have historically targeted 30fps or, more recently, 60fps.
Would the Xbox Next support 120fps? Yes… but only by sacrificing more visual fidelity than most developers would likely choose to give up. That said, it would be interesting to see more developers experiment with this kind of tradeoff, or simply offer players the option of lower detail and higher frame rates. Maybe the message is less about the likelihood of wide support and more about providing flexible capabilities should developers choose to exploit them. Whatever the case, we don't expect 8K gaming to become standard.
Microsoft also seems to have learned from the legendarily terrible debut of the Xbox One: instead of spending its unveiling talking up everything else, Xbox chief Phil Spencer assured the audience that gaming performance would be the focus. "For us, the console is vital and central to our experience," said Spencer. "We heard you: a console should be designed, built, and optimized for one thing and one thing only: gaming."
There's a lot we still don't know about the console, including how it compares with the PS5 in speeds and feeds. Price is another major question that hasn't been answered. Both companies have surely made these decisions already, but both are staying vague to make sure they don't undercut sales of their current-generation hardware. Despite recent news of collaboration on cloud gaming, Microsoft and Sony remain fiercely determined competitors in the ongoing console war.
If all you did was read the top-line news out of shows like Computex, the PC market would seem to be doing fine. DRAM prices have fallen. New 7nm AMD and 10nm Intel processors are poised to drive fresh gains. But under the hood, semiconductor companies have taken a collective beating.
Global chip sales fell to $101.2 billion in the first quarter of 2019, down from $116.2 billion in the first quarter of 2018 – the largest year-over-year decline since the depths of the Great Recession, according to IHS Markit. Samsung, whose sales fell 34 percent from a year earlier, suffered the largest losses, concentrated in the NAND and DRAM markets; prices for both types of memory have fallen considerably. Memory chips, IHS noted, drove most of the decline: strip out their impact, and sales dropped only 4.4 percent. But the "catastrophic fall" was also driven by other factors, including weakening demand in major markets.
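For reference, the year-over-year drop implied by those two quarterly figures works out as follows:

```python
# IHS Markit figures, in billions of dollars.
q1_2018_bn = 116.2
q1_2019_bn = 101.2

decline_pct = (q1_2019_bn - q1_2018_bn) / q1_2018_bn * 100
print(f"Year-over-year change: {decline_pct:.1f}%")  # roughly -13%
```

That overall ~13 percent slide, against a 4.4 percent decline with memory excluded, shows just how much of the damage was concentrated in NAND and DRAM.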
It isn't hard to tell which companies are exposed to the NAND and DRAM markets from these charts. Intel reclaimed first place at the top of the market thanks to Samsung's massive decline. While everyone's revenue fell, Nvidia was the only other non-memory company to take such a beating, hit by excess RTX inventory, a sharp decline in data center sales, and an unflattering comparison against a quarter when cryptocurrency mining was still driving the market. Nvidia's year-over-year comparisons will improve in the second quarter and beyond, once we move past the period when it benefited from inflated cryptocurrency demand.
The global market for computing-application chips fell 16.7 percent in the first quarter of 2019. Interestingly, IHS claims that Nvidia's sales dropped partly because of direct graphics competition from AMD in the data center. This is the first time we've heard that claim. AMD hasn't said much about its sales volumes or data center performance, but its overall position in the data center and in the AI/ML markets it wants to penetrate has generally been considered fairly weak. We've looked into this topic before, prompted by questions about GPU AI and ML performance testing, and the truth is that it's hard to find people working with AMD cards. There are many reasons for that, including the relative maturity and compatibility of OpenCL versus CUDA.
For now, major industry players are still forecasting a recovery at the end of the year. We should find out whether that's true before official financial reports arrive, because leading economic indicators will begin to appear first. With worries about Brexit and the EU likely to come to a head around Halloween, and the US-China trade war grinding on, the semiconductor market's outlook for the rest of 2019 remains uncertain.
In fact, the semiconductor market is riddled with uncertainty even at a more granular level. Qualcomm has been declared a monopolist, and its entire revenue-generation model could be permanently changed as a result. Nvidia is trying to convince consumers to pay a significant premium for ray tracing support. AMD's new 7nm CPUs and GPUs are expected to bolster its efforts to regain market share. Questions swirl around Intel's 10nm efforts and its upcoming Ice Lake processors, as well as around the impact of the US sales ban on Huawei on the broader semiconductor market – and how China might react if one of its flagship companies is seriously damaged. How these trends and issues evolve will shape the rest of 2019.
The PCI-SIG has announced the completion of the PCIe 5.0 specification. In the history of PCI's development, this may be the first time a new standard has been completed before the previous iteration has even launched in the mainstream market.
"New data-intensive applications are driving demand for unprecedented levels of performance," said Al Yanes, PCI-SIG chairman of the board. "Completing the PCIe 5.0 specification in 18 months is a major achievement, driven by the commitment of our members who worked diligently to evolve PCIe technology to meet the industry's performance needs. The PCIe architecture will remain the de facto standard for high-performance I/O for the foreseeable future."
The rapid arrival of PCIe 5.0 is a consequence of PCIe 4.0's considerable delay. PCIe 1.0 arrived in 2003, followed by PCIe 2.0 in 2007 and PCIe 3.0 in 2010. (These are the dates the standards were finalized, not when motherboard hardware became available.) PCIe 4.0, by contrast, was not completed until 2017. That long delay means version 5.0 will probably be deployed fairly quickly. As always, PCIe 5.0 will remain backward compatible with previous versions of PCIe.
How the market will react to PCIe 5.0 arriving so quickly on the heels of version 4.0 is unclear. AMD, for example, clearly intends to adopt PCIe 4.0 for its third-generation Ryzen platform, but could theoretically plan a relatively fast migration to PCIe 5.0. Intel could likewise plan a quick jump straight to PCIe 5.0 – or both standards could coexist in the market, with PCIe 4.0 used for less-demanding applications while PCIe 5.0 serves maximum-performance parts. That said, we've never seen PCIe used as a differentiator in this way on PCs, and for good reason: until now, it has never mattered much. While PCIe lane availability can affect multi-GPU performance, GPUs have never scaled particularly well across PCIe generations, and I can't recall a single instance of a new GPU performing meaningfully better at launch on a new version of PCIe than on the immediately preceding standard.
The popularity of PCIe-based SSDs and the M.2 form factor, however, has changed things. Although the difference between an SSD and a hard disk remains larger than the step from a standard SATA SSD to an M.2 drive, the speed boost from PCIe 5.0 will genuinely give NAND and Optane room to stretch their collective legs. A PCIe 3.0 M.2 drive with an x4 connection offers up to 4GB/s of bandwidth in each direction. PCIe 4.0 doubles that to 8GB/s, and PCIe 5.0 doubles it again to 16GB/s.
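Those round numbers fall straight out of the per-lane signaling rates. A minimal sketch deriving the x4 bandwidth for each generation (PCIe 3.0, 4.0, and 5.0 all use 128b/130b encoding, so roughly 98.5 percent of raw bits carry data):

```python
# Per-lane raw signaling rate in GT/s for each PCIe generation.
GENERATIONS = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}

for gen, gt_s in GENERATIONS.items():
    # 128b/130b encoding: 128 of every 130 transferred bits are payload.
    lane_gb_s = gt_s * (128 / 130) / 8   # GB/s per lane, per direction
    print(f"{gen}: x4 link = {lane_gb_s * 4:.1f} GB/s per direction")
```

The outputs (~3.9, ~7.9, and ~15.8 GB/s) are what get rounded to the 4/8/16GB/s figures quoted above.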
Not only does this mean that even an x1 PCIe 5.0 link is far faster than it used to be; it also means x4 drives are approaching transfer rates we once associated with main memory. That doesn't mean NAND performs like RAM. Neither Optane nor NAND replaces DRAM in today's client machines, and their latency and performance characteristics are totally different. But the relatively rapid transition from PCIe 3.0 to PCIe 5.0 means peak storage performance could become considerably faster than it already is within a fairly short period of time.
The soonest we'd expect to see PCIe 5.0 adoption is 2020, and 2021 wouldn't be a crazy guess, depending on how quickly Intel and AMD embrace the standard.
One of the most interesting releases of the year isn't a modern game at all; it's a new look at a 22-year-old title. Quake II probably isn't a game many modern players even know, which adds to the interest in the work Nvidia has done to refresh this old game with a massive facelift.
According to Nvidia, the RTX-enabled version of Quake II will run on a GeForce RTX graphics card "or other capable hardware," which probably makes this release an Nvidia exclusive. Still, it's a good way to let people experience ray tracing, even on older GPUs. Virtually any Nvidia GPU that supports RTX ray tracing (and Nvidia has enabled the capability on its Pascal GPU family) should be able to handle Quake II, given the game's absolutely anemic requirements in every other category. We've already seen how games with modest GPU requirements can be transformed by such effects, via a recent mod that added path tracing to Minecraft.
Cylindrical projection mode for a wide-angle field of view on wide screens
The following is an explanation of how ray tracing works in Quake II. If you'd rather just see gameplay footage, Nvidia's launch video is also embedded below:
According to Nvidia, an RTX 2060 or better is recommended, but Nvidia has every incentive to upsell its own GPU line. Hopefully the company has made reasonably sure the game runs well on Pascal, even if it doesn't emphasize that in its own marketing.
The Quake II chaingun remains my favorite implementation of the weapon in any title, and I can't wait to see what Nvidia has done with one of the major FPS titles of all time. Quake II was id's first attempt to write a game with a real story, and I wouldn't recommend anyone start it expecting a thrilling, nuanced tale of derring-do; it will feel a little primitive by modern standards. But I suspect it will hold up remarkably well, and I can't wait to take it for a test drive.
This release will be limited to the game's first three levels unless the full version is installed on your system, in which case you'll be able to play the entire game. Nvidia will also release the source code for its mod, which should allow compatibility with other Quake II mods developed over the game's lifetime.