Intel may have launched Cascade Lake relatively recently, but another refresh of the 14 nm server platform is already on the horizon. Intel lifted the veil on Cooper Lake today, giving new details on how the processor fits into its product line, with the 10 nm Ice Lake server chips still slated for deployment in 2020.
Cooper Lake's features include support for Google's bfloat16 format. It will also support up to 56 processor cores in a socketed form factor, unlike Cascade Lake-AP, which can scale up to 56 cores but only in a soldered BGA configuration. The new socket is reportedly known as LGA4189. There are reports that these chips could offer up to 16 memory channels (since Cascade Lake-AP and Cooper Lake use multiple dies in the same package, Intel could run up to 16 memory channels per socket with a dual-die version).
Bfloat16 support is a major addition to Intel's artificial intelligence efforts. While 16-bit half-precision floating point numbers have been defined in the IEEE 754 standard for over 30 years, bfloat16 changes the balance between the bits used for significant digits and those used for the exponent. The original IEEE 754 half-precision format prioritizes precision, with only five bits of exponent. The new format allows a much larger range of values, but with less precision. This is particularly useful for AI and deep learning calculations, and it is a major step in Intel's effort to improve the performance of those workloads on its CPUs. Intel has released a white paper on bfloat16 if you are looking for more information on the subject. Google says that using bfloat16 instead of conventional half-precision floating point can deliver significant performance benefits. The company writes: "Some operations are memory-bandwidth-bound, which means the memory bandwidth determines the time spent in such operations. Storing inputs and outputs of memory-bandwidth-bound operations in the bfloat16 format reduces the amount of data that must be transferred, improving the speed of the operations."
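To make the trade-off concrete, here is a minimal Python sketch of the format (not Intel's or Google's implementation). Bfloat16 keeps float32's sign bit and all 8 exponent bits but only the top 7 of the 23 mantissa bits, so converting can be illustrated by simply zeroing the low 16 bits; real hardware typically rounds-to-nearest-even rather than truncating.

```python
import struct

def float32_to_bfloat16(x: float) -> float:
    """Truncate a float32 value to bfloat16 precision.

    Keeps the sign, all 8 exponent bits, and the top 7 mantissa bits,
    then re-expands to a Python float for easy inspection. Hardware
    usually rounds instead of truncating; truncation keeps this simple.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# Same dynamic range as float32, but coarser precision:
print(float32_to_bfloat16(3.14159265))  # 3.140625 (only 7 mantissa bits)
print(float32_to_bfloat16(1e38))        # still finite; IEEE half precision
                                        # overflows around 65504
```

This illustrates why the range/precision swap suits deep learning: gradients and activations span many orders of magnitude, while their low-order digits rarely matter.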
The other notable aspect of Cooper Lake is that the CPU will share a socket with the upcoming Ice Lake servers in 2020. A theoretically important distinction between the two families is that the 10 nm Ice Lake servers will not support bfloat16, while the 14 nm Cooper Lake servers will. This could be the result of increased differentiation within Intel's product lines, although it is also possible that it reflects the difficult development of 10 nm.
The introduction of 56 cores in a socketed form factor indicates that Intel expects Cooper Lake to reach more customers than Cascade Lake / Cascade Lake-AP targeted. It also raises questions about what kind of Ice Lake servers Intel will bring to market, and whether we will see 56-core versions of those chips as well. To date, all of Intel's 10 nm Ice Lake messaging has focused on servers or mobile devices. This may mirror the strategy Intel used for Broadwell, where desktop versions of the processor were scarce and server and mobile parts dominated the family. Intel later said, however, that not releasing desktop Broadwell was a mistake, and that the company had erred by skipping that market. Whether this means Intel still plans to launch a desktop Ice Lake, or whether the company has decided to skip the desktop again, is not yet clear.
Cooper Lake's focus on AI processing means it is not necessarily meant to go head-to-head with AMD's upcoming 7 nm Epyc. AMD has not talked much about AI or machine learning on its processors and, although its 7 nm chips add support for 256-bit AVX2 operations, the company's CPU division has not yet hinted that the AI market is a particular target. AMD's efforts in this area are still GPU-based and, although its CPUs will certainly run AI code, the company does not appear to be targeting that market to the same degree Intel is. Between adding new AI support to existing Xeons, its Movidius and Nervana products, projects like Loihi, and plans to address the data center market with Xe, Intel is trying to build a broad portfolio to protect its high-performance computing and high-end server business, and to challenge Nvidia's current dominance of the industry.
AMD has kept the details of its next family of Epyc products remarkably close to its chest. A recent leak (now removed) from the publicly accessible OpenBenchmarking database shows fierce competition between AMD's upcoming 7 nm Epyc processors and Intel's equivalent Xeon products. Intel CEO Bob Swan has said that AMD would offer increased competition in the second half of 2019, especially in data centers, so these numbers are not automatically surprising, unless, of course, you remember when AMD's share of the server market was essentially zero.
According to the text of the now-removed leak (picked up by THG before it was taken down), the AMD Epyc 7742 is a 64-core processor with 128 threads, 256 MB of L3 cache, a 225 W TDP, and 2.25 GHz / 3.4 GHz base/boost clocks, respectively. The already-launched Epyc 7601 is a 180 W, 32C/64T processor with 64 MB of L3 and nearly identical clocks of 2.2 GHz / 3.4 GHz. The Xeon Platinum 8280 comes in at 28C/56T, 2.7 GHz / 4 GHz, and 205 W, while the Xeon Gold 6138 (included for reference) comes in at 20C/40T, 2 GHz / 3.7 GHz, and 125 W.
If these rumors are correct, AMD has managed to double the core count and very slightly increase the clock in a TDP envelope only 1.25x larger. I'm not sure what the "RDY1001C" at the bottom of the results refers to, although that configuration is the fastest on the list. Googling the term turned up no results.
There are more tests at THG than we have reproduced here; check their article for the complete results. And, as always, treat all of these results with great caution. These are leaked results. Even if they are accurate, they may reflect engineering samples that are not representative of final performance.
SVT is a video encoder highly optimized for Intel processors, but the optimizations for Intel chips also work well on AMD processors, and we certainly see that here. Neither codec seems to scale particularly well with additional cores, so we won't attempt to make much of the dual-socket figures. A single 7742 is significantly faster than the Xeon Platinum 8280, and the 7742 is more than twice as fast as the 7601.
In HEVC, the performance picture changes. Here, Intel and AMD are roughly at parity, but the 7742 still represents a huge increase over the Epyc 7601.
POV-Ray 3.7 scales with increased thread counts, but the gain from a second processor is much smaller than the gain of the 7742 over the 7601. AMD averages only 24 percent more performance from adding 64 additional cores, compared with 42 percent scaling for the Xeon Platinum 8280. This difference in scaling means that a pair of Xeon 8280s is roughly equivalent to a pair of Epyc 7742s, even though a single Epyc 7742 is significantly faster than a single Xeon Platinum 8280.
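A quick back-of-the-envelope calculation shows how a worse 2S scaling factor can erase a single-socket lead. The 1.15 figure for a single Epyc 7742 relative to a single Xeon 8280 is an illustrative assumption, not a leaked number; only the 24 and 42 percent scaling figures come from the results above.

```python
# Hypothetical single-socket scores, normalized to one Xeon 8280 = 1.0.
xeon_1s = 1.00
epyc_1s = 1.15  # assumed lead for a single 7742, for illustration only

xeon_2s = xeon_1s * 1.42  # 42 percent gain from adding a second 8280
epyc_2s = epyc_1s * 1.24  # 24 percent gain from adding a second 7742

print(round(xeon_2s, 3))  # 1.42
print(round(epyc_2s, 3))  # 1.426 -- the dual-socket configs end up nearly tied
```

With those assumptions, the dual-socket systems land within half a percent of each other, which is the shape of the result described above.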
Blender, and rendering more generally, are tests in which AMD processors typically excel. AMD resolutely wins this test, although it is interesting to note that we also see signs of significantly better scaling for Intel's processors. This may simply reflect the fact that the Intel chips have far fewer cores; the Xeon Platinum 8280 is only a 28-core chip being compared against a 64-core part. This is a pretty important win for AMD. Of course, there is also the question of pricing and positioning. Intel has generally priced its Xeons well above AMD's Epyc processors, and price comparisons should be weighed alongside these other factors.
However, readers should be aware that there may be scaling issues with the AMD processors due to the sheer number of cores (128C/256T in a 2S configuration), while the Xeon Platinum parts offer only 56 cores in a 2S configuration. Applications themselves may simply not scale to these kinds of thread counts.
If these numbers are accurate, they suggest AMD's 7 nm Epyc processors will mount a major challenge to Intel in more markets, which is exactly what we expected based on previous third-generation Ryzen claims and AMD's statements about Epyc 2. Factor in Bob Swan's acknowledgment of increased competition in the market, and we anticipate a scenario in which Intel reduces its Xeon prices, either by cutting them directly or by launching Cooper Lake early (currently slated for the first half of 2020). Intel's processor prices have always been much higher than AMD's, but it's hard to know exactly how much higher, because the company's list prices (the best indicator we have to follow) do not reflect what volume customers actually pay.
If AMD's Rome is as good as it looks, we should see increased OEM adoption of the part relative to first-generation Epyc, as well as some reaction from Intel. It may take several product generations for server customers to switch to new providers, but expect them to take this into consideration.
In recent years, Intel has been talking up its Cascade Lake servers with DL Boost (also called VNNI, Vector Neural Net Instructions). These new capabilities are a subset of AVX-512 and are specifically intended to accelerate CPU performance in artificial intelligence applications. Historically, many AI applications have favored GPUs over CPUs: GPU architectures are much better suited to these highly parallel workloads. GPUs offer far more thread execution resources, and even today's many-core CPUs are dwarfed by the parallelism available in a high-end GPU.
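For a sense of what DL Boost actually fuses, here is a scalar Python emulation (an illustration, not Intel intrinsics) of one 32-bit lane of VNNI's VPDPBUSD instruction. It multiplies four unsigned-8/signed-8 pairs and accumulates them into a 32-bit value in a single instruction, work that pre-VNNI AVX-512 needed a three-instruction sequence to perform.

```python
def vpdpbusd_lane(acc: int, a_bytes, b_bytes) -> int:
    """Emulate one 32-bit lane of the AVX-512 VNNI VPDPBUSD instruction.

    a_bytes: four unsigned 8-bit values (e.g. quantized activations)
    b_bytes: four signed 8-bit values (e.g. quantized weights)
    acc:     running 32-bit accumulator

    The hardware performs this across 16 lanes of a 512-bit register
    in one instruction, which is where the inference speedup comes from.
    """
    assert all(0 <= a <= 255 for a in a_bytes)
    assert all(-128 <= b <= 127 for b in b_bytes)
    for a, b in zip(a_bytes, b_bytes):
        acc += a * b
    return acc

print(vpdpbusd_lane(0, [1, 2, 3, 4], [5, -6, 7, 8]))  # 5 - 12 + 21 + 32 = 46
```

Quantized neural-network inference is dominated by exactly these int8 dot products, which is why a dedicated fused instruction helps CPUs close some of the gap with GPUs.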
Anandtech compared the performance of Cascade Lake, the Epyc 7601 (soon to be outperformed by AMD's 7 nm Rome processors, but still AMD's top server part today), and an Nvidia Titan RTX. The article, written by the excellent Johan De Gelas, discusses different types of neural networks beyond the CNNs (convolutional neural networks) that are typically benchmarked, and explains how a key element of Intel's strategy is to compete with Nvidia in workloads where GPUs are not as strong or cannot yet meet emerging market needs: cases constrained by memory capacity (GPUs still cannot match CPUs here), "light" AI models that do not require long training periods, or AI models that rely on non-neural-network statistical methods.
Growing data center revenue is a critical part of Intel's overall strategy for artificial intelligence and machine learning, while Nvidia is keen to protect a market in which it currently has virtually no competition. Intel's AI strategy is broad and encompasses many products, from Movidius and Nervana to DL Boost on Xeon, to the upcoming Xe GPU line. Nvidia is working to show that GPUs can handle AI calculations in a wider range of workloads. Intel is incorporating new AI features into its existing products, building new hardware that it hopes will make an impact on the market, and attempting to create its first serious GPU to challenge the work AMD and Nvidia have done in the consumer market.
Anandtech's benchmarks show that, overall, the gap between Intel and Nvidia remains wide, even with DL Boost. This graph of a recurrent neural network test uses a Long Short-Term Memory (LSTM) network. A type of RNN, an LSTM "selectively remembers" patterns over a period of time. Anandtech also used three different configurations for the test: an out-of-the-box TensorFlow installed with conda, an Intel-optimized TensorFlow installed via PyPI, and a version of TensorFlow optimized from source using Bazel, built from the latest TensorFlow release.
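For readers unfamiliar with the structure being benchmarked, here is a minimal NumPy sketch of a single LSTM step (illustrative only, not Anandtech's TensorFlow setup; the weights here are random). The three sigmoid gates are what give the network its "selective memory": they decide what to write to, keep in, and expose from the cell state.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell.

    x: input vector; h: previous hidden state; c: previous cell state.
    W, U, b hold the parameters for all four gates stacked together.
    """
    n = h.shape[0]
    z = W @ x + U @ h + b       # all four gate pre-activations at once
    i = sigmoid(z[:n])          # input gate: what to write to memory
    f = sigmoid(z[n:2 * n])     # forget gate: what to keep from memory
    g = np.tanh(z[2 * n:3 * n]) # candidate values to write
    o = sigmoid(z[3 * n:])      # output gate: what memory to expose
    c_new = f * c + i * g       # selectively update the cell state
    h_new = o * np.tanh(c_new)  # selectively reveal it as output
    return h_new, c_new

rng = np.random.default_rng(0)
n, m = 4, 3  # hidden size, input size (arbitrary toy dimensions)
h, c = np.zeros(n), np.zeros(n)
W = rng.normal(size=(4 * n, m))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = lstm_step(rng.normal(size=m), h, c, W, U, b)
print(h.shape)  # (4,)
```

Each of those small matrix multiplies must run sequentially per timestep, which is part of why RNNs map less cleanly onto GPUs than CNNs do.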
This pair of images captures the relative scaling between the CPUs as well as the comparison against the Titan RTX. Out-of-the-box performance was quite poor on AMD, although it improved with the optimized code. Intel's performance shot up when the source-optimized version was tested, but even that build did not come close to matching the Titan RTX. De Gelas notes: "Secondly, we were pretty surprised that our Titan RTX is less than three times faster than our dual Xeon setup," and the full article explains how these comparisons were made.
DL Boost is not enough to close the gap between Intel and Nvidia, but in fairness, it probably was never supposed to. Intel's goal is to improve AI performance on Xeon enough to make running these workloads plausible on servers that will mainly be used for other purposes, or when building AI models that do not fit the constraints of a modern GPU. The company's long-term goal is to compete in the AI market with a range of hardware, not just Xeons. Since Xe is not quite ready yet, competing in the HPC space means competing with Xeon for the moment.
For those of you wondering about AMD: the company does not really talk about running artificial intelligence workloads on Epyc processors, and is instead focused on its ROCm initiative for running CUDA code via OpenCL. AMD does not say much about this side of its business, but Nvidia dominates the GPU market for AI and HPC applications. AMD and Intel both want a piece of that space; at the moment, both appear to be fighting for one.
Google's Duplex feature on Android phones can be handy if you do not have time to call and make a reservation yourself. A similar feature coming to select Android phones could be a literal lifesaver, however. Automated 911 calls will soon be able to contact emergency services and relay your exact location without you saying a word.
Although the new emergency calling feature is similar to Duplex, you do not initiate it the same way. Duplex is integrated into the Assistant, so you just ask it to call somewhere and everything happens in the background. The automated emergency call is part of the Google Phone app. After you dial emergency services, the phone displays three buttons for medical, police, and fire. Tap one of them, and the robot voice begins.
The unfailingly calm robotic voice tells the 911 operator that it is a robot calling on your behalf before relaying your location and the reason for your call (medical, fire, or police). Importantly, it is your Android device talking to the emergency operator, not a Google server as in the case of Duplex. Thus, all the information relayed to the operator stays between you and emergency services. You can see the script on screen as it is read, and the call remains open the whole time, so you can also talk to the operator if you are able.
Google says this feature could be very useful in cases where a person cannot talk to emergency operators because of injury or coercion. Many emergency call centers automatically receive location data from mobile calls, but the address relayed in the message can help confirm it and get help to you faster.
The bad news is that most Android users will not get this feature. It is coming to Pixel phones, of course; Google fully controls the software on those phones, and automated emergency calling is part of the Google Phone app. Google also says some other phones will receive the feature, but it has not listed them. Many device makers, such as Samsung, LG, and OnePlus, build their own phone apps that are separate from Google's. Presumably, Android One devices and Motorola's near-stock phones could get the feature in the future.
In recent weeks, FaceApp, the smartphone AI photo-editing tool, has become the source of a major data privacy controversy that appears to have been largely overblown. However, it highlights a clear and common problem regarding the rights we may give up with potentially any app we allow onto our devices.
On July 14, developer Joshua Nozzi tweeted an accusation (since deleted) claiming that FaceApp appeared to upload all the photos in a user's library, not just the photos a user selected for use with the app's services. He also pointed to the company's Russian ties, playing into common concerns about illicit Russian involvement in US data affairs. Within a few days, pseudonymous security researcher Elliot Alderson responded to 9to5Mac's coverage of Nozzi's accusation with contrary evidence. FaceApp also replied to 9to5Mac with a statement to similar effect. Here is the abridged version:
We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn't upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.
FaceApp performs most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.
Although the core R&D team is located in Russia, user data is not transferred to Russia.
Although 9to5Mac jumped the gun by publishing Nozzi's accusation before it was proven false, Chance Miller, the author of the article, raises an important point:
It's always wise to take a step back when apps like FaceApp go viral. While they are often popular and can provide humorous content, they can have unintended consequences and privacy implications.
Nozzi's false accusation seems more like an honest mistake than a malicious act, and Miller's point shows why: we are more prone to panic when surrounding circumstances paint a picture of danger. While we should always take the time to find evidence for our claims before publishing them, to avoid spreading unnecessary panic, it is not hard to see how a person could make this mistake when people are already on high alert for this kind of activity.
Although FaceApp was not scooping up anyone's entire photo library to build a massive database of US citizens for the Russian government (or whichever conspiracy theory you prefer), the incident highlights how easily we grant broad permissions the moment we download an app.
When an app requests access to your smartphone's data, it casts a wide net by necessity. Photo apps do not request the right only to save photos, or to access only the photos you explicitly choose; they request access to your entire photo library. You cannot grant access to the microphone and camera, or anything else, with granular permissions that let you control what the app can do with them. Furthermore, smartphones do not provide a simple way to see what apps are actually doing. Logs of any kind, or a way to monitor network activity, are not made available to the average user.
For this reason, most users have no way of knowing whether an app has betrayed their trust. Until we have better control over what our apps can and cannot access on our devices, we have to consider the worst-case scenario every time we download. Unless a person has the knowledge and willingness to regularly monitor app activity, as well as to read (and understand) the terms of service of every app in their entirety, that person cannot rule out the possibility of an app making malicious use of their data. After all, Facebook was just fined $5 billion for allowing the decidedly non-consensual leak of user data (not that the fine mattered much), and much of that happened through a person's mere association with a user who had downloaded the problematic app.
Although most commonly used apps do not end up in such controversial situations, data leaks happen frequently enough that we have to remember what we risk with every contribution of our personal information. Every permission granted, every photo uploaded, and every piece of information provided to an app, whether it identifies us directly or indirectly, gives a company new information about us, over which it often claims ownership through its terms of service. A company may or may not use the data it collects for unsavory purposes, but it grants itself that right through a process it knows almost everyone will ignore. Businesses need broad language in their legal agreements to protect themselves. Unfortunately, that legal necessity also creates a framework for exploiting users when a company releases an app for data collection purposes.
Granular permissions on smartphones are a step toward addressing this problem, but they will not stop apps from requesting broad permissions and requiring access as the price of admission. At this point, most of us know that we pay with our data when we do not pay with our dollars; the problem lies in knowing the exact cost. Most people probably would not mind FaceApp using their selfies to improve its quality of service, but they might feel differently if that data were used for some other purpose. Even if we do not hand over our entire photo libraries, and even if FaceApp deletes images after 48 hours, that still leaves plenty of time to take advantage of the data users volunteer. While the company appears to have no malicious intent, we do not know exactly what our data costs us because we do not know how it is used.
The same applies to almost every app we download. Without transparency, we pay a cost that is kept secret from us. With repeated actions across many apps, it becomes very difficult to trace the source of any eventual problems. FaceApp appears to work like any other app: requesting broad data permissions out of necessity and limiting its liability through a terms-of-service agreement. With every app, we have to ask ourselves whether the service provided is worth an unknown cost.
We are not far from the day when artificial intelligence will provide us with a paintbrush for reality. As the foundations we rely upon for their integrity begin to shift, many people are frightened of what will happen. But we have always lived in a world where our senses do not fully represent reality. New technologies will help us get closer to the truth by showing us where we cannot find it.
From a historical point of view, we have never successfully halted the progress of a technology, and we owe the level of safety and security we enjoy to that continued progression. While normal accidents occur, and the inconveniences of progress will probably never cease to exist, we make the problem worse when we try to fight the inevitable. Furthermore, reality has never been as clear and precise as we want to believe. We fight new technologies because we think they create uncertainty when, more accurately, they only highlight the uncertainty that has always existed and that we have preferred to ignore.
The dissolution of our reality, a fear provoked by artificial intelligence, is a mirage. For a long time, we have trusted what we see and hear throughout our lives, whether in the media or from people we know. But that reality is no reality at all, because reality has never been absolute. Our reality is a relative construction: it is what we agree upon based on information from our experience. By observing and sharing our observations, we can try to construct a picture of objective reality. Of course, that goal becomes much harder to reach when people lie, or use technology that makes convincing lies easier to produce. This seems to threaten the very stability of reality as we know it.
But our idea of reality is imperfect to begin with. It is built on human observation and conjecture. It is limited by the way our bodies perceive the world around us and by the brains that process the information we acquire. Although we can take in a lot, we can detect only a fragment of the electromagnetic spectrum, and even that is too much for our brains to process all at once. Like the healing brush in Photoshop, our brains fill the gaps in our vision with their best guess at what belongs there. You can test your blind spots to get a better idea of how this works, or just watch it in action by looking at an optical illusion like this:
This, among other cognitive processes, produces subjective versions of reality. You cannot perceive every aspect of a moment, and you certainly will not remember every detail. But on top of that, you do not even see everything you look at. Your brain constructs the missing parts, hides visual information (mostly while your eyes move), makes you hear the wrong sounds, and can mistake rubber limbs for your own. With only a limited view of a given moment, and with information that is not completely accurate, you are left with a subjective version of reality at best. Trusting collective human observation led us to believe that geese grew on trees for about 700 years. Human observations, conclusions, and beliefs are not objective reality. Even in the best case, we will sometimes come away with some extraordinary mistakes.
Everything you know and understand passes through your brain, and the brain does not render an accurate picture of reality. To make matters worse, we often misremember in numerous ways. Our view of the world is neither true nor close to it, so for a long time we have relied on others to help us figure out what is real. This can work well in many situations, but sometimes people will hold very different versions of the same events because of their past experiences. Either way, problems arise when subjective observations contradict one another and people cannot agree on what really happened. Technology has helped us improve upon this problem, even though we feared it during its initial introduction as well.
Over time, we have created tools to help us survive as a species. By developing new tools, we became able to disseminate information more easily and build trust. Video and audio recordings allowed us to bypass the brain's processing and capture an unembellished record of an event, at least from a single point of view. Still, a video camera fails to capture the full reality of a given moment.
For example, imagine that someone pulls a knife in a fight and feints a swipe to try to scare off an attacker, with no intention of causing real harm. Surveillance video without that context paints a different picture. To a law enforcement officer, the security footage will show assault with a deadly weapon. In the absence of other evidence, the officer has to err on the side of caution and make an arrest.
Whether such assumptions lead to fewer crimes or to more questionable arrests does not change the fact that a supposedly objective record of reality lacks information. We trust recordings as truth when they offer only part of the truth. When we trust video, audio, or anything else that cannot tell the whole story, we rely on a medium that lies by omission, by design, like any observer of reality.
Technology has its flaws, but that does not invalidate it. On the whole, we have benefited from advances that produce objective recordings of the world around us. Not all recordings require additional context, either. A video of a cute puppy might not please everyone, but most people will agree that they are looking at a puppy. Meanwhile, we have called the sky green and cannot agree on the color of a dress in a bad photo. As technology progresses and becomes accessible to more people, we are all beginning to learn when and how reality can be painted with a less precise brush than we thought.
This awareness causes fear because our system for understanding the world begins to collapse. We can no longer rely on the tools we once used to make sense of our world. We have to question the reliability of the things we have recorded, and that cuts against much of what we have learned, experienced, and integrated into our identities. As new technologies emerge and further weaken our ability to trust what is familiar, they create a fear that we tend to attribute to the technology rather than to ourselves. Phone calls are part of normal life now, but in the beginning they were considered an instrument of the devil.
Today, AI is running into similar problems. Deepfakes caused a panic when people began to understand how easily faces could be swapped with stunning accuracy, given enough quality videos and photos meeting specific requirements. Although these deepfakes have rarely fooled anyone, we all glimpsed a near future in which artificial intelligence would progress to the point where we could not tell the difference. That day came last month, when researchers at Stanford University, Princeton University, the Max Planck Institute for Informatics, and Adobe published a paper demonstrating an incredibly simple way to edit recorded video to change the spoken dialogue both visually and audibly, and it fooled the majority of people who saw the results. Take a look:
Visit the paper's abstract and you will find much of the text devoted to ethical considerations, a common practice these days. Artificial intelligence researchers cannot do good work without considering the possible applications of that work. This includes discussing cases of malicious use so that people understand how to use the technology for beneficial purposes and can prepare for the problems that may arise.
Ethical statements can fuel public panic because they indirectly act as a kind of vague science fiction in which our fearful imaginations must fill in the gaps. When experts lay out the problem, it is easy to think only of worst-case scenarios. Even when the benefits are taken into account, faster video editing and error correction can seem like slight advantages when the negatives include false information that people will have trouble identifying.
However, this technology will emerge regardless of any efforts to stop it. Our own history demonstrates, time and time again, that any effort to stop the progress of science will, at best, produce a short delay. We should not want to stop the people who understand and care about ethics from doing this work, because that leaves it to others to create the same technology in the shadows. What we cannot see seems less frightening for a moment, but we have no way to prepare for, understand, or guide these efforts when they are invisible.
Although technologies like the aforementioned text-based video editor will inevitably lead to both kinds of uses, and to more capable AI in the future, we are already subject to similar manipulations in everyday life. Doctored photos are nothing new, and manipulative editing (a technique taught in film school) shows how context can determine meaning. AI adds another tool to the box and increases distrust in media that has always been easy to manipulate. That is unpleasant to live through, but ultimately a good thing.
We trust our senses, and the recordings we watch, too much, and reminders of this help keep us from doing so. As Apple adds attention correction to video chats and Google offers a voice assistant that can make phone calls for you, we will have to remember that what we see and hear may not accurately represent reality. Life does not require precision to progress and prosper. Pretending that we can observe objective reality does more harm than accepting that we cannot. We do not know everything, our brains remain a mystery to science, and we will always make mistakes. Our problem is not with artificial intelligence, but rather with believing we know the full story when we only know a few details.
As we enter this new era, we should not fight the inevitable technology that continues to shine a light on our misplaced trust. AI keeps demonstrating, at a very fast pace, how fragile our view of reality is as a species. This kind of change hurts. We are realizing that we only imagined the stable ground we have walked on all our lives. We search for a new place of stability amid the uncertainty, and mistake the solution for the problem. We may not be ready for this change, but if we fight the inevitable, we never will be.
Artificial intelligence will continue to erode the false comfort we enjoy, and that can be scary, but it is also an opportunity. It gives us a choice: oppose something that scares us, or try to understand it and use it for the benefit of humanity.
Top image credit: Getty Images