As artificial intelligence continues to progress rapidly, we still have a long way to go in developing the sensors needed to translate the physical world into computer-readable data. Vision and sound are well ahead; our other senses have had little practical application in the digital world. That is starting to change with robots.
MIT recently created a new robot using GelSight sensors, which allow it to "see" the objects it touches and to build a 3D map of their texture to better understand them. The video below shows how GelSight technology, generally used for aerospace applications, can "see" what it touches.
GelSight certainly offers an impressive and detailed way of translating the real world into digital information. But that alone does not make a smart robot; it makes very informative fingers that still require intelligence to control them. Aware of this potential, MIT built a robot around an artificial intelligence model that trains on the objects it touches, using the detailed three-dimensional maps generated by its GelSight sensors. While the robot does not see what it touches in a traditional optical sense, it receives so much data through its sensors that it can translate that data into visual information and learn from it just like an ordinary image-oriented convolutional neural network (CNN) learns from photographs.
The MIT robot was trained on 12,000 video recordings of 200 household objects, captured with GelSight sensors and broken down into still image frames. Combined with the tactile data, this allows the robot to understand the materials its sensors touch. In a conversation with Engadget, the CSAIL Ph.D. student who is the project's lead author explained what their system can now achieve:
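As a rough illustration of the preprocessing step described above, breaking labeled video clips into individual still frames for training, here is a minimal sketch. This is not MIT's actual code; the `clips` structure and the labels are invented for illustration:

```python
# Hypothetical sketch: flatten labeled tactile video clips into
# (frame, material) training pairs, as described in the article.
# In the real system each frame would be a 2D GelSight tactile image;
# here any stand-in value works.

def frames_to_training_pairs(clips):
    """Turn (material_label, list_of_frames) clips into (frame, label) pairs.

    `clips` stands in for the 12,000 GelSight recordings of 200
    household objects mentioned above.
    """
    pairs = []
    for material, frames in clips:
        for frame in frames:
            # every still frame inherits its clip's material label
            pairs.append((frame, material))
    return pairs
```

A CNN could then be trained on these (frame, label) pairs exactly as it would be on ordinary photographs.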
"Looking at the scene, our model can imagine the sensation of touching a flat surface or a sharp edge, and by touching blindly, our model can predict interaction with the environment from tactile feelings alone. Bringing these two senses together could give more power to the robot and reduce the data we might need for tasks involving manipulating and grasping objects."
The MIT system, still in its infancy, works, and that is thanks to the team's approach. Many artificial intelligence researchers and developers tend to create models based on how the human brain functions, but this often makes no sense. In some cases, we want AI to work like a human being because its goal is either to pass for one of us or to help us learn more about ourselves by simulating human processes. In most other cases, however, approaching AI development through a human framework negates the many non-human advantages that software and hardware have to offer.
MIT instead chose a sensor far more precise and capable than anything human touch can approximate, and made the most of the computing power available to AI. By making choices that exploit the strengths of computers rather than forcing human limitations onto them, the team created a robot that can surpass humans at blind identification tasks. In specific cases, it already does.
Although this may not seem like the most important problem to solve, touch actually plays an important role in robotics. Niche applications might benefit from a robot's ability to feel the difference between cotton and nylon, but broader applications have much more to gain. To a robot without touch, every object feels the same. It may be able to understand some things visually, but that is rarely enough.
Think about how you would get through your day if everything you touched felt the same, or, more accurately, felt like nothing at all. You would not know how much force to use when plugging in a cable. You would not be able to tell the practical difference between a printed image of sandpaper and the sandpaper itself.
By giving the robot a sense of touch and the ability to learn from it, this approach lets the robot better judge the materials it touches. It can learn faster and more accurately than it could from standard visuals alone. It can then use this information to adjust its actions based on the materials it handles, or at least that is the ideal goal for the future. If robots can understand touch, they are less likely to cause unintentional damage. Right now, if you asked most intelligent robots to carry a water balloon, they would not know how to hold it without destroying it. The sense of touch gives robots the ability, through a well-trained AI model, to know how to handle different types of objects and to act accordingly.
Although MIT has only created a smarter robot component, it is still a step in the right direction. A robot designed to understand and incorporate the data it acquires through touch has far better implications for general safety than one without it. That is how you build safeguards against potential accidents.
While most cities are preparing to put autonomous cars on their roads, Amsterdam is more interested in autonomous boats. Roughly a quarter of the Dutch city's surface is water, thanks to its impressive network of canals, and it has visions of robotic boats sailing those waterways. The city is working with MIT to make this vision a reality with the cleverly named "Roboats."
The idea is to equip small rectangular vessels with sensors, propellers, microcontrollers, GPS, cameras and other equipment, and send them out to transport goods on demand. Designers also plan to connect several boats together to form temporary bridges, performance stages and even floating markets. The key to all of this is designing robots that can reliably navigate and latch onto one another while afloat.
Roboats have lidar and cameras to help them navigate complex environments, much like autonomous cars. The boats also carry augmented-reality markers called AprilTags, which look like simplified QR codes. Other boats can see these tags and use them to orient themselves when docking.
Each Roboat has a latching mechanism with ball-and-socket components on its front, back and sides. Mooring two robots together is harder when they are floating on water that moves unpredictably, so the sockets have cone-shaped guides that help align the robots once they are within a few inches of each other. Inside each cone, a laser sensor registers when another Roboat's latch arrives, triggering a mechanism that locks the boats together until they receive the signal to separate.
Canals usually have gentle waves that should not push the robots around too hard, but the MIT researchers are working on ways to compensate for larger disturbances, such as the wake of other boats. If two Roboats close the distance but the laser sensor does not register a successful connection, the robots can back off and try docking again.
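The back-off-and-retry behavior described above can be sketched as a simple control loop. This is a hypothetical illustration, assuming a boolean laser-sensor reading and an invented `max_attempts` limit; the real Roboat controller has not been published in this form:

```python
# Hypothetical sketch of the Roboat dock-and-retry loop. The sensor
# interface and retry limit are invented for illustration.

def attempt_docking(read_laser, max_attempts=3):
    """Try to latch onto another Roboat, backing off and retrying on failure.

    `read_laser` is a callable returning True when the laser sensor inside
    the cone-shaped guide registers the other boat's latch. Returns the
    attempt number that succeeded, or None if docking never succeeded.
    """
    for attempt in range(1, max_attempts + 1):
        if read_laser():
            # latch detected: the locking mechanism would engage here
            return attempt
        # latch not detected: back off and line up for another pass
    return None
```

In practice the "back off" step would command the propellers to reposition the boat before the next pass; here it is only a comment.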
The current Roboats are too small to be used for transport, but they are only prototypes. The team is working on Roboats four times larger, offering more stability and power. A new mooring mechanism with tentacle-like rubber grippers is also in the works; it could make the connection between boats strong enough to carry heavy payloads. The team also wants to replace the printed AprilTags with an LCD screen whose displayed tag could be changed to adjust assembly orders.
Modern artificial intelligence uses complex algorithms to perform all kinds of tasks in an instant, for example determining a customer's sentiment from a review or identifying specific features of an image. However, the brightest moments of artificial intelligence come from the creativity with which we use these algorithms. People have used AI to generate new sports and to transform scribbles into realistic landscapes, and now MIT has found a way to detect breast cancer up to five years in advance using a deep image-classification model.
The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) used mammograms and known outcomes from more than 60,000 patients to train their new model on visual details too subtle for the human eye. Well-trained doctors miss these predictive patterns not because they are too small to be seen, but because such subtle patterns simply do not attract enough attention. An image-classification model that can categorize thousands of scans down to the smallest detail makes quick work of this daunting task.
Regina Barzilay, a professor at MIT (and breast cancer survivor), explains how this new model can improve treatment plans:
Rather than taking a single approach, we can personalize screening around a woman's cancer risk. For example, a doctor might recommend that one group of women have a mammogram every two years, while another, higher-risk group could undergo supplemental MRI screening.
When doctors can order mammograms according to a patient's needs, they can avoid unnecessary radiation exposure and the cost of potentially unneeded exams. Where existing models accurately place 18% of future patients in the high-risk category, this new model raises that number to 31%. Its success stems from the team's approach to its development. For the first time, a breast cancer risk model targets women individually. It also accounts for racial diversity, where earlier models focused mainly on white populations. This not only improves accuracy but could significantly reduce the breast cancer mortality rate among African-American women.
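To make the personalized-screening idea concrete, here is a purely illustrative sketch of how a risk score from such a model might map to a screening plan. The threshold and the plans are invented for illustration and are not clinical guidance:

```python
# Hypothetical sketch of risk-stratified screening as described above.
# The 0.7 threshold and the plan strings are invented, not medical advice.

def screening_plan(risk_score, high_risk_threshold=0.7):
    """Map a model's risk score (0..1) to a hypothetical screening plan."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be between 0 and 1")
    if risk_score >= high_risk_threshold:
        # higher-risk group: add supplemental imaging
        return "annual mammogram plus supplemental MRI"
    return "mammogram every two years"
```

The point is only that a continuous per-patient score lets screening intensity vary per patient instead of applying one schedule to everyone.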
As MIT and MGH have demonstrated, well-trained image-classification models can help doctors save lives. Although no AI gives perfect results, image-classification algorithms have matured and become reliable in many applications, especially in targeted models such as this one. You need little more than a good idea, relevant data and some time to create a successful image-recognition model. Services like Clarifai, Microsoft Azure, IBM Watson, Vize and others offer free, customizable training platforms that require no programming skills. With these algorithms accessible to all, we have the resources needed to train AI to solve problems and help others. It takes time and care to safely integrate a successful experiment into the practice of diagnostic medicine, and this approach will likely see many revisions as it expands beyond a single hospital. But the first results are promising.
The robot can differentiate paper, metal and plastic with 85% accuracy simply by touching them.
The US government's battle against Chinese companies such as Huawei and ZTE has persuaded MIT to suspend its collaboration with both. In a letter to professors and researchers, Maria T. Zuber, MIT's vice president for research, explained to her colleagues that the university had put in place a new set of procedures for what she called "high-risk" international proposals.
The university has, according to Zuber, "determined that engagements with certain countries – currently China, Russia and Saudi Arabia – merit additional faculty and administrative review beyond the usual evaluations that all international projects receive." As part of this new review process, MIT will not accept any new engagements with Huawei, ZTE or their subsidiaries. "The Institute," she states, "will revisit collaborations with these entities depending on the circumstances."
Such projects will be examined further to ensure that they meet the required security standards.
The Trump administration has pursued very different cases against the two companies. ZTE has really made headlines only once, when the US Department of Commerce banned US companies from selling products to it, accusing it of repeatedly and deliberately violating US sanctions against Iran. President Trump later took steps to reverse that penalty, as the full weight of the sanction would have driven ZTE into bankruptcy. Since the reversal, we have not heard much from the company.
Huawei, on the other hand, has been an ongoing topic of discussion. The United Kingdom recently released a report saying it could not guarantee that Huawei equipment posed no security risk, citing weaknesses in the company's source code and security practices. The United States has maintained that Huawei and ZTE pose a security threat, but it has not provided specific details. In August, President Trump signed an order banning the use of Huawei and ZTE equipment in US government networks and by its contractors. After deliberation, the EU decided not to follow the United States' lead. EU countries will need to share data on 5G cybersecurity risks and take steps to mitigate them by the end of the year, but Huawei equipment will not be specifically banned.
The Massachusetts Institute of Technology (MIT) has decided to end its existing relationships with the two Chinese technology giants and to suspend future ones.
The university's decision comes in light of ongoing federal investigations into the companies for sanctions violations. MIT's vice president for research and associate provost made the announcement in a letter on Wednesday.
"On the basis of this in-depth review, MIT is not accepting new engagements or renewing existing ones with Huawei and ZTE or their respective subsidiaries due to federal investigations regarding violations of sanctions restrictions," the letter said. "The Institute will revisit collaborations with these entities depending on the circumstances."
In addition to ending its relationships with Huawei and ZTE, MIT will subject certain proposals deemed "high-risk" to a special administrative review. These include "projects funded by individuals or entities from China (including Hong Kong), Russia and Saudi Arabia". The new requirements also cover work involving MIT faculty or students in those countries, as well as projects involving individuals or organizations from them.
MIT is not the first university to break ties with Chinese technology companies. Stanford University, the University of California, Berkeley, and the University of Minnesota have all reportedly suspended future research with Huawei.
Beyond Trump's trade war, US intelligence officials have repeatedly accused Huawei and ZTE of spying for the Chinese government. These national security concerns led President Trump to ban products from both China-based companies from government networks.
US authorities even conducted a sting operation against Huawei at this year's CES after the company allegedly attempted to steal technology from an American company. That same month, the United States also indicted Huawei for trying to steal trade secrets from T-Mobile.
On top of all this, as mentioned earlier, both Huawei and ZTE have been the subject of federal sanctions-violation investigations.
As part of a plea agreement, ZTE faced a seven-year ban on buying from US suppliers after being accused of violating sanctions against Iran and North Korea. Interestingly, Trump stepped in to end the ban after the company said the ban would destroy its business.
The relationship between the US government and these Chinese technology companies does not appear likely to improve anytime soon. In response to the ban on its products in the United States, Huawei recently announced its intention to take legal action against the US government.
Do not be surprised if more universities join MIT and the other schools that have decided to stop working with Huawei, ZTE and other China-based companies.