The June oven is part of a new wave of kitchen gadgets promising to combine Silicon Valley's modern technology with cutting-edge design. On paper, these products promise a new generation of simple, effective interactions. In reality, they often come with fine print. In June's case, the fine print may include a tendency to turn itself on and preheat overnight.
Several June owners have complained about what happened while they slept, according to The Verge. One owner with a Nest camera pointed at the oven caught the moment the unit turned on at 4 am and heated up to 400 degrees. Two other owners posted accounts of similar incidents: one had left already-cooked food in the oven and woke up to find it burned.
According to June CEO Matt Van Horn, these problems can all be blamed directly on user error. "We've seen a few cases where customers have accidentally triggered a preheat of their oven via a device, think of your mobile phone," he told The Verge.
So, imagine if I were in the June app tapping through recipes and I accidentally tapped something that preheated my oven; we've seen a few cases of that. It's a really wonderful feature, being able to preheat your oven remotely, and it's a completely new world that's very exciting, and things happen ... People have always joked about butt-dialing, as in, I didn't mean to call you. So these are just the kinds of software safeguards we have to be conscious of building in order to delight our customers.
June has a problem here, whether the company wants to acknowledge it or not. Obviously, it matters if the company's oven has a flaw that turns it on and preheats it without anyone commanding it to. But it matters just as much if customers are doing this inadvertently without realizing it. Unattended cooking accounts for a significant percentage of all house fires.
Until now, an oven was a device you turned on while standing in front of it. While it has always been a good idea to keep flammable objects away from an oven, every one of us has, at one time or another, left something flammable near a stove. You've probably even done it deliberately, especially if you were in a sudden rush or short on counter space for food prep. The rule for managing the risk of an oven fire has been: check whether the oven is on before putting flammable objects near it.
An oven that can be turned on remotely presents a different risk than an oven that can't. June can take (and perhaps already has taken) many steps to reduce the potential threat, including building an oven whose exterior isn't too prone to hot spots. At the same time, though, it's an oven; by definition it will have hot spots. A human standing in front of the oven will automatically clear away any debris that has accumulated around it. The oven doesn't "know" it needs to perform this function. And people can die when computers are wrong about what they know. Autonomous vehicles drive into stationary objects. Planes fly into the ground, resisting their pilots' every effort to pull back toward the sky.
One important distinction between the various autonomous-vehicle failures and the 737 Max's MCAS, of course, is that the June oven presumably has nothing like their degree of integrated sensing capability. But that matters less than it might seem. What Matt Van Horn calls "user error," I would call something else: bad app design. And since June develops both its app and its oven, responsibility for the issue lands in the same place.
If the problem is that end users are mistakenly triggering the "Preheat" function in the app, the app needs to be redesigned to make it much harder to preheat the oven without realizing it. It should not be possible to inadvertently turn the oven on while browsing the app's cookbook. June will ship an app update in September that will let customers disable the remote preheat feature if they wish. Next year, the June oven will be updated to recognize the presence of food in the device and to shut off after a set period if the user doesn't signal that the oven should stay on.
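None of this requires exotic engineering. As a rough illustration of the kind of guardrails described above, an explicit confirmation step plus an automatic shutoff timer, here is a minimal sketch (all names and thresholds are invented for illustration; this is not June's actual app or firmware logic):

```python
# Hypothetical guardrails for a remote "preheat" command; invented for
# illustration, not June's actual app or firmware logic.
AUTO_SHUTOFF_SECONDS = 30 * 60  # shut off if no one signals food is inside

class RemoteOven:
    def __init__(self):
        self.preheating = False
        self.pending_temp = None
        self.started_at = None

    def request_preheat(self, temp_f: int):
        # A tap in the app only *stages* the request; nothing heats yet,
        # so browsing recipes can't accidentally turn the oven on.
        self.pending_temp = temp_f

    def confirm_preheat(self, now: float):
        # A second, explicit step is required before any heat is applied.
        if self.pending_temp is None:
            raise RuntimeError("no staged preheat request to confirm")
        self.preheating = True
        self.started_at = now
        self.pending_temp = None

    def tick(self, now: float):
        # Failsafe: without a user signal, preheating times out on its own.
        if self.preheating and now - self.started_at > AUTO_SHUTOFF_SECONDS:
            self.preheating = False

oven = RemoteOven()
oven.request_preheat(400)
oven.confirm_preheat(now=0.0)
oven.tick(now=31 * 60)   # half an hour later, with no one home...
print(oven.preheating)   # False: the oven has shut itself off
```

The design choice matters more than the code: the dangerous action requires two deliberate steps, and the dangerous state cannot persist indefinitely without a human in the loop.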
The point of comparing June's oven situation with autonomous cars or the 737 Max is not to claim they're equivalent. It's to emphasize that building new capabilities into products requires manufacturers to think about how people will actually use them. A product that changes common assumptions about how a device operates has to be especially careful to guard against any risk of harm the change creates. Adding a little intelligence to a washer or dryer doesn't much increase the risk of damage, but anything that generates enough heat to potentially start a fire has to be handled with care. June's growing pains are a good example of how companies and consumers alike will have to adjust their perceptions of products that change the "defaults" people are used to.
The June doesn't appear to be a top-rated product in the first place: it's a $600 toaster oven, and the Wirecutter found its cooking fell short of the Cuisinart TOB-260N1. As added bonuses, the Cuisinart has no Wi-Fi access, no built-in camera, and no recipe app costing around $50 a year.
Update: June PR reached out with the following statement: "The safety of our product is June's number one priority, and the company took a number of precautions when building the June Oven. We worked directly and quickly with the few June owners who experienced accidental preheats. These cases are certainly troubling, and we have a team of engineers working to make sure they do not happen in the future. We have had ovens deployed in the market for 4 years and have a large community of enthusiasts. The best-case scenario is listening to customers in real time, as June does, to resolve issues."
The company also said it has made various changes to its iOS app to date. The default tab is now the "Cookbook" tab instead of the "Oven" tab, and various shortcuts related to preheating have been removed. In September, the app will be updated to let users disable the remote preheat function if they wish. The oven will also stop preheating after 30 minutes if it detects no food inside, as previously mentioned.
Throughout the history of modern computing, passwords have been the primary method of securing data. Passwords have plenty of problems, but things are slowly changing with biometrics, hardware security keys, and more. Google is now leveraging several new technologies to make one of its sites password-free, but only for Android users.
Google says it has automated safeguards that keep unauthorized people out of a user's account, but no password-based system is perfect. You'll never convince everyone to use strong passwords, and some of those who do will end up writing them on Post-it notes. Now, for the first time, you don't need a password to access your Google account data. For the moment, though, that's only true for one service and for certain Android phones.
Starting today, you can visit Google's password manager site on your smartphone and sign in with a tap. The password manager site gives you access to all the account credentials saved in Chrome and Android autofill, so it's a trove of valuable data that could potentially let an attacker compromise many of a victim's accounts. Instead of entering a password to log in, you use your phone's secure unlock method, such as your fingerprint. Tap the sensor to verify your identity, and you're in.
Google doesn't have your fingerprint data on its servers; it stays local on your phone. That's also a fundamental part of the design of FIDO2, which Google and others have championed. Google stores FIDO platform credentials on your phone, and these are used to verify your identity much like a hardware security key. When you visit the Google password manager, the site issues a WebAuthn "get" call to retrieve the stored credentials, which works as a FIDO2 signature to verify your identity.
Currently, the feature only works on the aforementioned Google password manager site, and you'll need a Pixel phone. The feature will eventually roll out to all Android phones running version 7 (Nougat) or higher. Since it hooks into Android's secure unlock framework, it should automatically work with future secure unlock methods, such as the advanced face unlock feature coming to the Pixel 4. Current Android phones with face unlock won't qualify, because their face unlock is not considered a secure unlock method for the purposes of Google's new sign-in feature.
Normal GSM calls aren't end-to-end encrypted for maximum protection, but they are encrypted at many steps along their path, so random people can't tune in to phone calls over the air the way they can radio stations. The researchers found, however, that they could target the encryption algorithms used to protect calls and eavesdrop on virtually anything.
"GSM is a well-documented and analyzed standard, but it's an aging standard, and it's had a pretty typical cybersecurity journey," says Campbell Murray, global head of delivery for BlackBerry Cybersecurity. "The weaknesses we found are present in any GSM implementation up to 5G. Regardless of which GSM implementation you're using, there is a historically created flaw that you're exposing."
The problem lies in the encryption-key exchange that establishes a secure connection between a phone and a nearby cell tower every time you place a call. This exchange gives both your device and the tower the keys to unlock the data that is about to be encrypted. Analyzing this interaction, the researchers realized that the way the GSM documentation is written leaves flaws in the error-control mechanisms governing how the keys are encoded. That makes the keys vulnerable to cracking.
"It's a great example of the intent to create security, but the security engineering process behind that implementation failed."
Campbell Murray, BlackBerry
As a result, a hacker could set up equipment to intercept call connections in a given area, capture the key exchanges between phones and cellular base stations, digitally record the calls in their unintelligible encrypted form, crack the keys, and then use them to decrypt the calls. The findings analyze two proprietary GSM cryptographic algorithms widely used for call encryption, A5/1 and A5/3. The researchers found that they could crack the keys in most implementations of A5/1 in about an hour. For A5/3, the attack is theoretically possible, but cracking the keys would take many years.
"We spent a lot of time reviewing the standards, reading the implementations, and reverse-engineering the key exchange process," Murray says. "You can see how people thought it was a good solution. It's a great example of the intent to create security, but the security engineering process behind that implementation failed."
The researchers point out that since GSM is an old and thoroughly analyzed standard, other known attacks already exist that are easier to pull off in practice, such as using malicious base stations, often called stingrays, to intercept calls or track a phone's location. And additional research on the A5 family of ciphers over the years has surfaced other weaknesses. There are also ways to configure the key-exchange encryption that would make it harder for attackers to crack the keys. But Murray says the theoretical risk remains.
Short of a complete overhaul of the GSM encryption scheme, which seems unlikely, the documentation for implementing A5/1 and A5/3 could be revised to make key-cracking interception attacks far less practicable. The researchers say they are in the early stages of discussions with the GSMA standards body.
The trade association said in a statement to WIRED: "The details have not been shared with the GSMA through our coordinated vulnerability programme. When the technical details become known to the GSMA's fraud and security group, we will be better placed to consider the implications and any mitigating actions."
While it may not be surprising that GSM poses security challenges, it remains the cellular protocol used by the vast majority of the world. And as long as that's the case, the privacy concerns around calls remain real.
Even as he was installing the setup, he had his doubts. As a security-conscious software engineer (for a company he declined to name), Jmaxxz wondered what kind of remote hacking he might have exposed his girlfriend's car to. "In the back of my mind, I kept thinking, what's the risk of this system? I've put her car on the internet," he recalls. "I thought, ignorance is bliss. I won't look at it. Don't look at it."
But Jmaxxz did look. And within 24 hours, in January of this year, he found exactly what he feared: vulnerabilities that would let any hacker fully hijack the remote unlock and ignition device, offering a handy tool for stealing tens of thousands of vehicles. "You can geolocate cars, identify them, unlock them, start them, set off the alarm," he says. "Really, everything a legitimate user can do, you can do."
"The problem is that these bugs were shipped in the first place."
Jmaxxz, engineer and hacker
In a talk today at the DefCon hacker conference in Las Vegas, Jmaxxz described a series of vulnerabilities in MyCar, a system made by the Canadian company Automobility, whose software is rebranded and distributed under names like MyCar Kia, Visions MyCar, Carlink, and Linkr-LT1. MyCar devices and apps connect to radio-based remote start devices like those from Fortin, CodeAlarm, and Flashlogic, using GPS and a cellular connection to extend their range anywhere with an internet connection. But through any one of three security flaws present in the apps, which Jmaxxz says he reported to the company and which have since been fixed, he says he could have gained access to MyCar's backend database, allowing him, or a less friendly hacker, to locate and steal any car connected to the MyCar app, anywhere in the world.
Based on an analysis of the exposed MyCar database, and Jmaxxz says he was careful not to access anyone else's private data, he estimates that about 60,000 cars were left open by these security flaws, with enough data exposed that a hacker could even pick the make and model of the car they wanted to steal. "Want a new Cadillac? You can find one," he adds.
When Jmaxxz began exploring the internals of Automobility's apps in January, he first found that they included hard-coded admin credentials, which he could extract and use to gain access to the company's backend data. Beyond that, Jmaxxz describes two other common classes of hackable flaws, widespread SQL injection bugs and direct object reference vulnerabilities, that would have let him access the same data and even send commands to other users' vehicles.
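Jmaxxz hasn't published Automobility's code, but SQL injection as a class of bug is easy to illustrate. The sketch below uses a hypothetical vehicle table and values, not MyCar's actual schema, to show how a string-spliced query leaks every row while a parameterized query treats attacker input as inert data:

```python
import sqlite3

# Hypothetical schema standing in for a telematics backend; not MyCar's code.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vehicles (owner TEXT, vin TEXT)")
db.execute("INSERT INTO vehicles VALUES ('alice', '1HGCM82633A004352')")
db.execute("INSERT INTO vehicles VALUES ('bob',   '5YJ3E1EA7KF317000')")

def lookup_unsafe(owner: str):
    # Vulnerable: user input is spliced directly into the SQL string.
    query = f"SELECT vin FROM vehicles WHERE owner = '{owner}'"
    return [row[0] for row in db.execute(query)]

def lookup_safe(owner: str):
    # Fixed: a parameterized query treats the input as data, never as SQL.
    return [row[0] for row in db.execute(
        "SELECT vin FROM vehicles WHERE owner = ?", (owner,))]

# An attacker-controlled "owner" value that dumps every row:
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # every VIN in the table leaks
print(lookup_safe(payload))    # [] -- no owner is literally named that
```

The fix is mechanical, which is part of Jmaxxz's point: bugs of this class should never survive even basic security testing.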
Jmaxxz says he warned Automobility and the US Computer Emergency Response Team about the vulnerabilities in February of this year, and they were fixed over the following months. But he says he has continued to find and report lingering SQL injection vulnerabilities in MyCar's code to Automobility, some of which were fixed only days before his DefCon talk. WIRED reached out to Automobility, which did not immediately respond. An advisory on the CERT website in April confirmed the vulnerabilities and included a statement from Automobility: "All resources at our disposal were used to quickly remedy the situation, and we have fully resolved the issue," the company wrote. "During the period of vulnerability, no incidents or ongoing issues with privacy or compromised functionality were reported to us or detected by our systems."
Jmaxxz argues that the danger of these bugs went beyond mere theft or remote-alarm pranks. Starting a car remotely without the owner's knowledge can lead to dangerous carbon monoxide buildup, he points out: "If you start a car in an enclosed structure, you can end up dead."
Beyond that, Jmaxxz says he discovered from the MyCar database that the system had stored far more information about his girlfriend's car than he expected. In just 13 days, it had collected 2,000 locations from the car. "That offends me more than anything else," he notes. "That's not what I signed up for."
Even now that Automobility has fixed the bugs Jmaxxz reported, he says the episode still represents a worst-case example of Internet of Things companies failing to follow even the most basic security practices. "The problem is that these bugs were shipped in the first place, and I think they should have been caught by any kind of security testing."
Needless to say, Jmaxxz removed the MyCar device from his girlfriend's car earlier this year. He eventually built his own DIY replacement, whose code he has committed to making available on Github. The system, he says, does just as good a job as MyCar at warming up a car remotely, and it makes a better Christmas gift than exposing someone's vehicle to an internet full of car thieves.
Announced in March, the initiative aims to develop an open source voting platform built on secure hardware. Galois, an Oregon-based verifiable-systems firm, is designing the voting system. And Darpa wants you to know: its ultimate goal goes well beyond securing the vote. The agency hopes to use voting machines as a model system for developing a secure hardware platform, meaning the group is designing all the chips that go into a computer from the ground up rather than using proprietary components from companies like Intel or AMD.
"The goal of the program is to develop these tools to provide security against hardware vulnerabilities," says Linton Salmon, the program manager running the project at Darpa. "Our goal is to protect against remote attacks."
The other voting machines in the Village are complete, deployed products that attendees can take apart and analyze. But Darpa's machines are prototypes, currently running on virtualized versions of the hardware platforms they will eventually use. The secure-voting company VotingWorks currently provides a basic user interface.
"We want people to find things."
Dan Zimmerman, Galois
To vote using the system, you go to a touchscreen, make your choices (Which is the best Star Wars film? Are hot dogs sandwiches?), confirm your selections, and send them to print. Your selections print out with a QR code in the top-right corner of the page. Next, you put your printed ballot into a secure ballot box, currently an improvised rig built from a binder and some printer components. The box scans the document as you insert it and uses the QR code to run a cryptographic validity check. If the paper doesn't pass the test, whether because it's fraudulent or from a different election, the scanner rejects it and doesn't record the vote.
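The article doesn't specify the prototype's cryptography, so the following is only a toy sketch of what a QR-based validity check might look like. It uses an HMAC tag over the ballot payload plus an election identifier, where a real ballot system would use public-key digital signatures; every name and value here is hypothetical:

```python
import hashlib
import hmac

# Hypothetical election key held by the ballot box; a real system would use
# public-key signatures rather than a shared secret.
ELECTION_ID = "defcon-2019-demo"
KEY = b"not-a-real-key"

def encode_ballot(selections: str) -> str:
    # What would be packed into the ballot's QR code: payload plus a tag.
    payload = f"{ELECTION_ID}|{selections}"
    tag = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def scan_ballot(qr: str) -> bool:
    # The ballot box's validity check: reject forgeries and wrong elections.
    payload, _, tag = qr.rpartition("|")
    election, _, _ = payload.partition("|")
    expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return election == ELECTION_ID and hmac.compare_digest(tag, expected)

ballot = encode_ballot("best-film=Empire;hotdog-sandwich=no")
print(scan_ballot(ballot))                       # True: genuine ballot
print(scan_ballot(ballot.replace("no", "yes")))  # False: tampered payload
```

Any change to the printed selections invalidates the tag, which is the property that lets the scanner reject fraudulent ballots or ballots from a different election.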
Right now, all the components a voter would interact with are bare-bones prototypes that don't offer much to hack. At the 2020 Village, Darpa plans to bring a more complete system for attendees to vet. But hackers can still probe the secure hardware infrastructure and try to find gaps in its layers of protection against hardware-based attacks, from complicated exploits like speculative-execution attacks and Rowhammer to more common flaws like buffer overflows.
Attendees who sat down to vet the system on Friday told WIRED that it looked promising. And creating an open source secure hardware platform that anyone can incorporate into their products could have a major impact on the security of the Internet of Things, well beyond voting machines.
"This is where people can go at it," says Dan Zimmerman, principal investigator at Galois. "I don't think anyone has found any bugs or problems yet, but we want people to find things. We're going to make a little board just to let people test the secure hardware at home and in their classrooms, and we'll publish it."
There is also already a code repository on securehardware.org that hackers can probe from afar. And the group has even seeded example vulnerabilities into the code, so researchers can watch how the hardware platform works to minimize the threat they pose and hunt for flaws in those defenses.
"There's a terrible software vulnerability in there," says Dan Wallach, a security researcher at Rice University in Houston, Texas. "I know because I wrote it. It's a web server that anyone can connect to and read/write arbitrary memory. That's so bad. But the idea is that even with that, an attacker still shouldn't be able to get at things like cryptographic keys or anything. All they should be able to do is crash the system."
Darpa and Galois hope DefCon attendees will turn up both bugs and suggested defenses over the weekend, and that the wider community will weigh in too. The system will also be used at several universities over the next two years for vetting by assorted academics.
The Voting Village has always been about finding flaws in the hope of making voting machines safer. But the Darpa prototype may be the first time those discoveries are actually welcome.
All photos by Roger Kisby/Redux Pictures.
At the DefCon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The computer fits into the center console of a Tesla Model S or Model 3 and plugs into the dashboard USB port, turning the car's built-in cameras, the same ones that provide a 360-degree view for Tesla's Autopilot and Sentry features, into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image-recognition software to automatically put an alert on the Tesla's display, and on the user's phone, if it repeatedly sees the same license plate. And when the car is parked, it can track nearby faces to see which ones appear again and again. Kain says the intent is to warn the user if someone might be preparing to steal the car, tamper with it, or break into the driver's nearby home.
Despite the obvious privacy concerns, Kain presents his invention primarily as a useful tool for Tesla owners with above-average levels of paranoia. "It turns your Tesla into an AI-powered surveillance station," Kain says. "It's meant to be another set of eyes, to help out and tell you it's seen a license plate following you over multiple days, or even over multiple legs of the same trip."
Kain, a consultant for the security firm Tevora, hasn't overlooked the creep factor of his creation. As he sees it, Surveillance Detection Scout demonstrates the kind of monitoring that the data already collected by smart cars can enable. If large numbers of Surveillance Detection Scout users were to pool their license plate recognition data, a feature Kain deliberately left out of the software, the system could become a crowdsourced version of the same powerful surveillance provided by commercial license plate reader systems, whose use by police has been banned in some states. "I could see everyone across the United States, thousands of cars, on this Scout network," Kain says. "So yes, I think that's a real ethical issue."
The Surveillance Detection Scout prototype, whose software Kain has made available on Github, works by capturing and analyzing video from three of the Tesla's cameras, two on its side mirrors and one facing forward, on a $700 Nvidia Jetson Xavier mini-computer. It uses an open source neural-network framework called Darknet as its machine-learning engine, along with ALPR Unconstrained for license plate recognition and Facenet for face tracking, both freely available on Github. The system also uses Google's Open Images dataset as training data.
"I'm not doing any advanced AI," Kain says. "I'm just applying what's already out there, what's commercially available." The software even identifies the make and model of the cars it sees, based on license plate lookups via the FindByPlate.com service. (Kain says linking license plates to real names is much harder, and he doesn't intend to include that data in his tool.)
Kain says he got the idea for his tracking mechanism after attending a talk on detecting surveillance at DefCon last year. Ever since buying his Tesla Model 3, he had been thinking about the gigabytes of video it collected and deleted, overwriting its video logs every hour. "I had a bit of FOMO, thinking about all this video going away if I don't do something with it," Kain says.
After discovering a tool on Github called Tesla USB, which lets Tesla owners store their video indefinitely on an external drive, Kain hit on the idea of combining that storage capability with image recognition to give his car features similar to the Nest camera at his home, which includes so-called "familiar face detection." Beyond tracking license plates, the face-detection element of his tool also works as what he describes as an upgrade to Tesla's existing Sentry security feature, which starts recording when someone touches the car and sounds an alarm if they try to break in.
By assembling that patchwork of public code, Kain's 4-inch cube can recognize license plate numbers and faces in the car's video feed and alert the owner if it detects repeated plates or faces in the data. It uses the If This Then That integration tool to send its alerts. By default, the system notifies the driver if it sees the same trailing car once a minute for five minutes, though Kain says the settings can be adjusted to the driver's preference. Notifications arrive about a minute late, Kain says, due to the time it takes Tesla's cameras to write out a video file. And for now, users need to set up their own web server to make it all work, though Kain says he will eventually offer easier connections hosted on his own server.
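Kain's actual implementation is on Github, but the default alert rule described above, the same plate seen once a minute for five minutes, amounts to a simple sliding-window count. A minimal sketch with illustrative names and thresholds, not code from the Scout itself:

```python
from collections import defaultdict, deque

# Illustrative thresholds matching the described default: the same plate
# seen once a minute for five minutes triggers an alert.
WINDOW_SECONDS = 5 * 60
MIN_SIGHTINGS = 5

class PlateTracker:
    def __init__(self):
        self.sightings = defaultdict(deque)  # plate -> timestamps (seconds)

    def observe(self, plate: str, t: float) -> bool:
        """Record a sighting; return True if the plate warrants an alert."""
        times = self.sightings[plate]
        times.append(t)
        # Drop sightings that have fallen out of the five-minute window.
        while times and t - times[0] > WINDOW_SECONDS:
            times.popleft()
        return len(times) >= MIN_SIGHTINGS

tracker = PlateTracker()
alerts = [tracker.observe("7ABC123", t) for t in (0, 60, 120, 180, 240)]
print(alerts)  # [False, False, False, False, True]
```

In the real system this check would run on the plate strings emitted by the license plate recognizer, with the positive result wired to an If This Then That notification.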
Kain offers some scenarios in which his system could do good: confidential sources meeting with a journalist, or anyone else with reason to believe they're being tracked or targeted by snoops. "If it helps keep someone safe, that's great," Kain says. "If it lets me know someone's sneaking around my car, that's great too."
Surveillance Detection Scout walks a legally murky line, though, says Joseph Lorenzo Hall, chief technologist at the Center for Democracy and Technology. State laws against automated license plate readers, even for private use, would likely make it illegal in Arkansas, Georgia, Maine, and New Hampshire. Its facial-recognition features make it illegal in Illinois.
Laws aside, Hall argues that Kain's invention could have unintended consequences with serious privacy implications. False positives could lead to confrontations, he says, if a driver mistakenly concludes they're being followed by someone who simply happens to be on the same route. "I worry about the subjective judgment a human could draw from this technological system," Hall says. "It could get people to shoot at each other when there's really nothing to worry about."
Hall is even more worried about the broader form of AI-enabled surveillance the system represents, especially if users were to modify Kain's code to share their data with one another. "You're going to have very rich records of people's movements," Hall says. "It's basically a surveillance camera on wheels that doesn't warn anyone, mapping routes across the cities where people live."
Even more disturbing, Hall says, would be law enforcement gaining the ability to access that data, whether through some incentive for drivers, much as local police in some cities have subsidized Amazon's Ring home surveillance cameras as a way to access their footage, or by compelling users to share it with subpoenas.
Kain says he's aware of those concerns, and that he built his system in part to demonstrate the possibilities of smart-car video surveillance before some shady commercial startup does it first, perhaps with a system that aggregates data across users rather than keeping it separate. A new era of ubiquitous video collection by camera-equipped cars is coming, he argues, and much of that data could end up in centralized repositories.
But he also admits that someone could easily modify his code to enable data sharing between users, taking a big step into the very future he's warning about. "It would be trivial for someone with any development experience to build that in," Kain says. "Is it a slippery slope? Potentially."