Kaspersky Labs does not enjoy the best reputation. The company has been linked to Russian intelligence services, the Department of Homeland Security has banned its software from government computers, and Best Buy will not sell its products. In 2017, it was reported that the Israelis had observed Russian intelligence agents using Kaspersky software to spy on the United States. Now, an examination of the company's antivirus software has revealed a major data leak dating back to 2015.
According to the German publication c't, Kaspersky antivirus injects a universally unique identifier (UUID) into the source code of every website you visit. This UUID value is specific to the computer and the software installation. The value injected into each website never changes, even if you use a different browser or access the internet in the browser's Incognito mode.
c't discovered the injection when one of its antivirus software testers noticed the same line of source code on several different websites. Installing the application on different systems produced different UUID values. The assigned UUIDs did not change over time, indicating they were static. And because these values are injected into the source code of every website you visit, the sites you visit can follow you. As c't writes:
Other scripts running in the context of the website domain can access the HTML source at any time, which means they can read the Kaspersky ID.
In other words, any website can read the user's Kaspersky ID and use it for tracking. If the same universally unique identifier comes back, or appears on another website run by the same operator, they can see that the same computer is being used.
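To illustrate the mechanics: c't reported that the identifier was embedded in markup injected into each page, so anything that can read the page source can harvest it. The sketch below is a minimal, hypothetical reconstruction in Python; the URL, domain, and sample UUID are invented for illustration, not Kaspersky's real injected code.

```python
import re
from typing import Optional

# Hypothetical page source with an injected per-machine identifier; the
# script URL and UUID here are illustrative, not Kaspersky's actual markup.
html = """<html><head>
<script src="https://gc.kis.example.com/9344FDA7-AFDF-4BA0-A915-4D7EECBE886A/main.js"></script>
</head><body>Hello</body></html>"""

# Standard 8-4-4-4-12 hex UUID pattern.
UUID_RE = re.compile(
    r"[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-"
    r"[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}"
)

def extract_tracking_id(page_source: str) -> Optional[str]:
    """Anything that can read the page's HTML source can pull out the ID."""
    match = UUID_RE.search(page_source)
    return match.group(0) if match else None

print(extract_tracking_id(html))  # 9344FDA7-AFDF-4BA0-A915-4D7EECBE886A
```

Because the same UUID appears on every page load from that machine, two sites (or one site across two visits) comparing extracted IDs can conclude they saw the same computer, Incognito mode or not.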
After developing a proof of concept and verifying that users with Kaspersky antivirus installed really could be tracked in private browsing mode, c't contacted Kaspersky. The flaw now has an official name: CVE-2019-8286. Kaspersky argued that this was a fairly minor problem that would require advanced techniques to exploit. Kaspersky has since updated its software so that it only injects information about which version of a Kaspersky product you use into each visited website, not an identifier unique to your personal machine. c't is not satisfied with this fix and considers it still a security risk.
A bug that identifies a computer to any website that knows how to listen for this information is potentially very useful. Even if Kaspersky does not keep an external database associating UUIDs with specific installations, broadcasting the UUID even in private browsing mode means that a web server can register a visit from a specific computer. If that machine is associated with a specific person, a link has been established.
Is it possible that Kaspersky simply made a dreadful security decision when implementing its antivirus software? Absolutely. The fact that a bug exists does not automatically mean that someone bad was using it. But these kinds of coincidences are interesting, to say the least. Broadcasting a UUID as part of running antivirus software is not the kind of attack most of us would expect. This is the kind of fingerprinting method an intelligence agency could be very interested in for tracking down who was accessing very specific websites, but not the kind of thing a typical malware company would care about. Of course, one could also argue that's exactly why the bug went unnoticed in the first place. In this reading, Kaspersky's flaw is not deliberately harmful; it is an accident that reflects the company's focus on stopping ordinary malware rather than state actors.
I do not know which reading is correct. But if you are concerned about this kind of problem, I would suggest at least investigating an antivirus vendor with fewer allegations of foreign intelligence cooperation.
Nest is the leading player in smart home and connected security. Its status as a Google subsidiary has subjected it to special scrutiny. Google talked up its commitment to privacy in the home at I/O 2019 when it unveiled the new Google Nest branding. The company has now delivered on one of its I/O promises: it is removing the option to disable the cameras' status lights. Nest customers have responded to the change with almost universal anger.
One of the principles in Google's privacy commitment was that the company would ensure there is a visual indicator whenever your Nest camera is on and streaming video to Google. According to the email sent to users, Google is doing this by keeping the status light on most Nest cameras permanently enabled. That way, you will always know when one of these devices is actively streaming. So that's good, right? Not so fast – it turns out that many people liked being able to turn off those lights.
Nest says the Nest Cam cameras and the Nest Hello video doorbell will soon get a silent OTA update that removes the option to disable the status light. The small green LED will be lit continuously when the camera is active and will blink when someone is watching the live stream. Instead of turning the light off, Nest will only support dimming it.
This ensures that you and everyone around you are aware of what the camera is doing. However, this is not a feature everyone wants. Many Nest camera owners prefer their devices to attract as little attention as possible. For example, a blinking doorbell camera could tip off an unwanted visitor that you are watching them and not answering the door. Pretty awkward. It could also make the cameras easier to spot for an observant thief, who could then avoid or damage them.
The status light is indeed a valuable tool if you are worried that someone will hack your cameras, or if you do not trust Google. Although placing a Google camera in your home seems like a bad idea if you are that person. For everyone else, the status light is at best unimportant and at worst a nuisance. Imposing it on everyone may miss the point. The outrage on the Google community forums is widespread, but there is no indication that the company will reconsider its decision.
The June oven is part of a new wave of kitchen gadgets promising to combine modern Silicon Valley technology with cutting-edge design. On paper, these products promise a new wave of simple, effective interactions. In reality, they often come with fine print. In June's case, the fine print may include a tendency to switch on and preheat overnight.
Several June owners have complained about ovens turning themselves on while they slept, according to The Verge. One owner with a Nest camera pointed at the oven captured the moment the unit switched on at 4 AM and heated up to 400 degrees. Two other owners published accounts of similar incidents; one person had actually left already-cooked food in the oven and woke up to find it burned.
According to June CEO Matt Van Horn, these problems can all be blamed directly on user error. "We've seen a few cases where customers have accidentally activated their oven's preheat via a device, imagine your cell phone," he told The Verge.
So, imagine if I were in the June app clicking through recipes and I accidentally tapped something that preheated my oven – we've seen a few cases of that. It's a really wonderful feature to be able to preheat your oven remotely, and it's a completely new world that's very exciting, and things happen … People have always joked about butt-dialing, as in "I didn't mean to call you," so these are just the kinds of software issues we have to be aware of as we build to satisfy our customers.
June has a problem here, whether the company wants to acknowledge it or not. Obviously, it matters if the company's oven has a flaw that activates and preheats it without anyone commanding it. But it matters just as much if customers are triggering it inadvertently, without realizing they have done so. Unattended cooking accounts for a significant percentage of all house fires.
Until now, an oven was a device you turned on while standing in front of it. While it has always been a good idea to keep flammable objects away from an oven, every one of us has, at one time or another, left something flammable near a stove. You've probably even done it deliberately, especially if you've ever been in a sudden rush or short on free space for food preparation. The rule for managing the risk of an oven fire has been: check whether the oven is on before placing flammable objects nearby.
An oven that can be switched on remotely presents a different risk than an oven that cannot. June can take (and perhaps already has taken) many steps to reduce the potential threat, including building the oven well enough that it is not too prone to external hot spots. At the same time, however, it's an oven: it will, by definition, have hot spots. A human standing in front of an oven will automatically clear away debris that may have accumulated around it. The oven does not "know" it must perform this function. And people can die when computers are mistaken about what they know. Autonomous vehicles drive into stationary objects. Planes fly into the ground, resisting all their pilots' efforts to pull them back into the sky.
An important distinction between the various autonomous vehicle problems and the 737 Max's MCAS, of course, is that a misbehaving June oven can still be inspected and shut off by a human standing in front of it. But that distinction is less important than it may seem. What Matt Van Horn calls "user error," I would call something else: poor application design. And since June develops both its application and its oven, responsibility for the issue lands in the same place.
If the problem is that end users are mistakenly triggering the "Preheat" function in the app, the app must be redesigned to make it much harder to preheat the oven without being aware of it. It should not be possible to turn the oven on inadvertently while browsing the app's cookbook. June will distribute an app update in September that will let consumers turn off the remote preheat feature if they wish. Next year, the June oven will be updated to recognize the presence of food in the device and to shut off after a set period of time if the end user does not signal that the oven should remain on.
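One way to make accidental remote preheats harder can be sketched in a few lines of Python: treat a tap in the app as a mere request, and keep the oven cold unless a second, explicit confirmation arrives within a short window. This is a hypothetical design sketch, not June's actual implementation; the class name and the 30-second window are invented for illustration.

```python
import time

class RemotePreheat:
    """Toy two-step remote preheat: request, then confirm within a window."""

    CONFIRM_WINDOW = 30.0  # seconds; an invented value for illustration

    def __init__(self) -> None:
        self.pending_since = None  # when a preheat request was made, if any
        self.heating = False

    def request(self, now: float = None) -> None:
        """An app tap only records a request; nothing heats up yet."""
        self.pending_since = time.monotonic() if now is None else now

    def confirm(self, now: float = None) -> bool:
        """Heat only if an explicit confirmation follows a recent request."""
        now = time.monotonic() if now is None else now
        if (self.pending_since is not None
                and now - self.pending_since <= self.CONFIRM_WINDOW):
            self.heating = True
        self.pending_since = None  # stale or absent requests are discarded
        return self.heating

oven = RemotePreheat()
oven.request(now=0.0)
print(oven.confirm(now=10.0))  # True: confirmed within the window
```

With a flow like this, an accidental tap while scrolling recipes produces a pending request that simply expires, and only a deliberate second action actually starts the heat.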
The point of comparing the June oven's situation with that of autonomous cars or the 737 Max is not to claim they are equivalent. It is to emphasize that building new capabilities into products requires manufacturers to think about how to implement them safely. A product that changes common assumptions about how a device operates must take particular care to guard against any risk of harm the change creates. Adding a little intelligence to a washer or dryer does not much increase the risk of damage, but anything that generates enough heat to start a fire must be treated with care. June's growing pains are a good example of how businesses and consumers will have to adjust their perceptions of products that change the "defaults" people are used to.
The June does not seem to be a highly rated product in the first place: it's a $600 toaster oven, and The Wirecutter found its cooking inferior to the Cuisinart TOB-260N1's. As additional bonuses, the Cuisinart has no Wi-Fi access, has no built-in camera, and does not appear to push a recipe app costing around $50 a year.
Update: June PR reached out with the following statement: "The safety of our product is June's number-one priority, and the company took a number of precautions when building the June oven. We worked directly and quickly with the few June owners who experienced accidental preheats. These cases are certainly troubling, and we have a team of engineers working to make sure they do not happen in the future. We have had ovens deployed in the market for four years and have a large community of enthusiasts. The best thing we can do is listen to customers in real time, as June does, to solve problems."
The company also said it has already made various changes to its iOS app. The default tab is now the "Cookbook" tab instead of the "Oven" tab, and various shortcuts related to preheating have been removed. In September, the app will be updated to let users disable the preheat function entirely if they wish. The oven will also stop preheating after 30 minutes if it detects no food, as previously reported.
In theory, organizations such as the FTC exist to protect American citizens. In practice, all too often, these organizations are far more accountable to the companies they are supposed to regulate than to the citizens whose rights they protect. Last week, the FTC announced a settlement with Equifax under which the people whose data was stolen, which is to say nearly everyone in the United States, were entitled to $125 in compensation. Given the scale and sensitivity of the data Equifax allowed to be stolen, one might think this kind of minimal compensation would be the least the company could offer, since it leaked Social Security numbers, addresses, phone numbers, birth dates, and names.
Now, however, the FTC has changed its tune. Too many people signed up for the $125 settlement. Under the proposed settlement structure, only $31 million was set aside to provide these payments. That's $125 for 248,000 people. The Equifax hack affected 147 million people. In other words, the FTC assumed only about 0.17 percent of Americans would ask for their $125. Our government is now urging its own citizens to accept virtually worthless free credit monitoring (which costs Equifax essentially nothing to provide) rather than ask for a small cash settlement in exchange for one of the most glaring database thefts of all time.
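The arithmetic behind those figures is easy to check; a quick sketch (the dollar amounts are the settlement figures reported above):

```python
fund = 31_000_000        # dollars reserved for the alternative cash payments
payment = 125            # advertised per-person payment
affected = 147_000_000   # people whose data was exposed

full_payouts = fund // payment   # how many people can receive the full $125
share = full_payouts / affected  # fraction of victims that actually covers

print(full_payouts)              # 248000
print(f"{share:.2%}")            # 0.17%
```

If even a few percent of the 147 million victims file claims, the per-person payout shrinks proportionally, which is exactly what the FTC is now warning about.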
The FTC's new blog post is titled "Equifax data breach: Pick free credit monitoring." Robert Schoshinski, assistant director of the agency's Division of Privacy and Identity Protection, writes:
The free credit monitoring is worth a lot more – the market value would be hundreds of dollars a year. And this monitoring service is probably stronger and more helpful than any you may have already, because it monitors your credit report at all three nationwide credit reporting agencies, and it comes with up to $1 million in identity theft insurance and individualized identity restoration services.
The FTC blog post does not note that the only reason the pool of refund money is so small is that the agreement between the FTC and Equifax allocates just $31 million to the relevant fund. Although the settlement with Equifax provides up to $425 million in aid to victims of the breach, the overwhelming majority of that money is earmarked for other purposes. That is handled in a separate press release. The government also does not note that, under the terms of the agreement, it will be extremely difficult for anyone to prove that a given identity theft is related to the theft of the Equifax database, because that database has never been spotted for sale on any hacking site. This implies it was stolen by a state actor rather than a conventional hacker.
Hurray. R0ckH4rd69Lvr doesn't have your data; Russia or China probably does. That's much better.
Most financial websites do not subscribe to the FTC's assertion that free credit monitoring is worth "a lot more." To quote LeVar Burton: "You don't have to take my word for it." Here is a sampling of quotes and links on the subject:
NerdWallet: "NerdWallet recommends avoiding such offers from credit bureaus."
US News & World Report: "This is useful if you are a victim of identity theft, but its value is rather narrow."
CNBC: "Credit monitoring services may not be worth the cost"
CNN Money: "Most of what these products provide, you can do yourself for free."
LendingTree: "Paid credit monitoring services do not necessarily control your reports better than a free service."
Maryland Attorney General Brian Frosh summed up the spirit of the problem much better in his comments on the settlement last week. Speaking of the some 147 million victims of the Equifax hack, he noted: "Most of them – most of us – did not sign up … We did not choose Equifax," Frosh said. "It chose us. It collected our personal information, compiled it, analyzed it, and sold the product and some of the raw data to other people. Their carelessness with our personal data may cause harm to millions of Americans."
Slate's argument, advanced last week, was that customers had a moral obligation to claim this money, to send a message to Equifax and other companies about the critical importance of data security, and to hold them responsible for failing at it. No one chooses to do business with Equifax, TransUnion, or Experian. These institutions build financial records and credit reports on Americans without their consent in order to sell aggregate information about their credit history. There is no way to voluntarily withdraw from the system, and credit checks are so integral to so many life events that opting out would be impractical for all but the wealthiest Americans.
Facebook was fined $5 billion over Cambridge Analytica, but Equifax was fined just $671 million. According to the FTC, this was a deliberate decision to protect Equifax. "We want to make sure we don't bankrupt or put the company out of business," Maneesha Mithal, a data and privacy expert at the FTC, told Ars Technica. "We want to make sure they have the funds and resources necessary to protect consumers in the future."
Yes. Because nothing says "protecting consumers is important" like a slap on the wrist after a company loses the data of 147 million Americans. Nothing fosters trust like the FTC publishing a shameless blog post talking up the value of worthless monitoring services that the fined company can provide at no cost to itself.
Details on how to object to the settlement, if you wish, are available in the FAQ on the EquifaxBreachSettlement page. You cannot ask the court to change the terms, but you can argue for or against approval. A mere $125 payment was already bad enough, but the government's behavior in this case, to say nothing of the terms of the settlement itself, is insulting.
Slack knows all your secrets. Your DM chatter, your scheming with the boss, the company's many unspeakable deliberations: they all fill the San Francisco-based company's servers, waiting to be perused by a curious CEO, a skilled hacker, or the entire world.
The communication platform that so many rely on for work and for keeping in touch with friends is, like most things online, a potential privacy disaster waiting to happen. And even if you have no choice about whether to use the tool, you do have the option to lock down its privacy settings to mitigate the fallout before it's too late.
So let's lock it down.
If you use Slack for work, chances are you're on a paid plan. This differs from the free version – which your D&D team might use to coordinate campaigns and meetups – in several important ways.
The first is that with the paid version, your boss might be able to read your direct messages. Determining whether this setting is enabled is the first step in keeping your DMs secret. Fortunately, there is a way to find out.
When you are logged in to Slack in a web browser, go to slack.com/account/team and click "Save and Export." Scroll to "What data can my administrators access?" and you will have your answer.
If the page says that only public data can be exported, your DMs are safe from your boss. If, however, it states that "Workspace owners can also export messages and files from private channels and direct messages," then your corporate overlords have the ability to extract your direct messages.
OK, so now you know your boss has the ability to read your direct messages. That's bad, but all is not lost. There are still plenty of ways to protect yourself, or at least reduce the damage that will inevitably result.
To begin, you need to change the retention settings on your direct messages. Slack gives workspace owners (i.e., the person who manages your company's Slack account) the ability to determine how long messages are kept, both in public channels and in direct messages. It could be 90 days, for example, or forever.
You can, and should, adjust this setting for your own direct messages. Think about it this way: when your boss pulls an archive of your DMs, would you rather they receive years of direct messages or only the last 24 hours? Yeah, exactly.
In a direct message conversation, click the gear icon in the upper-right corner and select "Edit message retention." Then select "Use custom retention settings for this conversation," choose one day (the shortest period possible), and hit Save.
Your messages will now be automatically deleted after 24 hours. Notably, this does not necessarily mean they are gone from Slack's servers once they are a day old (they probably are not), but the messages should no longer be within the aforementioned workspace owner's reach once a day has elapsed.
Unfortunately, you have to do this for every direct message conversation, but it's a quick change that is well worth it.
Slack does not give you the ability to individually encrypt your messages.
There is, however, a way around this problem in the form of a free browser extension called Shhlack. The extension, available for Chrome, allows you and your colleagues to encrypt your messages. It is fairly simple to use and means your private messages will not be visible in plain text when your boss, or hackers, take a look.
Importantly, though, as its GitHub page warns, "this is an experimental project and a work in progress" that you should take "with a grain of salt." In other words, if something serious, such as your job or your business secrets, depends on the confidentiality of your messages, you will need to take stricter privacy measures.
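The idea behind an extension like Shhlack can be sketched in a few lines: messages are encrypted with a key shared out-of-band before they ever reach Slack's servers, so the server (and anyone exporting from it) stores only ciphertext. The Python sketch below is a deliberately toy construction for illustration only; it is not Shhlack's actual code, and it is not real, audited cryptography.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key + nonce + counter.
    Toy construction for illustration; use a vetted library in practice."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: str) -> bytes:
    """What would be sent to the server: nonce + XOR-ed message bytes."""
    nonce = secrets.token_bytes(16)
    data = plaintext.encode()
    stream = _keystream(key, nonce, len(data))
    return nonce + bytes(a ^ b for a, b in zip(data, stream))

def decrypt(key: bytes, blob: bytes) -> str:
    """Only someone holding the shared key can recover the text."""
    nonce, data = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, stream)).decode()

key = b"shared-out-of-band"
blob = encrypt(key, "the reorg happens Friday")
print(decrypt(key, blob))  # the reorg happens Friday
```

A workspace export would contain only the `blob` bytes; without the key, the boss sees noise. The hard problems, distributing the key and trusting the client-side code, are exactly why the GitHub caveat above matters.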
This one is less a setting than a life tip, but it could save your skin, so listen up: any message that could get you in trouble if it were made public should not be sent via Slack at all.
Instead, try creating a private Slack channel (with a short retention setting!), getting the phone numbers of the people you want to chat with, and then moving the conversation to the encrypted messaging app Signal. On the free app you can make encrypted phone calls, create very large group chats, send files, hold video chats, and set messages to delete automatically after a predetermined time.
There is even a desktop app if you don't like typing with your thumbs.
Editing Slack messages after the fact may seem like a surefire way to remove potentially problematic content. But guess what: some Slack accounts track edits and keep records of messages as they were before the change.
Knowing whether this setting is enabled will keep you from making the mistake of thinking you are in the clear when, in reality, the only thing you have accomplished is making it obvious that you are trying to cover your tracks.
Once logged in to your Slack account, go to https://my.slack.com/account/workspace-settings and click on "Retention and Exports." There you will find the answers you need.
Keeping your account private means keeping it secure. Protecting your account with two-factor authentication is a great way to keep out hackers and snoops.
To set it up, once logged in, visit my.slack.com/account/settings. Then click "Two-factor authentication" and follow the instructions. You will need an authenticator app downloaded to your smartphone for it to work, but there are tons of safe options that work with Slack.
Believe me: you really want this security feature to be enabled.
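For the curious, the six-digit codes those authenticator apps produce are not magic: they come from the TOTP algorithm (RFC 6238), an HMAC of the current 30-second time window keyed with a secret you share with the service once at setup. A minimal Python sketch:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 TOTP code (SHA-1 flavor) for a given time."""
    key = base64.b32decode(secret_b32)
    counter = unix_time // step  # which 30-second window we are in
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32); at t=59 the
# reference vector gives 94287082, whose last six digits are 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59))  # 287082
```

Because the code depends only on the shared secret and the clock, your phone and the server agree on it with no network round trip, and an attacker who has only your password still can't log in.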
Suppose you want to quit Slack, or you leave a company and no longer use that Slack account. You might assume that deleting your account scrubs whatever personal data remains, but that is definitely not the case.
Instead, you must actually ask the workspace's "primary owner" to request that your profile information be deleted.
"When members leave a workspace or organization, they may have the right to request that their profile information be deleted by the primary owner," explains the company. "As a data controller, the primary owner is responsible for determining whether the profile information should be deleted."
The primary owner must then email Slack with a specific deletion request, noting "the member's email address and the URL of your workspace."
Once you have taken this step, you are finally free to enjoy your privacy.
From your credit card purchases to your medical records to your online browsing history, companies share and sell anonymized datasets containing a record of your every move. The information is supposed to be stripped of any specific detail, such as your name, that would link it directly to you. It turns out, however, that truly anonymizing your personal data is much harder than you might think.
So finds a study published today in the journal Nature Communications. The researchers determined that, according to their model, "99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes."
Although 15 demographic attributes may sound like a lot of data about one person, the study puts that figure in perspective.
"Modern data sets contain a large number of points per individual," write the authors. "For example, the data broker Experian has sold (company specializing in science and data analysis) to Alteryx an anonymized dataset containing 248 attributes per household for 120 million Americans."
That anonymized datasets can be de-anonymized is not news in itself. In 2018, researchers at the DEF CON hacking conference demonstrated how they could legally and freely acquire the supposedly anonymous browsing history of 3 million Germans, then quickly de-anonymize portions of it. The researchers were able to discover, for example, the pornography habits of a specific German judge.
This new study shows how little data is needed to identify specific people in otherwise anonymous datasets. "Few attributes are often sufficient to re-identify with high confidence individuals in even heavily incomplete datasets," the authors note.
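The mechanism is easy to demonstrate. In the toy Python sketch below (with invented records, not the study's data), each demographic attribute an adversary learns shrinks the set of "anonymous" records the target could be, until exactly one remains:

```python
# Invented "anonymized" records: no names, only demographic attributes.
records = [
    {"zip": "10001", "birth_year": 1985, "gender": "F", "vehicle": "suv"},
    {"zip": "10001", "birth_year": 1985, "gender": "F", "vehicle": "sedan"},
    {"zip": "10001", "birth_year": 1990, "gender": "M", "vehicle": "sedan"},
    {"zip": "94105", "birth_year": 1985, "gender": "F", "vehicle": "sedan"},
]

def candidates(records, known):
    """All records consistent with what we know about the target."""
    return [r for r in records if all(r[k] == v for k, v in known.items())]

# Each additional attribute narrows the anonymity set.
print(len(candidates(records, {"zip": "10001"})))                      # 3
print(len(candidates(records, {"zip": "10001", "birth_year": 1985})))  # 2
print(len(candidates(records, {"zip": "10001", "birth_year": 1985,
                               "vehicle": "suv"})))                    # 1: unique
```

With 15 attributes instead of three, and real-world attribute distributions, the study's model finds that the surviving candidate set is almost always exactly one person, which is what "re-identified with high confidence" means in practice.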
Spoiler: the results are as troubling as you thought. Keep that in mind the next time a company's fine print warns that it "may share your anonymized data with third parties."