Artificial intelligence shapes much of the way we consume information today. On YouTube, users spend 700,000,000 hours every day watching videos recommended by the algorithm. Likewise, Facebook's news feed recommendation engine drives 950,000,000 hours of browsing per day.
In February, a YouTube user named Matt Watson found that the site's recommendation algorithm was making it easier for pedophiles to connect and share child pornography in the comments sections of certain videos. The discovery was horrifying for many reasons. Not only was YouTube monetizing these videos, but its recommendation algorithm was actively pushing thousands of users toward suggestive videos of children.
When the news broke, Disney and Nestlé pulled their ads from the platform. YouTube removed thousands of videos and disabled comments on many more.
Unfortunately, this was not the first scandal to hit YouTube in recent years. The platform has promoted terrorist content, foreign state-sponsored propaganda, extreme hatred, softcore zoophilia, inappropriate content aimed at children, and innumerable conspiracy theories.
Having worked on recommendation engines, I could have predicted that the AI would deliberately promote the harmful videos behind each of these scandals. How? By looking at the engagement metrics.
Using recommendation algorithms, YouTube's AI is designed to increase the time people spend online. Those algorithms track and measure the previous viewing habits of each user, and of users like them, to find and recommend other videos they will engage with.
In the case of the pedophile scandal, YouTube's AI was actively recommending suggestive videos of children to users who were most likely to engage with them. The better the AI gets (that is, the more data it has), the more effective it becomes at recommending content targeted at a specific user.
Here is where it gets dangerous: as the AI improves, it can predict more accurately who is interested in this content; thus, it is also less likely to recommend such content to those who are not. At that point, problems with the algorithm become exponentially harder to detect, because the content is unlikely to be flagged or reported. In the case of the pedophile recommendation chain, YouTube should thank the user who found and exposed it. Without him, the cycle could have continued for years.
But this incident is only one example of a bigger problem.
How hyper-engaged users shape the AI
Earlier this year, researchers at Google's DeepMind examined the impact of recommender systems such as those used by YouTube and other platforms. They concluded that "feedback loops in recommendation systems can give rise to 'echo chambers' and 'filter bubbles,' which can narrow a user's content exposure and ultimately shift their worldview."
The model did not take into account how the recommendation system influences the kind of content that gets created. In the real world, the AI, content creators, and users all influence one another heavily. Because the AI aims to maximize engagement, hyper-engaged users are seen as "models to be replicated." AI algorithms will then favor the content of such users.
The feedback loop works as follows: (1) People who spend more time on the platforms have a greater impact on the recommendation systems. (2) The content they engage with gets more views and likes. (3) Content creators notice, and create more of it. (4) People spend even more time on that content. That's why it is important to know who a platform's hyper-engaged users are: they are the ones we can look at to predict which direction the AI is tilting the world.
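The four-step loop above can be sketched as a toy simulation. This is a deliberately crude model with invented numbers, not any platform's real ranking code; it only illustrates how a small preference among the heaviest users compounds into a dominant share of recommendations:

```python
# Toy simulation (illustrative only): engagement-weighted ranking amplifies
# whatever hyper-engaged users favor. All content types and numbers are invented.

def run_feedback_loop(rounds=5):
    scores = {"benign": 1.0, "divisive": 1.0}
    # In this toy world, hyper-engaged users slightly prefer divisive content.
    hyper_user_pref = {"benign": 0.4, "divisive": 0.6}
    shares = {}
    for _ in range(rounds):
        # (1)+(2) Heavy users' engagement feeds back into each item's score.
        for content, pref in hyper_user_pref.items():
            scores[content] *= 1 + pref
        total = sum(scores.values())
        # (3)+(4) The recommendation share shifts toward the favored content,
        # which creators then produce more of.
        shares = {c: s / total for c, s in scores.items()}
    return shares

shares = run_feedback_loop()
print(shares)  # "divisive" ends up with the larger recommendation share
```

A 60/40 preference becomes roughly a 66/34 recommendation split after only five rounds of compounding, which is the essence of the loop the DeepMind researchers describe.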
More generally, it is important to examine the incentive structure underlying any recommendation engine. Companies design their algorithms to keep users on their platforms as much and as often as possible, because that serves their commercial interest. It is sometimes in the user's interest to stay on a platform as long as possible, when listening to music, for example, but not always.
We know that misinformation, rumors, and salacious or divisive content drive significant engagement. Even when a user notices the deceptive nature of the content and flags it, that often happens only after they have engaged with it. By then it is too late; they have already given a positive signal to the algorithm. Now that the content has been favored in some way, it gets boosted, which encourages creators to upload more of it. Driven by AI algorithms incentivized to reinforce traits favorable to engagement, more of this content filters into the recommendation systems. And once the AI learns how to engage one person, it can replicate the same mechanism across thousands of users.
Even the best AI systems in the world, built by resource-rich companies like YouTube and Facebook, can actively promote upsetting, false, and useless content in pursuit of engagement. Users must understand the basics of how AI works and approach recommendation engines with caution. But this awareness should not fall on users alone.
In the past year, these companies have become increasingly proactive: both Facebook and YouTube have announced that they will start detecting and demoting harmful content.
But if we want to avoid a future mired in division and misinformation, much remains to be done. Users must come to understand which AI algorithms work for them and which work against them.
WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.
Full details of the settlement were not available Friday afternoon, and the FTC and Facebook declined to comment. The Wall Street Journal first reported the news. It is not clear how long the Department of Justice will take to review the terms. In the meantime, important questions remain unresolved, including whether the FTC chose to hold Facebook CEO Mark Zuckerberg personally accountable for the company's privacy failures, and what kind of external oversight Facebook must submit to.
The FTC opened its investigation into Facebook's data practices last March, a week after the news that Cambridge Analytica, a political consulting firm that had worked with the Trump campaign in 2016, had improperly obtained information on tens of millions of Facebook users. The data was purchased from an academic who used a personality-profiling app to collect information not just from consenting users but, thanks to Facebook's lax privacy policies at the time, from those users' friends, without their knowledge. Facebook did not cut off that access until 2015.
But in 2011, Facebook had promised the FTC that it would not share data with third parties without users' affirmative consent, as part of an agreement settling charges that the company had misled consumers about its privacy practices. The regulator appears to have found that Facebook violated that consent decree.
"We do not think a fine matters; we need a structural solution here."
Matt Stoller, Open Markets Institute
The reported fine far exceeds the FTC's previous record privacy penalty, a $22.5 million strike against Google in 2012 over its privacy practices in the Safari browser. But even $5 billion would be a drop in the bucket for Facebook, which generated $15 billion in revenue last quarter alone. When Facebook disclosed in its Q1 earnings report that it had set aside between $3 billion and $5 billion to cover settlement costs, its share price actually jumped.
Some of Facebook's biggest critics had already expressed doubts that any sum of money could punish a company of Facebook's size. "They can inflict a very big fine, and it's still only a parking ticket," Matt Stoller, a fellow at the anti-monopoly think tank Open Markets Institute, recently told WIRED. "We do not think a fine matters; we need a structural solution here."
In a letter to the FTC in early May, Senators Richard Blumenthal (D-Connecticut) and Josh Hawley (R-Missouri) said the FTC should "impose sweeping changes to stop the pattern of misuse and abuse of personal data on social networks."
"Personal responsibility must be recognized, from the top of the board of directors down to the product development teams," the letter reads. "If the FTC finds that a Facebook official knowingly violated the consent order or broke the law, it must name that person in any further action." Whether the FTC did so has remained one of the biggest outstanding questions around the settlement, made all the more intriguing by the recent disclosure of emails, again in The Wall Street Journal, that appear to indicate Zuckerberg was aware of the company's "dubious" privacy practices.
The Cambridge Analytica scandal sparked a growing awareness of data rights in the United States, and Facebook and other tech companies have repeatedly been called to answer for broken promises and breaches of users' data protection. The FTC's apparent decision comes amid growing demand for measures to rein in Big Tech. In Congress, lawmakers on both sides of the aisle have called for federal legislation to protect Americans' privacy rights, while a number of state legislatures have already passed, or are considering, privacy bills of their own.
Concerns about Big Tech are not limited to privacy. The FTC formed a task force earlier this year to investigate past and future mergers in the industry, and it will now, along with the Department of Justice, scrutinize big companies like Facebook and Google on antitrust grounds. Massachusetts Senator and Democratic presidential candidate Elizabeth Warren has proposed splitting up technology platforms that also do business on those platforms, and reviewing past mergers, including Facebook's acquisitions of Instagram and WhatsApp. A number of conservative politicians, including President Trump, have claimed that social media companies exhibit liberal bias and censorship.
New details have also emerged about how Facebook negotiated special data-sharing agreements with device manufacturers and large corporations, even after the company severely restricted data access in 2015. One of the companies with extended access was the Russian internet giant Mail.ru. Facebook also suffered a huge data breach, exposing some 30 million accounts.
And earlier this year, TechCrunch published a report detailing how Facebook used an app called Research to observe every move users made on their phones, in order to spy on the competition. Facebook had even exploited a loophole to distribute the app to iPhone users after Apple had pulled another, similar Facebook app from its App Store.
All of this has prompted a change of heart, or at least a change in public positioning, for the company. In March, Facebook CEO Mark Zuckerberg revealed for the first time, in a long blog post, his vision for a new kind of privacy-focused social network. That network would center on messaging, letting people send end-to-end encrypted messages across Facebook Messenger, Instagram, and WhatsApp, and would expand into new areas such as payments and commerce. Later that month, Zuckerberg wrote an op-ed in The Washington Post expressing support for regulation focused on "harmful content, election integrity, privacy, and data portability."
"I believe Facebook has a responsibility to help address these issues, and I look forward to discussing them with lawmakers around the world," Zuckerberg wrote. "But companies shouldn't have to solve these problems on their own; we should have a broader debate about what we want as a society and how regulation can help."
Facebook and other technology companies have reportedly spent years lobbying against regulations, including a California privacy bill and federal rules that would have required stricter controls on digital political ads.
The reported FTC fine, heavy as it is, does not mean the company's regulatory problems are over. Facebook still faces many investigations in the United States. The Securities and Exchange Commission opened an inquiry last year following the Cambridge Analytica revelations. Earlier this year, Facebook's data-sharing deals with other companies triggered a criminal investigation by federal prosecutors in the Eastern District of New York. The company also faces a lawsuit, filed in April by the US Department of Housing and Urban Development, over allegations that its advertising platform enabled housing discrimination. When rumors of possible antitrust investigations leaked last month, reports described the FTC as negotiating to lead future scrutiny of Facebook. And all that is before getting to Europe, where the General Data Protection Regulation has squarely put Facebook in the sights of European regulators.
If approved, the FTC settlement reported Friday would set a precedent for how federal regulators plan to approach tech giants in an era of growing awareness of data rights. But not even $5 billion will bring real change without comprehensive reform behind it.
What it fails to mention, in the description or in the video itself, is that purchases made through those links may also result in small payments to Brownlee himself. Princeton researchers say the links include referral codes that typically trigger such payments.
Federal Trade Commission guidelines for social media ads require influencers to clearly disclose whether they receive anything, whether money, gifts, or something else, that could affect how users perceive their mention of a company or product. Few do. Last year, an analysis of more than 500,000 YouTube videos and more than 2.1 million Pinterest pins, led by Princeton researchers, found that influencers rarely disclose their ties to such affiliate marketing links.
Even users savvy about influencer marketing can find it difficult to spot affiliate marketing links. A new browser extension published by some of the same Princeton researchers makes them more obvious.
The extension, dubbed AdIntuition, displays a bright pink banner warning users: "This video contains affiliate links. If you click on the highlighted links, the creator receives a commission." The extension was released this week for the Chrome and Firefox browsers. The researchers said they are interested in extending it to other browsers and platforms, such as mailing lists or blogs. Getting at content inside apps, even YouTube's own apps, poses additional challenges, said researcher Arunesh Mathur.
To the uninitiated, the links below Brownlee's video look ordinary. They start with "amazon.to" followed by a slash and a few random characters, suggesting truncated paths to one of Amazon's many listings. When clicked, they redirect the user to an Amazon product URL containing an identifying tag, which online retailers use to designate the affiliate marketing partner who drove the user's purchase.
Brownlee did not respond to a request for comment. But he is far from the only YouTuber to have failed to disclose a brand partnership. In last year's study, the Princeton researchers found that only 10 percent of the YouTube videos reviewed that contained affiliate links included written disclosure of compensation for the endorsement. Only a small number of those complied with FTC guidelines. The researchers said they have not updated the study since. YouTube did not respond to a request for comment. (WIRED includes affiliate links in some articles.)
One of the researchers, Marshini Chetty, director of the Princeton Human-Computer Interaction Laboratory, said the team wanted to find a way to warn users when they were viewing videos containing affiliate marketing. "The browser itself should be able to warn you of deceptive or misleading content, such as whether the content you see may be an advertisement," she said. Chetty, Mathur, and Michael Swart created AdIntuition to do just that.
The extension analyzes the descriptions of the YouTube videos a user visits to determine whether they likely contain signs of affiliate marketing partnerships. It checks all links (and their redirects) against a list of known affiliate marketing URLs and IDs, and scans the text for words, phrases, or symbols commonly used to point users to an influencer's personalized coupon code, which can also be used to make a purchase. For example, if I wrote "Check out all the influencer marketing anecdotes on Wired.com and use WIRED1 for 1 percent off a subscription!" in a YouTube video description, the extension would flag the phrase as possible influencer marketing because of the words "check out," according to Swart, the extension's main builder.
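The kind of check described here, matching links against known affiliate patterns and scanning text for promo-code phrasing, can be sketched in a few lines. The URL patterns and phrases below are invented stand-ins, not AdIntuition's actual rule list:

```python
import re

# Illustrative sketch (not AdIntuition's real code): flag a video description
# as likely affiliate marketing if it contains a known affiliate URL pattern
# or promo-code phrasing. All patterns here are invented examples.

AFFILIATE_URL_PATTERNS = [
    re.compile(r"amzn\.to/\w+"),      # short links of the kind retailers use
    re.compile(r"[?&]tag=[\w-]+"),    # an affiliate-ID query parameter
]
PROMO_PHRASES = re.compile(r"\b(check out|use code|coupon)\b", re.IGNORECASE)

def flag_description(description: str) -> bool:
    """Return True if the description shows signs of affiliate marketing."""
    if any(p.search(description) for p in AFFILIATE_URL_PATTERNS):
        return True
    return bool(PROMO_PHRASES.search(description))

print(flag_description("Buy it here: https://amzn.to/2AbCdEf"))   # True
print(flag_description("Use code WIRED1 for 1% off!"))            # True
print(flag_description("Thanks for watching, see you next week")) # False
```

The real extension also follows each link's redirects before matching, since shorteners hide the final affiliate URL; that network step is omitted here.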
Swart said the team had two main goals in creating AdIntuition. First, to show users how widespread affiliate marketing is: "If you download the extension and watch YouTube videos, it is highly likely that the extension will flag something." The other goal is more academic.
If an AdIntuition user consents, the extension sends a limited amount of anonymized data about the YouTube videos they watch to the Princeton team, which will use it to inform subsequent research on the prevalence of affiliate marketing in videos.
If AdIntuition flags a video as containing affiliate marketing, the extension collects the date and time of viewing, the type of affiliate marketing content present, the parts of the description the extension flagged, the video's ID, and an anonymous user ID associated with the browser that downloaded the extension. If AdIntuition flags nothing, the program saves only the user ID, date, and time, plus a note that a video of some kind was viewed, with no identifying details. Users can opt out of data collection by unchecking a prominently labeled "user data" box in the extension's settings.
Mathur says there has not been much research on affiliate marketing, and the Princeton team wants more data. "When we put this extension in the hands of users and they use it, we can see how affiliate marketing is used in the wild," Swart said. He suspects they will find that affiliate marketing is even more prevalent among popular videos; the initial study was conducted on a random sample of YouTube videos.
Less than a week after AdIntuition's release, it is too early to draw conclusions about its effectiveness, but the results of beta testing are promising. Before releasing the extension, the researchers surveyed 350 Amazon Mechanical Turk users. "We found that the group that had AdIntuition found it much easier to understand that, when affiliate marketing is present, the content of the video was influenced by the relationship between a brand and the content creator," Swart said.
Want to receive this two-minute roundup as an email every weekday? Sign up here!
Amazon has pledged $700 million to teach its workers to code
This morning, Amazon announced a $700 million initiative to retrain US employees for highly skilled, mostly technical jobs over the next six years. The initiative is a way for the company to burnish its image with American workers, but the effectiveness of the training remains to be seen. A notable absence: Amazon said the initiative would not cover training focused on sustainable energy or other climate-related jobs.
A group of card-skimming hackers has hit 17,000 domains and counting
You may not have heard of Magecart, but think of them as the credit card skimmers of the web. Thanks to poor corporate cloud security, the group has managed to compromise 17,000 domains in recent months. Although Magecart focuses on credit card numbers, it's not hard to imagine another hacker group using the same methods for even worse crimes.
A new device in sex research might surprise you: butt plugs. It turns out that building an orgasm detector that works for everyone is a daunting task, but with a butt plug turned research device, sex researchers have a way to consistently measure the physiology of orgasms.
Next week is Amazon Prime Day, and Google isn't having it. So it's holding its own sale on Google and Nest devices, and WIRED rounded up the best deals.
Which Amazon Fire tablet is best for you?
Elias helped test new software from researchers at Carnegie Mellon University and Facebook. He and another pro, Chris "Jesus" Ferguson, each played 5,000 hands of six-player poker online against five copies of a bot called Pluribus.
In the end, the bot came out ahead by a healthy margin. Along the way, Elias noticed something: although machines are often considered uninspired, this bot played far more boldly than your typical poker pro. "It will put in two or three times the pot, which humans don't do much," says Elias. "Those huge bets work great for it, and they're something I'm going to incorporate into my own game."
Pluribus matters not only because a new bot taught an old pro new tricks. The software is the first to beat top professionals at multiplayer no-limit Texas Hold 'em, considered the elite form of poker. A paper published Thursday in the journal Science describes how Pluribus took on Elias and Ferguson, and also won in scenarios where a single copy of the bot played against five human professionals for 10,000 hands.
"If you put this bot in with five elite professional humans, it will beat them and take their money," said Noam Brown, a researcher at Facebook's AI lab and co-creator of Pluribus. "That's really the gold standard when it comes to poker."
Michael Littman, a professor at Brown University, has worked on computer poker but did not participate in the project. Poker has long been considered a grand challenge for AI, with properties resembling many real-world situations. Unlike in chess, poker players must choose actions without knowing which cards their opponents hold, as is the case in politics, business, and war. The complexity of a six-player game had put multiplayer Hold 'em out of AI's reach; most prior work focused on two-player games. Now the last major milestone in poker AI has fallen, says Littman. "It's really the end of a multi-decade effort involving many researchers," he said.
Brown built Pluribus with Tuomas Sandholm, a Carnegie Mellon professor. Brown was previously a graduate student in Sandholm's lab, where in 2017 they built a bot called Libratus, the first software to beat professionals at the much simpler two-player form of no-limit Hold 'em.
"If you put this bot in with five elite professional humans, it will beat them and take their money."
Noam Brown, co-creator of the bot
Brown started the Pluribus project after joining Facebook, but he says the social media giant is not contemplating specific applications of the technology. "The goal is to do fundamental research on imperfect-information, large-scale multi-agent systems," he says, a phrase that also aptly describes Facebook's core service. In the longer term, the ideas tested in Pluribus could help autonomous cars predict the actions of other drivers or improve fraud detection algorithms, he said.
Sandholm, at CMU, says the commercial and even national-security value of software that can devise strategies is already proven. He has founded two companies to commercialize AI strategy techniques from his lab.
One of those companies, Strategic Machine, works on uses such as improving video game bots and helping set optimal prices that account for competitors' reactions. The other, Strategy Robot, signed a two-year contract worth up to $10 million with the Pentagon in 2018; Sandholm and the Pentagon decline to discuss it. But Sandholm says one of Strategy Robot's selling points is using ideas proven in poker and his other AI projects to make simulated, or even real, battlefield strategies more robust against enemy actions. Nothing from the Facebook project will be licensed to Sandholm's companies, although some of the techniques essential to Pluribus predate the project.
Pluribus is similar to Libratus in that it builds its skills by playing billions of hands against versions of itself. After each hand, the system examines what happened and how it could have played better. Any improvement it finds is folded into its main strategy.
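A toy version of this learn-from-self-play idea can be shown with regret matching, the building block of the counterfactual regret minimization family of algorithms that Pluribus builds on. The game here is rock-paper-scissors, not poker, and this is emphatically not Pluribus's actual code:

```python
import random

# Toy self-play via regret matching: after each hand, track how much better
# every alternative action would have done, and shift the strategy toward
# low-regret actions. The average strategy converges to the equilibrium.
random.seed(0)

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # payoff for row vs column player

def strategy_from_regrets(regrets):
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positives]

def train(iterations=20000):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        me = random.choices(range(ACTIONS), weights=strategy)[0]
        opp = random.choices(range(ACTIONS), weights=strategy)[0]  # self-play
        # Regret: how much better each alternative action would have scored.
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp] - PAYOFF[me][opp]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train()
print(avg)  # each entry is close to 1/3, the game's equilibrium
```

Pluribus's real training uses a far more sophisticated Monte Carlo variant of this idea over poker's enormous game tree, but the loop of "play, measure regret, update" is the same shape.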
The new bot can play a much more complex game than its predecessor largely because it is better at refining that main strategy by projecting the possible outcomes from a particular point in a hand, a capability called a search function. Brown and Sandholm's first bot tried to map out all the possible turns a game could take. But doing that for the nearly infinite possibilities of a six-player game would take too much computing power.
Instead, Brown and Sandholm developed a search function that looks only a few moves ahead at a time. To avoid unpleasant surprises, it also considers how the value of different actions would change if opponents shifted their strategies. This type of search had never before been adapted to a game like poker, where some information is hidden.
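The core idea of looking only a few moves ahead and then substituting an estimate for the rest of the game can be sketched as follows. The game and the value estimate here are invented stand-ins, and the sketch omits Pluribus's key refinement of weighing several possible opponent strategies beyond the search horizon:

```python
# Toy depth-limited search: expand the game tree only `depth` moves deep,
# then fall back on an estimated value instead of exploring to the end.
# The game below is a trivial stand-in, not poker and not Pluribus.

def depth_limited_value(state, depth, my_turn, expand, estimate):
    """Minimax truncated at `depth`; `expand` yields child states,
    `estimate` scores a state when the search budget runs out."""
    children = expand(state)
    if depth == 0 or not children:
        return estimate(state)
    values = [depth_limited_value(c, depth - 1, not my_turn, expand, estimate)
              for c in children]
    return max(values) if my_turn else min(values)

# Trivial game: a state is a number; each move adds or subtracts 1;
# higher is better for us. The estimate is just the number itself.
expand = lambda s: [s + 1, s - 1] if abs(s) < 3 else []
estimate = lambda s: s

print(depth_limited_value(0, 2, True, expand, estimate))
```

The payoff of truncating the tree is exactly the cost savings described below: the search budget grows with the chosen depth, not with the full length of the game.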
According to Brown, the new approach has the added benefit of requiring less computing power, which makes Pluribus relatively cheap to run. The bot needed only eight days of self-play on a single powerful server with 64 processor cores to master the game; bots developed for complex video games such as Dota 2 have needed weeks of training on hundreds of thousands of processors. "You could train something like this on a cloud computing service for around $150, which really makes it possible to apply this to other areas," Brown said. The comparable figure for Libratus, which trained against itself on a supercomputer for two months, would be on the order of a million dollars, he said.
One application the pair does not have in mind for their code is making money at poker. "We are not releasing the code," Brown said, "because of the impact it could have on the online poker community. We are trying to make this accessible to members of the AI community, not to those who want to build poker AIs."
Still, he admitted the techniques will probably spread anyway. Within a year, will other people have developed Pluribus-style bots? "I think it's entirely possible," Brown says.
Elias, the human poker champion, expects as much. Since Libratus arrived, he says, people play fewer high-stakes online games because bots have become more sophisticated. "If you play online, it's likely you're playing against a bot, or a human being helped by a bot," Elias says.
Elias said the latest AI advances should not dissuade poker pros and fans from playing, and could even improve the quality of the game. He was happy to help test Pluribus because he enjoyed the science of AI and the potential of new ideas, like the value of bigger bets. The bot's penchant for "donk betting," in which a player who ended one betting round with a call opens the next round with a bet, also challenges the conventional poker wisdom that the tactic is a bad idea.
Elias admits to a little sadness. The arrival of Pluribus, the ultimate poker bot, marks a historic turning point for the game. "I haven't done anything other than poker since I was 16 years old and have dedicated my life to it, so it's very humbling to be beaten by a machine," he says. "The first time AI wins is the last time man will win."
Tim Verheyden, a journalist with the Belgian public broadcaster VRT, contacted the couple with a mysterious audio file. To their surprise, they clearly heard the voices of their son and grandchild, captured by Google's virtual assistant on a smartphone.
Verheyden says he gained access to the file, and more than 1,000 others, through a Google contractor, part of a paid global workforce that reviews audio captured by the assistant from devices including smart speakers, phones, and security cameras. One recording contained the couple's address and other information suggesting they were grandparents.
Most of the recordings reviewed by VRT, including the one referring to the Waasmunster couple, were intentional; users asked for weather information or pornographic videos, for example. WIRED reviewed transcripts of files shared by VRT, which published a report on its findings Wednesday. According to the broadcaster, the assistant appears to have activated incorrectly in about 150 of the recordings, after misinterpreting something it heard as its wake phrase.
Some of those fragments captured phone calls and private conversations. They revealed that someone needed the bathroom, and included what seemed to be discussions of personal topics such as a child's growth rate, the healing of a wound, and someone's love life.
Google says it transcribes a fraction of audio from the assistant to improve its automated voice-processing technology. Yet the sensitive data in the recordings, and the instances of Google's algorithms eavesdropping unprompted, make some people uncomfortable, including the worker who shared the audio with VRT and some privacy experts. Those experts say Google's practices may violate the EU privacy rules known as GDPR, introduced last year, which provide special protections for sensitive data such as medical information and require transparency about how personal data is collected and processed.
VRT began talking with the Google contractor after a Bloomberg report describing how audio from Amazon's Alexa, unintentional recordings included, is transcribed by company staff and subcontractors, including in Boston, Costa Rica, and India. The Google contractor said he transcribed about 1,000 clips a week in Dutch and Flemish and was concerned about the sensitivity of some recordings. He showed VRT how he logged in to a private version of a Google app called Crowdsource to access the recordings assigned to him.
In one case, the contractor said, he worked on a recording in which a woman sounded as if she was in distress. "I thought physical violence was involved," he said in the English subtitles of VRT's video report. "These are real people you're listening to, not just voices." The contractor added that Google has not provided clear guidance on what workers should do in such cases.
In a statement, a Google spokesperson said the company had opened an investigation because the contractor had violated data security rules. According to the statement, Google relies on "language experts around the world" to transcribe audio from the company's assistant, but reviews only about 0.2 percent of all recordings, which are not associated with user accounts.
Google's reviewers may not see account data, but they can still hear very private information, for example about health. Jef Ausloos, a researcher at the Centre for IT and IP Law at the University of Leuven, Belgium, told VRT that Google's system may not comply with GDPR, which requires explicit consent to collect health data.
Michael Veale, a technology policy researcher at the Alan Turing Institute in London, says the disclosures do not appear to meet GDPR's requirements, even for data considered non-sensitive. The group of national data protection regulators responsible for applying GDPR has stated that companies must be transparent about the data they collect and how it is processed. "You have to be very specific about what you're implementing and how," says Veale. "I think Google hasn't done that because it would look creepy."
A Google spokesperson said the company would look into how it could clarify the way recordings are used to improve the company's speech technology.
Veale has filed a complaint about Apple's Siri with the Irish data protection authority, arguing that the service violates GDPR because users cannot access the recordings Siri makes. He says Apple replied that its systems handle the data with enough care that its voice audio files should not be considered personal data. Google and Amazon allow users to review and delete their recordings. Amazon now lets users say, "Alexa, delete everything I said today," to purge their history.
Amazon's privacy policies do not describe how reviewers handle Alexa audio files. Like Google, its Alexa privacy pages note that the assistant does not record all conversations, but do not explain that it may listen inadvertently. Apple's documentation likewise does not describe its review processes, although a security white paper indicates that some Siri audio files are kept for "ongoing improvement and quality assurance." Amazon and Apple declined to comment.
Corrected 19/10/19 at 7 pm ET: The Google contractor who spoke on Belgian TV said he reviewed 1,000 audio clips a week. An earlier version of this article said he reviewed 1,000 clips per month.