Twitter has announced a new reporting feature for tweets that attempt to spread false information about the electoral process. The new option appears by default when a user chooses to report a tweet.
When reporting a tweet, a user can simply select the "It's misleading about voting" option and then explain how the tweet is trying to manipulate voters. The update is intended to streamline the process, making this kind of content easier to report and easier to surface to Twitter's security team.
Twitter has already taken action against tweets and accounts that tried to manipulate voters. The company has removed swarms of bots spreading electoral misinformation as part of its broader effort to crack down on election interference.
The company announced that the new feature would roll out first in India on April 25 and then in the EU on April 29, just in time for the elections. Twitter plans to continue deploying the feature globally throughout the year.
Facebook has a new fact-checking partner, Axios first reported Thursday: CheckYourFact.com, the fact-checking arm of The Daily Caller, a right-wing website co-founded by conspiracy-theory peddler Tucker Carlson.
CheckYourFact.com describes itself as "editorially independent" of The Daily Caller. Its funding comes from The Daily Caller's operating budget, advertising revenue, and a grant ultimately financed by conservative political groups, according to Media Matters, a progressive media watchdog.
Facebook accepted CheckYourFact.com as a partner because it is accredited by the Poynter Institute's International Fact-Checking Network (IFCN). Accreditation qualifies organizations for Facebook's fact-checking partnership, a program that relies on third parties to evaluate the accuracy of articles posted to Facebook. If an article is deemed misleading, Facebook demotes it in the News Feed and in search results.
Despite Poynter's blessing, skepticism and anger over the new partnership abound. Can an organization affiliated with The Daily Caller, a swamp of misinformation and bias, really get the facts right?
"Facebook adding The Daily Caller as an official fact-checker is terrifying," Angelo Carusone, president of Media Matters, told Mashable. "At a time when newsrooms continue to cut staff, right-wing media can seize the fact-checking vertical and exploit gaps in the landscape."
The Poynter evaluator found that CheckYourFact.com ticked all of the IFCN's boxes. Poynter requires "impartiality and fairness, transparency of sources, funding and organization, methodology, and an open and honest correction policy" for IFCN approval.
But as Carusone suggests, even if it technically checks the boxes, what business does The Daily Caller have in fact-checking at all? This is a website that has published Charlottesville apologia, climate change denial, and Pizzagate conspiracy theories; it would be exceedingly generous not to question its reasons for setting up a fact-checking arm. If it were interested in the facts, wouldn't it, you know, already check them?
Facebook's fact-checking program has been criticized as an ineffective public-relations effort rather than a robust solution to the online spread of incendiary fake news. Including The Daily Caller's fact-checking arm only adds to the perception that the program is a way for Facebook to claim fairness and bipartisanship while allowing bad-faith actors to game the system.
As with Facebook's fact-checking program as a whole, CheckYourFact.com looks more like a way for The Daily Caller to erect a veneer of legitimacy. It also gives the site a potential say in what gets ranked up and down on Facebook.
"Facebook keeps falling for the right's work-the-refs gambit and remains far too willing to do counterproductive, and sometimes even bizarre, things to appease right-wing critics," Carusone said. "Its systems remain inadequate and too susceptible to manipulation."
Most recently, the company has addressed misinformation with product changes that surface fact checks alongside some video search results. Anti-vaccination videos, deemed likely to harm the public, have been demonetized. YouTube even promised to rework its recommendations so that they stop actively promoting extremist and conspiratorial content.
There is no doubt that YouTube takes platform safety more seriously than ever before, and the company deserves credit for that. However, a report from Bloomberg is now highlighting how its own employees consistently flagged these issues long before YouTube decided to address them. And while YouTube emphasizes its focus on these problems over the last two years, one such rejected proposal could have helped stifle the proliferation of conspiracy theories about last year's Parkland shooting.
According to former YouTube employees who spoke to Bloomberg, the company was repeatedly warned about toxic content and misinformation on the service but dismissed those concerns in favor of platform growth. In February 2018, YouTube employees proposed limiting recommended videos to legitimate news sources in response to conspiracy theories claiming that Parkland shooting survivors were crisis actors. According to Bloomberg, the proposal was rejected.
Some former high-level YouTube employees have even cited the spread of this type of content to explain their departure from the company.
One of YouTube's earliest employees, who worked there before Google acquired the video site in 2006, explained how the site once moderated and demoted problematic videos, citing content that promoted anorexia as an example. He pointed out how things seemed to change once engagement became the priority.
With that drive to increase engagement and revenue, toxic videos benefited from the changes. The problem became so well known that, according to Bloomberg, YouTube employees had a nickname for this brand of content: "bad virality."
Concerns about videos skirting the company's hate-speech policies, the recommendation of misinformation, and the promotion of extremist content were ignored. Proposed policy changes to address these issues were also rejected. The company went so far as to tell staff outside the moderation teams to stop looking for problematic content to report.
As YouTube notes in its response to the Bloomberg report, the company has begun to take these issues more seriously. YouTube has been especially aggressive about toxic content involving children. The company has even adopted policies similar to the proposal floated after the Parkland shooting.
The changes underway at YouTube are definitely a good thing for the platform's future. But it is clear that much of this could have been done sooner.
Most of the accounts Facebook deleted this time were related to Russia. However, the company said the majority of those accounts had been removed for spam-related activity. In total, the social network removed 1,907 Russia-linked pages, groups, and accounts. The "small portion" of accounts set up to spread misinformation mostly posted content about political issues and the conflict in Ukraine. About 1.7 million accounts were members of the 1,757 Facebook groups removed. The company also removed 86 pages and 64 Facebook accounts.
In addition to the Russia-related accounts, Facebook announced that it removed 513 pages, groups, and accounts connected to Iran for engaging in coordinated inauthentic behavior. The Iran-linked pages turned out to be more openly political than the latest batch of Russian accounts. Facebook found that many of these accounts mimicked real political groups and presented themselves as legitimate media organizations. Many of the posts published by these accounts addressed tensions between India and Pakistan, as well as between Israel and Palestine. Other frequently covered topics included the conflicts in Syria and Yemen, the Venezuelan crisis, and terrorism. According to Facebook, the operation was widespread across the Middle East and North Africa.
In total, Facebook removed 158 pages, 263 Facebook accounts, 35 groups, and 57 Instagram accounts connected to Iran. According to the company, approximately 1.4 million accounts followed one or more of these pages. The accounts spent about $15,000 on Facebook ads between December 2013 and February 2019.
Facebook also said it removed 212 pages, groups, and Facebook accounts linked to Macedonia and Kosovo for coordinated inauthentic behavior. These accounts shared beauty tips and celebrity news, as well as content promoting various political groups in the United States, United Kingdom, and Australia. About 685,000 accounts followed one or more of the 40 pages linked to Macedonia and Kosovo. The accounts spent approximately $5,800 on Facebook ads between October 2013 and March 2019.
Facing growing criticism over the years, Facebook stepped up its war on misinformation in 2018. The company has specifically targeted pages, accounts, and groups engaged in "coordinated inauthentic behavior." Facebook defines this as users or organizations setting up "networks of accounts" in order to "mislead others about who they are or what they are doing."
In recent months, the social network has taken down multiple Iran-linked networks on its platform. Prior to this latest purge, Facebook had already deleted more than a thousand pages and accounts connected to Iran in total.
In total, the search giant removed 2.3 billion bad ads last year. That works out to more than 6 million bad ads removed every day.
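The per-day figure follows directly from the annual total; here is a quick sanity check (the plain 365-day divisor is my assumption, since Google's report does not state its methodology):

```python
# Sanity-check Google's reported figures: 2.3 billion bad ads removed
# over the year, versus "more than 6 million" removed per day.
ads_removed_2018 = 2_300_000_000
per_day = ads_removed_2018 / 365  # assuming a plain 365-day year

print(f"{per_day:,.0f} bad ads removed per day")  # about 6.3 million
```

So the round annual number comfortably supports the "more than 6 million a day" framing.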
Google also terminated nearly one million bad advertiser accounts, nearly double the 2017 figure, and cut off nearly 734,000 publishers and app developers from its networks. The company also removed ads from nearly 28 million web pages and 1.5 million apps.
In the report, Google details specific sectors and niches that required new policies in 2018. The company gives examples, such as for-profit bail bond providers, which it banned from advertising on its network. Google says the decision was made after evidence showed these providers were exploiting vulnerable communities. The company removed more than 531,000 bail bond ads last year.
In total, Google created 31 new ad policies to stop malicious ads in problem areas including ticket resellers, third-party bail bond services, and drug treatment facilities. For example, the company now bans ads promoting drug treatment services unless the advertiser is a certified drug treatment provider. It also removed about 58.8 million phishing ads from its network.
On the misinformation and political front, Google says it verified 143,000 election ads in the United States under the verification requirements it put in place last year. The company removed ads from about 1.2 million pages, 22,000 apps, and 15,000 sites for violating its policies against misinformation, hateful content, or low-quality content.
The internet abounds with scam artists and malicious actors looking for targets. Google's plan for deterring malicious advertisers and publishers on its networks is quite simple: remove their economic incentive. The company appears to be succeeding at removing ads that violate its rules. Judging by Google's updated policies, the real challenge is keeping up with the evolving methods of these bad actors.
The Facebook-owned messaging app is testing a new "search image" feature that allows users to easily submit an image they have received to Google. The tool was discovered in the latest beta of WhatsApp for Android.
At the touch of a button, WhatsApp will send a photo from the app to the search giant. The messaging app will then direct users to a Google search results page displaying "similar or equivalent images" from elsewhere on the web.
This information can help users determine whether an image is real or has been photoshopped, or even uncover the backstory of an unmodified photo that has been stripped of its context.
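In principle, such a feature only needs to hand the image to Google's reverse image search and open the results page. Here is a minimal sketch of that flow, assuming Google's long-standing `searchbyimage` URL endpoint and a publicly hosted image; WhatsApp's actual implementation is not public and presumably uploads the image bytes directly:

```python
from urllib.parse import urlencode
import webbrowser

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search URL for a publicly hosted image.

    Assumes Google's classic `searchbyimage` endpoint; this is an
    illustration, not WhatsApp's real plumbing.
    """
    query = urlencode({"image_url": image_url})
    return f"https://www.google.com/searchbyimage?{query}"

url = reverse_image_search_url("https://example.com/suspicious-photo.jpg")
print(url)
# webbrowser.open(url)  # uncomment to open the results page in a browser
```

The results page then shows where else the image appears on the web, which is exactly the signal a user needs to judge whether a viral photo is old, doctored, or miscaptioned.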
As the most popular messaging platform in the world, WhatsApp is a prime target for fake news. In India, where the app is hugely popular, misinformation spreading on WhatsApp has even turned deadly.
WhatsApp has taken action over the past year to curb the platform's misinformation problem. The company has developed an initiative focused on stopping the spread of false information on the service, including a dedicated effort to tackle fake news in India.
The messaging app itself has received many updates aimed at combating the spread of false information. WhatsApp has released new controls for groups, imposed message-forwarding limits, and proactively bans millions of suspicious accounts each month.
With the new "search image" feature currently in beta, WhatsApp is now looking to combat fake information disseminated via photos.