Google, the largest Internet search destination, can legally profit from the fraud committed on its platform and has little incentive to fix it. This is not a problem easily solved by ethical appeals or legal amendments, because current circumstances deeply entrench the status quo. Nevertheless, we can still do more to reduce Internet fraud.
Google dominates the search market, handling 63% of all queries on the Web. You might even expect that number to be higher, given that Microsoft picks up almost everything else and people hold a real bias against Bing. Despite growing problems of trust, nearly two-thirds of the world still chooses Google for search and Chrome as a browser. With such clear dominance of the search market, the company's platform is an ideal target for fraud, in the same way that viruses and other exploits target Windows systems more than any other. When you want to find the largest pool of victims, you start with the largest group of people.
Nevertheless, Google's efforts to fight fraud on its platform seem minimal and mostly ineffective. The Wall Street Journal recently published a story accusing Google of profiting from millions of fake and fraudulent listings on its platform, but what looked like news to many should have felt like déjà vu: it has been public knowledge for over five years. Back then, cybersecurity expert and hacker Bryan Seely used an exploit in Google Maps' business listings to change the FBI's and Secret Service's phone numbers and listen in on their calls. He successfully recorded 40 calls in a single day using this process to demonstrate the problem to the government. The government listened, told Google to stop it, Google did, and then it all started up again three months later.
Five years later, the WSJ tells the same story from another angle: that of the victims. Here is an example:
A man arrived at Ms. Carter's home in an unmarked van and said he was a contractor for the company. No one else was home. After working on the garage door, he asked for $728, almost double the cost of previous repairs, Carter said. He demanded cash or a personal check, but she refused. "I'm home alone with this guy," she said. "He could have killed me."
The repairman had hijacked the name of a legitimate business on Google Maps and listed his own phone number. He returned to Ms. Carter's house again and again, pressing her for payment for a repair so shoddy that it had to be redone.
This is hardly an isolated incident, and as the WSJ also points out, Google has acknowledged the problem:
Mr. Russell, of Google, said the company removed more than three million fake business listings in 2018. Last year, the company also disabled 150,000 accounts that uploaded the fabricated listings, he said, up 50% from 2017. Google declined to detail its countermeasures, citing security.
In the company's official response, Ethan Russell, product director for Google Maps, wrote that among the more than 200 million listings added to Google Maps over the years, only a "small percentage" are fake. He said that last year Google removed more than 3 million fake business profiles, more than 90% of which were deleted before users could see them. Google's internal systems identified 85% of the removed listings, and users reported more than 250,000. The company also disabled 150,000 user accounts deemed abusive, a 50% increase over 2017.
If this sounds familiar, it may be because Google recycled the same information the WSJ cited in its article. While Google isn't doing nothing at all, its efforts to fight fraud on its platform rest on a delicate balance between ethics and profit – two things that rarely coexist in many areas of Google's business.
Google has also drawn criticism for failing to police ad click fraud. Meanwhile, many have accused it of crushing competition to maintain the market dominance that lets it act this way. It has weathered numerous class actions as well as other notable disputes over these and similar issues. The company remains under constant fire for its dubious privacy practices and its persistent willingness to profit from the victimization of its users. After all, in 2018 alone Google generated $116 billion in advertising revenue, and solving many of these problems would reduce that revenue.
It may seem as though it should be illegal for Google to profit from millions of fraudulent listings while doing virtually nothing about them, but it isn't. Section 230 of the United States Code (formally 47 U.S.C. § 230) protects Google and any other company operating an online platform from legal liability for the activity of that platform's users. Derek Bambauer, professor of law at the University of Arizona, explains:
The key protection appears in § 230(c)(1), which says that no provider (e.g., Google) or user of an interactive computer service can be held responsible for content created by another information content provider. (The terms "interactive computer service" and "information content provider" are defined in Section 230.) So, if I publish a review of your restaurant on, say, Yelp, falsely claiming that the place is overrun with cockroaches and rats, Yelp is safe from a defamation suit. (You can still sue me, of course – I am the information content provider.) This remains true even if Yelp knows the review is false.
An online platform makes it harder to identify the perpetrators of fraudulent activity, leaving little legal recourse for the consumers and businesses involved. As Bambauer says, "suing Google is like tugging on Superman's cape: it's probably not the smartest move. Google has a fleet of very good lawyers. The best option for companies is to start with public pressure on Google." Unfortunately, public pressure has so far amounted to little more than TED talks and Wall Street Journal coverage.
Section 230 is a nearly impenetrable legal wall, with a few rare exceptions: intellectual property claims (such as trademark infringement), and the FOSTA-SESTA amendment, which carved out cases of sex trafficking. Although that last change may sound good at first glance, the amendment only broadly outlines the circumstances under which a company assumes responsibility for the content on its platform. In response, many companies (such as Craigslist and Tumblr) removed and banned sexual content entirely. Many have also argued that the amendment endangers sex workers and does little to solve the problem of sex trafficking.
In the meantime, more common problems such as fraud have not prompted lawmakers to amend Section 230 further. That may be just as well: further changes could lead companies to close off even larger portions of their platforms that have nothing to do with the social politics of human sexuality. Without careful consideration, changes to Section 230 could upend what we are able to do online, with consequences that affect the entire world. If we want to protect the freedom of the Internet, we must protect Section 230, but in doing so we also let companies like Google legally profit from plenty of fraud.
The current situation also encourages companies to do less in order to avoid creating new liabilities. For example, while Section 230 does not protect online retailers from product liability claims (for example, exploding cell phone batteries), Amazon uses its marketplace structure to work around the problem. After all, the company doesn't sell those products directly to the consumer; it simply takes a cut of the transactions it facilitates. The issue grew more complicated recently when a lower court ruled that Amazon could share liability for defective products sold by third parties – but only if it warns consumers of the problem. For Amazon, choosing the more ethical option of notifying consumers could cost it millions of dollars in mandatory refunds.
Google could face a similar problem. Greater efforts to warn consumers about the risk of fraud on its platform would likely create legal vulnerabilities that bypass the protections of Section 230. Moreover, any solution to the problem, partial or complete, directly affects Google's results, and Google has obligations to its shareholders as a publicly traded company. This creates a complex set of circumstances that push Google to continue its current behavior: developing better automated detection methods to root out fraud in advance and relying on users to report whatever slips through. Doing more could have an extremely negative impact on its business.
Should legal technicalities release Google from all liability? Is it ethical to operate and profit from a platform that can cause significant harm to many of its users? Finding the ethical answer to this dilemma may seem simple, but the reality is much harder to resolve. If Google attempts a solution that leaves it vulnerable to expensive litigation, that solution had better actually work. In the worst case, failure could mean the end of Google Maps.
While such an extreme outcome seems very unlikely, we did witness a mass shutdown of Internet services when FOSTA-SESTA became law. Companies didn't wait for legal action; they shut things down preemptively. Google has already threatened to shut down its news service in the EU over the adoption of Article 13 (later renamed Article 17) because of the potential costs.
Whatever happens, Bryan Seely says Google still has options:
It is not in Google's best interest to find all of the spam 100% of the time, as doing so takes a lot of time at that scale, with a lot of ambiguous or hard-to-judge content. Some of it is obvious, some isn't. Google should engage with and help the various countries and states develop a "legal" or "not legal" business database API, so that a listing can be checked against entities like the state departments that grant business licenses when a company comes online. The reason we issue such licenses to organizations is accountability and consumer protection; this is almost a complete bypass of that process.
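Seely's proposed license-registry check is speculative, and no such API exists today. A minimal sketch of the idea might look like this; the registry structure, field names, and `verify_listing` function are all my own assumptions for illustration, not an existing Google or government interface:

```python
# Hypothetical sketch of Seely's idea: check a proposed business listing
# against a state-issued license registry before publishing it.
# All names and data structures here are illustrative assumptions.

def verify_listing(listing, license_registry):
    """Return a publication decision for a proposed Maps listing."""
    record = license_registry.get(listing.get("license_id"))
    if record is None:
        return "reject: no matching business license"
    if record["name"].lower() != listing["name"].lower():
        return "hold: name does not match licensed business"
    if record["address"] != listing["address"]:
        return "hold: address does not match licensed business"
    return "publish"

# A toy registry standing in for a state licensing database.
registry = {
    "WA-12345": {"name": "Acme Garage Doors", "address": "100 Main St"},
}

print(verify_listing(
    {"license_id": "WA-12345", "name": "Acme Garage Doors",
     "address": "100 Main St"}, registry))   # a licensed business publishes
print(verify_listing(
    {"license_id": "WA-99999", "name": "Totally Real Repairs",
     "address": "1 Fake Ave"}, registry))    # an unlicensed one is rejected
```

Even a check this crude would force a fraudster to hijack a real license rather than invent a business out of thin air, which is exactly the accountability layer Seely argues the current process bypasses.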
For this to work, Google and the government would have to cooperate. In addition, Google could provide contextual tools for reporting fraud. For example, consider this basic search for an electrician:
It's worth noting that at the most common screen resolution, 1280×800, you will see only ads in your results unless you scroll. But let's look at the first option and how this company appears on Yelp:
If this company were committing fraud, how would you report the problem to Google? Each search result has a small, easy-to-miss menu, but even that doesn't provide the option:
You can find out why the ad was shown, but you can't report it from the only contextual option you have. Overall, Google has plenty of room to provide more useful tools to its users:
Google doesn't have to create liability by warning you about potential fraud; it could simply post a link to information on how to protect yourself and report a problem. Meanwhile, the company's most significant verification step involves mailing a postcard to the user who wants to claim an address and publish a listing. By setting up mail forwarding, virtually anyone can redirect mail from the address of an existing business to themselves and thereby claim that address on Google.
The same process works for creating new listings. Seely believes a single person could create hundreds of fake listings a day. I tried the process (without finalizing it) myself, and it took less than five minutes on my first attempt. That's 12 listings per hour, or 84 per workday if you take a long lunch break. With a little practice, and perhaps the help of account generators readily purchased on darknet markets for as little as $10, it's not hard to see how a dedicated person could manage hundreds of fake listings every day. Even with a purely manual process, you would exceed 100 listings by putting in a few extra hours.
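The throughput math above is easy to check. A quick back-of-envelope sketch (the five-minute figure comes from my single trial, so treat it as a rough estimate of effort, not a measured average):

```python
# Back-of-envelope math for manual fake-listing creation.
MINUTES_PER_LISTING = 5          # time for my one (unfinalized) attempt
WORKDAY_HOURS = 7                # an 8-hour day minus a long lunch

listings_per_hour = 60 // MINUTES_PER_LISTING        # 12 listings/hour
listings_per_day = listings_per_hour * WORKDAY_HOURS # 84 listings/workday

# Extra hours needed to clear 100 listings in one day:
extra_hours = (100 - listings_per_day) / listings_per_hour
print(listings_per_hour, listings_per_day, round(extra_hours, 2))  # 12 84 1.33
```

In other words, a purely manual attacker crosses 100 listings with barely an hour and a half of overtime, before any automation or purchased accounts enter the picture.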
Google could work to introduce better verification processes, like the one Bryan Seely suggests, but it's hard to believe it will when it won't even bother to print "DO NOT FORWARD" on its verification postcards (pictured above).
Google will probably never eliminate fraud on its platform entirely, and I don't know which additional methods would prove most useful, but it's obvious that Google has spent at least half a decade avoiding any effort that would create additional legal liability for the company. While it isn't fair to cast greed as Google's only motivation, its behavior demonstrates a preference for profit over customer protection in several specific cases.
Complex and widespread problems confront us daily in the news, and trust in the media continues to decline. Even when you can believe what you read, hear, or see, you may feel there is nothing you can do about problems as vast and complex as this one.
While fighting online fraud will require a lot of work from a lot of people, small efforts from each of us can bring us closer to a safer online experience for all. Here are some options to consider if you want to help Google do the right thing:
This may take time, even with this kind of exposure, but if we keep pressing Google to work harder on this problem, we can improve the situation. These are just small ways everyone can help; others, like Mark Baldino, have made greater efforts to combat this problem. You should do whatever feels right to you. If you have another positive way to motivate Google to reduce fraud on its platform, please share it in the comments. We only have a chance at this through our combined efforts.
Top image credit: Adam Dachis
The FTC announced that it has finalized a $25 million settlement with Office Depot for pushing its customers to buy malware removal services they never needed. Support.com, the company that cooperated in the scam, was also fined $10 million, for a total of $35 million. The two companies worked together to charge Office Depot customers up to $300 for malware removal services. Office Depot also owns OfficeMax (the two companies merged in 2013).
From 2009 to 2016, Support.com provided Office Depot/OfficeMax with a "PC Health Check Program." While it was presented as a PC diagnostic application, its actual purpose was to sell malware removal services, claiming infections where there were none. According to the FTC's complaint, the "PC Health Check Program did not and, by its very nature, could not 'find' or 'identify'" anything that could justify those results.
Instead of performing any kind of scan, the utility was designed to claim repairs were necessary if the consumer checked any of the boxes indicating generic problems with his or her computer.
The four problems presented were:
No scan was ever performed on consumers' PCs. If you checked any of those four boxes, the system automatically claimed to have detected malware. The FTC complaint states that the malware removal services Office Depot and OfficeMax then sold could run to more than $300 per service.
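Based on the FTC's description, the "diagnostic" logic amounted to nothing more than echoing the checkboxes back as a malware verdict. A sketch of that sham logic (the function and message strings are mine, not taken from the actual software):

```python
# Sketch of the sham "diagnostic" described in the FTC complaint:
# no scan ever runs; any checked symptom box yields a malware verdict.

def pc_health_check(symptoms_checked):
    """Mimic the reported behavior: checkboxes in, 'malware' out."""
    if any(symptoms_checked):
        return "Malware symptoms detected - repair recommended"
    return "No issues found"

# A brand-new, perfectly healthy PC whose owner checks only "PC is slow":
print(pc_health_check([True, False, False, False]))
```

Note what's absent: no file scan, no process inspection, no network check. The input is the customer's own complaints, which is exactly why the tool "found" malware on brand-new machines.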
Scamming customers this way is deeply unethical, and there is no excuse for it. Troubleshooting a machine in front of an untrained customer can be tedious and frustrating. That doesn't entitle anyone to lie to people in order to scam them.
I have worked in the computer repair and services industry for decades, both formally and informally. The application Support.com created wouldn't even be a good app if it worked as advertised. Of the four problems it claims to diagnose, only two are directly related to a possible malware infection. A PC that is too slow to use may simply be an old machine. A PC that crashes can be suffering from almost any kind of problem. In order to sell malware removal to people who didn't need it, Office Depot and Support.com designed a software solution that couldn't even properly tell them when their own scam wouldn't work.
We all know what one of Office Depot's troubleshooting steps would have been: wipe the operating system and start over. That is invariably what these kinds of services do. Conveniently, that step would also destroy any evidence that the system had never been infected with malware in the first place.
But that's partly what makes this so blatant. Office Depot didn't just think it could sell expensive remediation services to people who might not need them. It bet that its customers were ignorant enough to buy expensive repair services that would do nothing to solve their underlying problems, and that they would simply accept it. The scheme came to light in 2016, when an investigation by KIRO 7 in Seattle revealed that the utility would detect malware on brand-new computers. Technicians at several locations pushed the TV station's team to buy malware removal products for $180. At the time, Shane Barnett, an Office Depot technician, explained that technicians were required to run the service on every system that came into the store. From our own coverage in 2016:
"The program itself is mandatory," said Barnett. "It is not an option not to run the program. You must run it on every machine that comes into the building. Period." Barnett added that he and other technicians had raised the PC Health Check problems with senior management, who were not interested in their reports.
The FTC notes that Office Depot employees had been complaining about the program since at least 2012, with one employee saying, "I cannot justify lying to a customer or being pushed to do so just to earn the store a few extra dollars." Even so, the company continued to mandate the use of its fake software to scam people out of their money. The FTC will use the $35 million to reimburse affected customers. We suggest doing business with companies that haven't actively demonstrated their willingness to defraud people.
These days, it's almost pointless to answer the phone for numbers you don't recognize.
With a little luck, robocalls may slow down a bit in the near future. The Federal Trade Commission announced on Tuesday that it had settled charges with four phone-scam operations in the United States. The FTC collectively credits these callers with "billions" of fraudulent phone calls.
The four organizations that settled with the FTC were NetDotSolutions, Higher Goals Marketing, Veterans of America, and Pointbreak Media. All of them have been banned from further telemarketing scams using autodialed calls and will pay penalties for their actions.
Each of the four defendants named by the FTC ran a slightly different scam, but anyone with a phone in the United States should take note. They used autodialing technology to place billions of phone calls trying to sell, among other things, debt relief programs.
Veterans of America, for example, attempted to scam people into donating goods such as cars and boats in the name of helping nonexistent veterans' charities. In fact, it was run by a man named Travis Deloy Peterson, who simply sold what people gave him.
Pointbreak Media, meanwhile, pretended to represent Google and promised businesses prominent placement in Google search results.
There is plenty of anecdotal evidence that robocalls have been on the rise in recent years, and the current numerical estimates are staggering. A study by YouMail, a company that makes robocall-blocking software, found that nearly 48 billion robocalls were placed in the United States in 2018, up from more than 30 billion the year before.
With so many robocalls each year (and such a large increase), it's fair to wonder how much of a dent the FTC's action against these scams this week will really make. Last year, the Federal Communications Commission slapped a fine on a single man responsible for 96 million robocalls, so it may be a while before the problem is permanently solved.
Advertisements are a necessary part of modern technology, but they have downsides even when they aren't being used to scam people. Some ads in Android apps, however, have been scamming users. An investigation by the ad-analysis firm Protected Media and BuzzFeed News found that fraudsters had exploited a popular ad network to serve hidden video ads in apps, draining the batteries of the phones that displayed them.
Protected Media detected the fraudulent ads on Android phones, but the scheme has nothing to do with Android itself. The apps involved aren't even to blame. Developers often use banner ads to monetize free apps, and that's how the fraudsters found their way onto phones.
The scam ads were mainly delivered through Twitter's MoPub platform. Each ad used a generic image featuring a well-known company such as McDonald's or Disney. The ads ran in a large number of applications, but the report did not include exact numbers.
Although an app's user sees only a single static banner ad, the ads contained multiple layers of video ads that played automatically. These were hidden from the user, but the videos still "played" and were recorded as impressions. The advertiser paid the middleman for ads no one ever saw, and the user was none the wiser. Well, they might have stayed that way had it not been for the battery drain.
Investigators traced the scam to a company called Aniview, which denies any involvement. It says an unknown third party associated with a now-shuttered subsidiary hijacked its tools. Whatever the case, Aniview's tracking code and video technology were found in the fraudulent ads. Even though the videos were not visible on screen, the phone still played them. As you can imagine, playing three videos in the background is terrible for battery life. Users blamed the apps and left negative reviews.
This scam has been exposed, but a recurrence is almost certain given the way ads are sold. With so many layers of ad networks and advertisers, there are too many opportunities for someone to slip malicious code into an ad. There have been numerous instances of ads perpetrating fraud and even spreading malware. Draining your battery isn't nice, but at least this scam didn't involve any hacking attempts.
As BuzzFeed reports, in-app advertising banners have been hijacked on a large scale to generate revenue for fraudsters working in the digital advertising industry. The ones who suffer are consumers and their devices, but also the app developers, who field complaints about how quickly their apps drain users' data and batteries.
The fraud was discovered by two ad-fraud detection firms, Protected Media and DoubleVerify. Fraudsters buy inexpensive in-app banner display space, then hide autoplaying videos behind the banner users actually see. The video is never seen by anyone, but because it plays, it is recorded as viewed and therefore generates revenue for the fraudsters, far more than the banner ad alone would. Big brands foot the bill, unknowingly paying for zero exposure of their products.
The video below shows how fraudulent video ads are hidden behind banners:
As for the scale of this fraud, DoubleVerify has estimated it at 60 million fraudulent video ad calls per month. The ad hijacking took place on Twitter's MoPub advertising platform, and the Israeli company Aniview, which specializes in video advertising solutions, was identified as one of the sources of these ads. A subsidiary of the company was also identified by Protected Media as playing a role.
Aniview denies any direct involvement and blames "a malicious and unnamed third party" that took advantage of ads and code created by one of Aniview's affiliates. Aniview CEO Alon Carmel told BuzzFeed that the company "did not knowingly participate in any fraudulent activity" and that immediate action was taken.
Aniview won't say who the malicious third party is, but it has since removed a number of employees from the company's website. Among them are Tal Melenboim, co-founder of Aniview, and two employees who held executive positions at OutStream Media. Melenboim has since denied being part of any illegal activity at Aniview.
Because Twitter's MoPub advertising platform was used, Twitter has launched its own investigation after verifying the activity reported by Protected Media. If Twitter traces this back to Aniview, there will surely be consequences for the advertising company.
It is important to point out that this type of fraud is not new, but the surge in activity observed in October prompted the ad-fraud firms to take a closer look. Nor is Aniview the only company identified as a participant; several other companies continue to serve these hidden video ads in the digital advertising market. One company contacted by Protected Media responded by complaining that everyone was doing it and that it was simply unlucky to be the one caught.
This article was originally published on PCMag
It's an odd detail, but once Cheryl Gafner, a former Theranos receptionist, points it out, it's hard to shake: Elizabeth Holmes, the fraudster behind the now-defunct blood-testing company Theranos, does not blink.
Or at least when she blinks, it's done with purpose, a cold, calculated gesture that accompanies a specific word or phrase. For Holmes, blinks are apparently meant to convey sincerity and build trust, not an involuntary consequence of having eyes.
It is disturbing behavior to watch, especially in the context of Holmes' dangerous multimillion-dollar fraud. But noticing creepy details like this one is an essential part of the visual experience of The Inventor: Out for Blood in Silicon Valley.
Directed by Alex Gibney (Going Clear), the new HBO documentary follows Holmes, the tenacious entrepreneur, from her humble, Thomas Edison-obsessed beginnings to her indictment on federal charges last June.
Holmes is responsible for one of the most sophisticated and dangerous schemes ever run in Silicon Valley. But The Inventor does more than simply report her misdeeds to make its point.
Scene by scene, the documentary dwells on Holmes' troubling behavior, her wavering voice and hypnotic eyes, heightening the fear factor associated with her power and intelligence.
'The Inventor' forces you to watch Holmes lie directly to your face.
The PR photos and footage Holmes commissioned as the company flourished appear constantly throughout the film's two hours. More than once, these images are layered to create a hall-of-mirrors effect that presents Holmes as an omnipresent terror.
Using dozens of interview clips with Holmes – each seeming to zoom a little closer than the last – The Inventor asks you to sit down, face to face, with this apparently well-intentioned entrepreneur. Then, once you understand what she has done, it forces you to watch Holmes lie directly to your face, her wavering voice always seeming on the verge of cracking.
In their original form, these images and clips were meant to be hopeful, flattering, business-approved glimpses of the woman behind a groundbreaking health care technology.
In The Inventor, they are menacing reminders of the threat Holmes posed to every one of her unwitting consumers.
That's not to say The Inventor's portrait of Holmes is without nuance.
Throughout the film, interviewees close to Holmes, as well as the narrator, offer various explanations for her behavior, ranging from an overly optimistic desire to do good to a state of self-imposed delusion that may have left her incapable of telling the truth. Although these motivations add depth to Holmes' too-close-for-comfort presence, they are barely enough to make her human.
The Inventor's creative and engaging portrait of former CEO Elizabeth Holmes doesn't tell the heartbreaking story of a greedy yet well-meaning woman lost in the possibilities of her own greatness. Instead, its creators have delivered a modern, well-researched, fact-based monster movie – one that is effective, unsettling, and above all scary.
The Inventor debuted on HBO on Monday, March 18, at 6 p.m. ET.