In recent weeks, FaceApp – the AI-powered photo-editing app for smartphones – has become the source of a major data-privacy controversy that appears to have been largely overblown. Still, the episode highlights a clear and common problem: the rights we may give up with potentially any application we allow on our devices.
What happened with FaceApp?
On July 14, developer Joshua Nozzi tweeted an accusation (since deleted) claiming that FaceApp appeared to upload all the photos in a user's library, not just the photos a user selected for use with the app. He also pointed to the company's Russian roots, playing on widespread concerns about illicit Russian involvement in US data affairs. Within a few days, a pseudonymous security researcher going by Elliot Alderson responded to 9to5Mac's coverage of Nozzi's accusation with contrary evidence. FaceApp also replied to 9to5Mac with a statement to similar effect. Here is the abridged version:
We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn't upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.
FaceApp performs most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.
Even though the core R&D team is located in Russia, user data is not transferred to Russia.
Although 9to5Mac went out on a limb by publishing Nozzi's accusation, and his claims have since been proven false, Chance Miller – the author of the article – raises an important point:
It's always wise to take a step back when applications like FaceApp go viral. While they are often popular and can provide humorous content, they can have unintended consequences and privacy implications.
Nozzi's false accusation seems more an honest mistake than a malicious act. Miller's point shows why we are more prone to panic when surrounding circumstances paint a picture of danger. While we should always take the time to find evidence for our claims before publishing them, so as to avoid spreading panic unnecessarily, it is not hard to see how someone could make this mistake when people are on high alert for exactly this kind of activity.
Is any application really safe to use?
Although FaceApp did not vacuum up anyone's photo library to build a massive database of US citizens for the Russian government – or whichever conspiracy theory you prefer – the incident highlights how readily we grant broad permissions the moment we download an app.
When an application requests access to your smartphone data, it casts a wide net by necessity. Photo apps don't request the right to save photos, or to access only the photos you explicitly choose; they request access to your entire photo library. You can't grant access to the microphone, the camera, or anything else through granular permissions that let you control what the application can do. On top of that, smartphones provide no simple way to see what applications are doing. Logs of any kind, or a way to monitor network activity, are not available to the average user.
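To make the all-or-nothing model concrete: on Android at the time, an app that wanted to read even a single user-selected photo from shared storage typically had to declare the blanket storage permission in its manifest. A minimal sketch (the package name is made up for illustration):

```xml
<!-- AndroidManifest.xml (sketch): this hypothetical photo app must request
     read access to ALL of shared storage just to load one chosen photo;
     this permission model offers no "only the photos I pick" option. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.photofilter">
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
</manifest>
```

Once granted, that permission covers every image in shared storage, not just the ones the user selects – exactly the coarse-grained access described above. (Later Android releases added scoped storage and system photo pickers that narrow this.)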
For this reason, most users have no way to know whether an application has betrayed their trust. Until we have better control over what our apps can and cannot access on our devices, we need to assume the worst-case scenario every time we download one. Unless a person has the knowledge and willingness to regularly monitor the activity of their applications, and to read (and understand) the terms of service of each application in their entirety, that person cannot rule out the malicious use of their data. After all, Facebook was just fined $5 billion for enabling the entirely consensual leaking of user data (not that it mattered), and much of that happened through a person's association with a user who had downloaded the problematic application.
Although the most commonly used apps don't end up in such controversies, data leaks happen frequently enough that we have to remember what we risk with every contribution of our personal information. Every permission granted, every photo uploaded, and every piece of information provided to an application – whether it identifies us directly or indirectly – gives a company new information about us, over which it often claims ownership through its terms of service. They may or may not use the collected data for unsavory purposes, but they grant themselves that right through a process they know almost everyone will ignore. Companies need broad language in their legal agreements to protect themselves. Unfortunately, that legal necessity also creates a framework for exploiting users when a company releases an application for data-collection purposes.
Granular permissions on smartphones are a step toward addressing this issue, but they won't stop applications from requesting broad permissions and demanding access as the price of admission. By now, most of us know that when we don't pay with our dollars, we pay with our data – the troubling difference lies in the exact cost. Most people probably wouldn't mind FaceApp using their selfies to improve its quality of service, but they might feel differently if that data were used for some other purpose. Even if we don't hand over our entire photo libraries, and even if FaceApp deletes images after 48 hours, that still leaves plenty of time to profit from the data users voluntarily provide. Even though they appear to have no malicious intent, we don't know exactly what our data is costing us, because we don't know how they use it.
The same applies to almost every application we download. Without transparency, we pay a price that is set in secret. Repeated across many applications, it becomes very difficult to trace the source of potential problems. FaceApp appears to operate like any other application: requesting broad data permissions by necessity and limiting its liability through a terms-of-service agreement. With every application, we must ask ourselves whether the service provided is worth its unknown cost.