No matter how futuristic (and helpful) some technologies may be, there's always a chance that sooner or later they will fall prey to restrictions. According to some thought leaders in the industry, the EU's new data privacy rules – the General Data Protection Regulation (GDPR) – will have a negative impact on the development and use of artificial intelligence in Europe. Lilian Edwards, a law professor at the University of Strathclyde in Glasgow, comments on the situation: "Big data is completely opposed to the basis of data protection. I think people have been very glib about saying we can make the two reconcilable, because it's very difficult." AI professionals also fear that the regulation will put EU companies at a competitive disadvantage relative to their rivals in North America and Asia. If you recall the Cambridge Analytica scandal that hit the news earlier this year, it is easy to see where the law has its roots. In a nutshell, this big data company was reported to have harvested the information of nearly 90 million Facebook users without their knowledge and to have failed to delete it when asked to. One may recall the famous line attributed to Rothschild: he who owns the information, owns the world. Today, we can say that the one who owns the data, owns the world.
What's the GDPR, anyway? While some people label it the best lullaby in history, the General Data Protection Regulation is an EU-wide data protection law that supersedes the various national privacy laws. The EU adopted the law two years ago, but it came into force on May 25, 2018. Let's outline its most significant points:
- Companies collecting personal data must explain what it will be used for and may not use it for anything else;
- They must minimize the amount of data they collect and keep, and there is a time limit on how long they may retain it;
- Users have the right to request information on exactly how their personal data is used;
- Companies must notify their customers of data breaches within 72 hours;
- Customers can also demand the complete removal of their data;
- If personal data is used to make automated decisions, companies must be able to explain the logic behind the decision-making process.
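To make the last point more concrete, here is a minimal Python sketch of one common way to explain a single automated decision: listing each input feature's contribution to a linear model's score. The model, feature names, and data are hypothetical assumptions for illustration; real explanation requirements go well beyond this.

```python
# Hypothetical sketch: explaining an automated credit decision by listing
# each input feature's contribution to a logistic-regression score.
# Feature names and data are illustrative, not taken from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]
X_train = np.array([[55, 0.30, 4], [23, 0.65, 1], [70, 0.20, 9], [30, 0.55, 2]])
y_train = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X_train, y_train)

applicant = np.array([40, 0.45, 3])
decision = model.predict(applicant.reshape(1, -1))[0]

# Per-feature contribution to the decision score (coefficient * value),
# a simple way to show "the logic behind" a single automated decision.
contributions = model.coef_[0] * applicant
print(f"Decision: {'approved' if decision else 'declined'}")
for name, contrib in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"  {name:>15}: {contrib:+.3f}")
```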
And here comes the less expected part. The GDPR is, by nature, a European law, but it also applies to foreign companies that do business in Europe. For instance, an American company whose employees or customers are in Europe is covered by the GDPR too. Some experts believe that part of this design is strategic. As the Financial Times explains: "the EU tends to write rules for itself and let the gravity of its huge market pull other economies into its regulatory orbit. Businesses faced with multiple regulatory regimes will tend to work to the highest standard, known widely as the 'Brussels effect'."

As always in life, the weak have fewer chances to survive: small and mid-size companies trying to build AI systems lack the vast streams of incoming first-party data that the big tech companies enjoy, and those never-ending data streams from their own customers make it much easier for the giants to obtain consent. In addition, companies dealing with special categories of sensitive data, such as medical records or children's data, should be especially careful. "Big data challenges purpose limitation, data minimization and data retention – most people never get rid of it with big data," continues Edwards. "It challenges transparency and the notion of consent since you can't consent lawfully without knowing to what purposes you're consenting… Algorithmic transparency means you can see how the decision is reached, but you can't with machine-learning systems because it's not rule-based software."

Now let's look at some possible effects of the GDPR on AI businesses:
– The overall cost of AI will go up.
Under Article 22 of the law, companies are obliged to have humans review certain algorithmic decisions. This restriction is regarded as the most painful, as it will lead to a significant rise in labor costs. It also works in the exact opposite direction from the original idea: after all, the main reason for developing AI is to automate functions that would otherwise be much slower and costlier.
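As a rough illustration, here is a minimal Python sketch of routing automated decisions to a human reviewer instead of applying them directly. The names (Decision, ReviewQueue, decide) and the significance check are hypothetical, not drawn from the regulation or any real compliance framework.

```python
# Hypothetical sketch of an Article 22-style review step: automated decisions
# with a significant effect on a person are routed to a human reviewer
# instead of being applied directly. All names here are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    applicant_id: str
    score: float
    outcome: str            # "approve" / "decline" / "pending_review"
    reviewed_by_human: bool = False

@dataclass
class ReviewQueue:
    pending: List[Decision] = field(default_factory=list)

    def submit(self, decision: Decision) -> None:
        decision.outcome = "pending_review"
        self.pending.append(decision)

def decide(applicant_id: str, score: float, queue: ReviewQueue,
           significant_effect: bool = True) -> Decision:
    decision = Decision(applicant_id, score, "approve" if score >= 0.5 else "decline")
    # If the decision significantly affects the person, it is not applied
    # on a purely automated basis: escalate it to a human reviewer.
    if significant_effect:
        queue.submit(decision)
    return decision

queue = ReviewQueue()
d = decide("user-42", 0.37, queue)
print(d.outcome, "| awaiting human review:", len(queue.pending))
```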
– Damaged AI systems.
As a rule, AI systems that "learn" from the data they process must "remember" the data they were trained on in order to sustain the rules derived from it. However, Article 17 gives users the right to have that data erased. Honoring such requests affects the AI system's behavior and, consequently, can make it less accurate or, in the worst-case scenario, break it entirely.
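A minimal sketch of what honoring an erasure request might mean for a trained model: drop the user's records and retrain from scratch. The dataset, the user_id column, and the retrain-from-scratch approach are all illustrative assumptions; real systems would also have to delete backups, logs, and derived copies.

```python
# Hypothetical sketch: honoring an Article 17 erasure request by dropping a
# user's records and retraining. The data and "user_ids" column are invented;
# auditable deletion across all copies of the data is out of scope here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
user_ids = rng.integers(0, 20, size=200)   # which user each record belongs to
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("accuracy before erasure:", model.score(X, y))

# User 7 invokes the right to erasure: remove their rows and retrain from scratch.
keep = user_ids != 7
model_after = LogisticRegression().fit(X[keep], y[keep])
print("accuracy after erasure :", model_after.score(X[keep], y[keep]))
```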
– Increased regulatory risks.
Again, there's a strong suspicion that small and medium-sized businesses won't fully understand the regulation. Thus, they may become targets of legal action, which will add extra costs to their budgets. In addition, the fines are draconian, to say the least: up to 4% of a company's global annual turnover or €20 million, whichever is greater. Naturally, such a fine hits a small business harder, so the chance of it adopting AI becomes even lower.
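For a sense of scale, here is a tiny sketch of the "whichever is greater" rule; the turnover figures are made up for illustration.

```python
# Tiny sketch of the GDPR maximum-fine rule: up to 4% of global annual
# turnover or 20 million euros, whichever is greater. Turnovers are invented.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(0.04 * annual_turnover_eur, 20_000_000)

for turnover in (5_000_000, 400_000_000, 2_000_000_000):
    print(f"turnover {turnover:>13,} EUR -> max fine {max_gdpr_fine(turnover):>13,.0f} EUR")
```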
So the question is: "What can they do to survive in these circumstances?" First of all, a company should become aware of what data it controls and where that data is stored. This job will require the collective effort of the whole organization: top management, the IT department, and the human resources team among them. Courtney M. Bowman, a Proskauer lawyer, says that "people ask how to comply in 5 minutes. The GDPR is more than a day or two and checking a box. Unfortunately, it will require time and effort and money, and strategic risk assessment. It can potentially be quite expensive."

However, there's always a bright side. Tim Estes, founder and president of Digital Reasoning, said that "The GDPR is a big deal for AI because it necessitates that we think differently about how we collect and use data. For too long, tech companies have insisted that in order to receive value from their products and services, you had to give up your data." Second, the GDPR will strengthen the role of AI in fraud prevention and breach detection. The detection of cyber threats by AI will protect the rights of customers and serve legitimate interests as described in Recital 47 of the GDPR. Consequently, it will stimulate higher investment in AI cybersecurity.

Third, the adoption of the GDPR might reconcile providers of AI-powered services with their users, as providers become more accountable for how and where they use data. The ultimate goal of the regulation is to make them think about users before they start calculating their profits. "At the end of the day, AI providers should only need to own the algorithms — not the data — to innovate their capabilities and solutions," summarizes Estes. It turns out that, technically, the adoption of the GDPR won't slow down innovation but rather direct and motivate it. However, only time will tell.
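As a closing illustration of the AI-driven breach detection mentioned above, here is a minimal sketch that flags anomalous access patterns with an isolation forest. The features and the traffic figures are invented for the example, not a recommended production setup.

```python
# Hypothetical sketch of AI-assisted breach detection: flag anomalous access
# patterns with an isolation forest. Features (requests per minute, bytes
# transferred, distinct endpoints) are illustrative, not a real feature set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_traffic = rng.normal(loc=[20, 5_000, 3], scale=[5, 1_000, 1], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst of requests pulling far more data than usual: possible exfiltration.
suspicious = np.array([[400, 250_000, 40]])
if detector.predict(suspicious)[0] == -1:
    print("anomaly detected: open an incident and start the 72-hour breach clock")
```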