The new technological world, or, more precisely, our planet's digital upgrade, is introducing swift changes that will transform our society. The engine driving this metamorphosis has a name: Artificial Intelligence. The innovation introduced by AI can already be observed in the improvements to public amenities and services, the education sector, the transport industry, and food supply, energy, and environmental management. And this is only the start of the evolution: as access to data spreads and IT resources develop, AI and machine learning systems, which tend to evolve together organically, will become progressively more indispensable, effective and powerful.
Who owns AI's data?
Over the last decades, this evolution has changed our day-to-day lives, while global laws have remained just as they were before the digital revolution. Today, the biggest challenge governments face concerns the use and ownership of the data collected and stored by AI systems. Ownership is indeed the core issue: are data owned by their creator or by their collector? This Shakespearean dilemma, played out in the third millennium, has given rise to a debate on the need for ethical and safety rules governing future developments of AI systems.
Ever since human beings learned to read and write, they have also enjoyed a right to privacy, a right that now seems to be slowly disappearing. George Orwell wrote in his famous novel 1984: "Until they become conscious they will never rebel, and until after they have rebelled they cannot become conscious." Indeed, society has become digitally reckless: there is little awareness of how information is published and stored online, or of how it might be used. Today, several organisations, especially the "Big Five" (Amazon, Google/Alphabet, Microsoft, Apple and Facebook), know more about our private lives than we ourselves do, or are aware of. This is the so-called "digital subconscious", a concept that has personally intrigued me for at least a decade now. I first encountered this side of digital life at the onset of the year 2000, when I came across "Digital Mirror", a platform developed by Cataphora, a company founded to analyse individual behaviour and offer users a unique insight into their digital persona, that is, the user's online version. Elizabeth Charnock, CEO of Cataphora, even published a book on the topic, E-Habits: What You Must Do to Optimize Your Professional Digital Presence, which discusses our digital habits and offers tips on how to optimise our online presence, now known as the "digital double" that we ourselves construct.
Jeff Orlowski's documentary The Social Dilemma (Netflix) provides food for thought on another aspect of the digital world: in particular, how major multinational social media companies can influence consumers' behaviour through their algorithms. According to the documentary, the data mining strategies of such companies are designed to exploit human vulnerability and can actually cause users to become addicted to their platforms. This is a subject that Shoshana Zuboff explores in depth in her book "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power" (2019).
The ability to obtain, collect and store data, whether to use them or to supply them to third parties, has proved to be a gold mine for companies in the third millennium. This is why institutions are now seeking legislative solutions to ensure that the dangers of the digital transformation do not end up outweighing its advantages, especially in terms of individual freedom.
Europe achieved a first victory in the governance of data collected online with the General Data Protection Regulation (GDPR), drafted in 2016 and in force since May 2018. One of the most applauded recent pieces of EU legislation, it provides a range of provisions aimed at protecting the individual's right to privacy. It is a first step towards a model of legislation focused on the well-being and protection of the individual, now considered a priority in the decision-making process.
The need for the digital transformation to preserve personal rights has led a growing number of governments to adopt strict privacy and data laws, so the list of nations without GDPR-style legislation keeps getting shorter.
According to the United Nations Conference on Trade and Development (UNCTAD), 66% of countries have adopted such laws and 10% are working out how to implement them, while 19% are ignoring the issue. In Brazil, the Lei Geral de Proteção de Dados (LGPD) was inspired by the GDPR and is almost identical in scope and enforceability, though with less severe financial penalties in case of a breach. In February 2019, the National Legislative Assembly of Thailand approved the Personal Data Protection Act (PDPA); interestingly, most of the PDPA's provisions are similar to those of the GDPR, and only one section was adjusted to fit national laws. In the United States there is no comprehensive privacy law at federal level; the legislation that most closely resembles the European rules is the California Consumer Privacy Act (CCPA), recently introduced in California.
Thanks to its privacy regulations, Europe has gained unquestionable leadership in data management. Continued discussion of Artificial Intelligence could therefore reinvigorate Europe's position in a debate on universally shared principles, without a doubt a truly pioneering global concept.