
As the use of AI spreads across every sector, it becomes ever more necessary to put measures in place that keep the boundaries of ethics from being crossed.

by Maria Pia Rossignaud
11 January 2021
6 min read

Artificial intelligence and algorithms present a dialectical challenge on several levels. Since 2018, the conversation has focussed strongly on the ethics of the algorithm, or more precisely on "algorithmetics", a term coined by Derrick de Kerckhove, mass media specialist and student of McLuhan. Today we know that technology has changed not only the way we communicate and live but also our reference social models: behaviours change constantly and, as a consequence, so do the decisions we make.

Thirty years ago, in the TV series The Public Mind, the American journalist and TV producer Bill Moyers analysed the ways the television of the time produced social cohesion by providing everyone with the same "menu": the same amount of news and the same type of advertising. Today, however, digital platforms and social media have shifted us to a "one-to-one" model in which everyone receives what they want, because machine learning makes it easy to identify our individual tastes and propensities through profiling. This is just one side of the digital transformation, leading us into a symbiotic relationship between man and machine, as the IEEE engineers explain in their Symbiotic Autonomous Systems white paper.

Cooperation between AI and the human mind

The other side is competition between man and machine. Take chess as an example: a game where intelligence dominates, so much so that it served as a reference model for artificial intelligence until 11 May 1997, when IBM's Deep Blue computer beat the world chess champion Garry Kasparov. That day opened a chasm between thinkers around the world, one camp insisting that computers had surpassed human ability, the other arguing that playing chess well was not much of a test of intelligence. Deep Blue is an example of the student (the computer) surpassing the teacher (Kasparov).

The case of DeepMind's AlphaZero is different: in 2017 it taught itself how to play chess, starting from nothing but the rules and playing against itself. After just 24 hours of training it defeated Stockfish, the strongest chess engine of the time, without losing a single game of their 100-game match. But one chess grandmaster, Vladimir Kramnik, decided to take advantage of AlphaZero, using it to explore possible variations in the game's rules and make chess more engaging. The moral of the story: rather than fighting a fierce battle against AI, we can work with it to create something beyond human capabilities that still requires a human touch. This cooperation is a paradigm shift, recognising the intelligence of machines on one hand while exploiting its differences from human intelligence on the other. The goal is to increase our capabilities with the help of these machines.
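The self-play idea can be hard to picture from prose alone. The sketch below is a deliberately tiny illustration, not AlphaZero's actual algorithm (which combines deep neural networks with Monte Carlo tree search): an agent that is given only the rules of a far simpler game, Nim (take one or two stones from a pile; whoever takes the last stone wins), and learns winning play purely by playing games against itself. All names and the learning scheme here are illustrative assumptions.

```python
import random

def self_play_train(start_pile=10, episodes=3000, seed=0):
    """Learn Nim (take 1 or 2 stones; taking the last stone wins)
    purely by self-play: no human examples, only rules and outcomes."""
    rng = random.Random(seed)
    value = {}  # value[(pile, action)] = +1 if the move wins with best play, -1 if it loses

    def q(pile, action):
        return value.get((pile, action), 0.0)

    for _ in range(episodes):
        pile, history = start_pile, []
        while pile > 0:  # play one full game against itself, moving at random
            action = rng.choice([a for a in (1, 2) if a <= pile])
            history.append((pile, action))
            pile -= action
        for pile, action in reversed(history):  # back up values from the end
            nxt = pile - action
            if nxt == 0:
                value[(pile, action)] = 1.0  # took the last stone: a win
            else:
                # a move is worth the negation of the opponent's best reply
                value[(pile, action)] = -max(q(nxt, a) for a in (1, 2) if a <= nxt)
    return value

def best_move(value, pile):
    """Greedy policy extracted from the learned move values."""
    return max((a for a in (1, 2) if a <= pile),
               key=lambda a: value.get((pile, a), 0.0))
```

After training, the agent has discovered the classic winning strategy by itself: always leave the opponent a pile that is a multiple of three (for example, from a pile of 4 it takes 1 stone, from a pile of 5 it takes 2).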

Technological ethics

We are now in the era of synthetic data, in which machines are no longer fed only with 'historical' human data. Because of this, the new AI-based humanism of work and knowledge needs rules: regulatory systems based on principles such as technological ethics, or rather algorithmetics. Machines are becoming ever more powerful and no longer merely mimic human intelligence. An example is GPT-3 (Generative Pre-trained Transformer 3), a language model that allows machines to create content on demand. This technology from the OpenAI labs was used by The Guardian newspaper on 8 September 2020 to publish the first article written entirely by artificial intelligence.

With 175 billion parameters, GPT-3 is currently the king of large neural networks. It may not be the best, but the fact that OpenAI managed to outperform its own earlier GPT models and Microsoft's Turing-NLG is likely to increase (not diminish) the appetite for ever larger neural networks. More than 30 OpenAI researchers authored the paper presenting the model, which produces cutting-edge results on tasks such as generating articles and news. But here too ethical problems arise, especially in a world where subjectivity and objectivity are confused and fake news is rampant. "The word intelligence should never have been associated with what is simply the recognition of patterns and statistical calculation" was de Kerckhove's reflection on the human vs artificial intelligence debate.
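De Kerckhove's point, that a language model is at bottom statistical pattern recognition, can be made concrete with a toy sketch. The bigram model below is an illustrative assumption, orders of magnitude simpler than GPT-3's transformer architecture, but it shows the same underlying principle: count which words follow which in the training text, then generate new text by sampling from those counts.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for every word in the training text, which words follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=1):
    """Produce text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        followers = model.get(out[-1])
        if not followers:  # no continuation ever observed: stop
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Scaled up from word pairs to billions of learned parameters, the same statistical principle yields GPT-3's fluent prose; whether that deserves the word "intelligence" is exactly de Kerckhove's question.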

Reviewing and reassessing the contribution of AI

According to Vittorio Meloni, Director General of the UPA, pausing to reflect on AI could be a strategy for restoring Europe's strength, and Europe should focus on technology standards to defend its markets: "We need to promote a debate on principles that leads at a global level." Before we let technology progress, we need to know where it is going and where it will take us, because one day, in the not too distant future, our personal AIs (our digital twins) might start making decisions on our behalf (think of Alexa or Siri). And as de Kerckhove reminds us, in the search for objective truth it is important to avoid losing "meaning", something lacking in AI but essential to human decision-making: "Rules are needed to promote and further the use of AI algorithms in the world of justice, medicine and finance. We must analyse, comment on and address the current progress of ethics in relation to the responsibility of the machine."

The author: Maria Pia Rossignaud

A journalist and expert in digital media, she is one of the twenty-five digital experts of the European Commission Representation in Italy, director of the first Italian digital culture magazine "Media Duemila" and Vice President of the TuttiMedia Observatory.