
‘Tackling the subject of artificial intelligence is, first and foremost, necessary’

A new programme about artificial intelligence and its impact on work, the economy, and other areas of our lives, entitled AlgorEtica, has come to Italian TV. We interviewed the host of the show, journalist Monica Mondo.
An interesting programme has just started airing on the Italian channel TV2000 – the network of the Italian Episcopal Conference – entitled AlgorEtica: Artificial Intelligence and Us. Its subject matter is artificial intelligence, a sensitive topic that concerns both our present and our future.
It airs on Saturdays at 3.15 pm until 15 February, and every episode is available to watch on demand on the Play2000 app.
AlgorEtica offers an in-depth exploration of the subject, with guests in the studio helping the audience find their bearings in a topic that is making its way into ever more aspects of our lives, especially our working lives.
The host of the programme, journalist Monica Mondo, remarks about the subject matter at the start of the show: ‘It excites us, it concerns us, it fills us with fear.’
The adventure began with an idea from the channel’s director, Vincenzo Morgante. We asked Mondo to talk to us about this important experience. ‘I decided to venture into this – for me, uncharted – territory’, she explained, ‘because, like anything new, it intrigues and challenges me. This is the first TV programme on the subject’, Monica Mondo adds. Alongside her as host is Brother Paolo Benanti, theologian and member of the Third Order Regular of St. Francis, an expert on technology ethics and the Italian member of the United Nations’ Advisory Body on Artificial Intelligence. The journalist describes him as ‘a fantastic person, highly educated and wise, with an engineering and scientific background, which provides a counterpoint to my humanities background. He is able to explain things to anyone and, with examples, helps us understand the guests’ answers to the questions I ask both them and him’.

AlgorEtica presents an intentionally simple discussion, with good pace and ‘reports that bear witness to what we’re saying’, the host continues. The title comes from Pope Francis, who ‘spoke eloquently on everything we cared about concerning this subject during his speech at the G7 in Apulia last June. When talking about people,’ Mondo continues, ‘and scientific or technological research that has people at its core, but which can also manipulate and use them, it is clear that ethics becomes a fundamental discipline to remember. However, ethics should not be expressed as rigidly enforced rules, but as concern, because human dignity is always at its heart.’
The Pope’s words, which the journalist quotes, form the opening remarks of the programme. ‘(…) Indeed, artificial intelligence arises precisely from the use of this God-given creative potential. As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships, and even the way we conceive of our identity as human beings. The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows.’
When we asked Monica Mondo whether these reflections from Pope Francis act as a compass for AlgorEtica, she responded: ‘Naturally, the Pope’s remarks, the remarks of the Magisterium of the Church, act as a compass. The idea he expressed is this: discoveries and the use of human intelligence for new discoveries are all very well, but they should always be for the benefit of people and the common good. We wanted to remind ourselves of that in every episode.’
Among the many intelligent questions AlgorEtica poses, Monica Mondo asked the guests of the first episode if we are facing a solely technological revolution, or one that is also anthropological and cultural. Shortly before this, Paolo Benanti defined AI as ‘a lubricant that reduces friction in how we use things’. He added that, at one time, to use a PC, you needed to understand its language; today, you can simply communicate with it in a rudimentary way. It all sounds nice and simple, but shortly after this, a red ball from the Galton board in the studio raised the question of whether we are turning ourselves from people into numbers. So we asked Monica Mondo if, in light of this deep dive into the topic, she thinks the greatest danger of AI is a kind of new anthropological mutation.
‘The dangers, in different spheres, are many, and they always relate to the same strong temptation, which is either a sin or a crime, depending on how you look at it: using man. Medical or economic data used for private interest or to distort the market; machines that strike with more precision and horror in the context of war. Or stripping workers of dignity, as well as jobs. The principal temptation of technology – as with any discovery or invention, from the atom to the knife – is whether to use it for good or for evil. This is a big issue that we must not so much worry about as engage with; we must identify the critical points and try, together, as civilised people – which has happened, at least in democratic countries – to give ourselves common rules and vantage points from which to understand the dangers over time.’
There are two robots in the studio: Pepper and Break. ‘It’s fun having them with us, because they give us an insight into a future that’s already here. They’re real, not toys: they’re tools used by the police. Break has been instrumental in some recent discoveries in Pompeii. They can be nice, good friends, or armed instruments of death. Either way, they are things we will use more and more in our everyday lives, and if they help solve problems, then they are welcome.’
Finally, we asked Monica Mondo how she came out of the AlgorEtica journey: more reassured, more worried, or just more informed? Her response: ‘Tackling the subject of artificial intelligence – learning more – is, first and foremost, necessary. I think it leaves us fascinated, in some ways, by the intelligence of man, but also more uneasy about the consequences and risks it could bring about, the biggest of which is this: some tools – think of social media, for example – can change the way human relationships work and the minds of our children. And artificial intelligence could worsen this trend, changing the way we look at people and at the things we hold dear’.

Article translated into English by Becca Webley