An article by: Tommaso Baronio

Interpreting the surge in the use of artificial intelligence: risks, benefits, and the global market.

When very influential companies demand additional rules, it’s usually because they want politics to reinforce their dominance and build barriers inaccessible to new, potential competitors.

Over the past three years, the game of chess has been booming. Never before have so many people moved their pawns on virtual chessboards. The revival began with the 2020 pandemic and quarantine and became a trend with the release of The Queen's Gambit series on Netflix; chess starred in the most popular post of 2022, in which Lionel Messi and Cristiano Ronaldo play together, and was everywhere because of the controversy surrounding young Hans Niemann, accused of cheating. The world's most famous chess app registered so many hits that during January 2023 its servers went down frequently and many games were interrupted for technical reasons.

It may come as a surprise to so many fans that the best player in the world is not a human but a computer.

Computer chess has come a long way in the last twenty years: from 1997, when IBM's chess program Deep Blue managed to defeat reigning human world champion Garry Kasparov, all the way to Stockfish, the engine that beat all competitors. At least until December 5, 2017. On that day, an announcement went around the world: the creation of AlphaZero, a machine-learning algorithm built by researchers at DeepMind, an artificial-intelligence company owned by Google.


AlphaZero's predecessors drew on years of human experience: knowledge from grandmasters was programmed into the engines in the form of complicated evaluation functions that specified what to look for in a position and what to avoid. But they still lacked intuition, and they played like true machines, incredibly fast and strong.

AlphaZero was a revolution. It played against itself for four straight hours, updating its neural network as it learned from its own experience. It discovered the principles of chess by itself and quickly became the best player of all time. Not only could it easily defeat the strongest human masters, it also destroyed poor Stockfish, the reigning world champion of computer chess. Out of one hundred games, AlphaZero won twenty-eight and drew seventy-two. It did not lose a single game.

While the mind goes back to David Lightman, the nerdy teenager who in the 1983 film WarGames teaches a computer the futility of mass destruction through a game of tic-tac-toe, one cannot help but fear that AI will catch up with us and that our species will go extinct.


With the launch of new artificial intelligences, most notably ChatGPT, as well as Bard, Ernie, and Bing, the topic has been raised again and faces the cynical editor-in-chief's fateful question: "Do you think people talk about this in bars?" The answer is yes. AI is all over the place, particularly due to a recent declaration signed by 350 experts, including Sam Altman, the father of ChatGPT. In the letter, they explained that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Altman is one of the founders and the CEO of OpenAI, the company behind ChatGPT. Like other visionaries before him (Zuckerberg and Musk, for example), he declares that he is interested in looking after the well-being of mankind, not in profit. In the company manifesto, he writes, "Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility. AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity."


Altman seems credible, including when he states that he is not interested in mere economic gain. In fact, OpenAI was born as a non-profit organization, following the ideal of saving the world through technology. But reality came knocking on the doors of even the most famous visionaries, and so, to fund development, OpenAI LP was created in 2019: a hybrid of a commercial company and a non-profit organization, which they call "capped-profit." The new company raises far more money, though at the cost of some transparency, disclosing little information about its models.

Unlike OpenAI, the new OpenAI LP can be profitable, and investors can earn up to 100 times their initial investment. Beyond that cap, profits go to the non-profit organization, which directs them to educational programs and advocacy.

What restrains the temptation of billions in what may be one of man's most dangerous weapons?

The CEO and founder cannot get rich this way, because he is not a shareholder, and this seems like a good sign that the project will stay true to its mission.

Altman’s constant references to the dangers of AI, also present in the manifesto, show that he is well aware of what he has in his hands.

"On the other hand," writes the CEO, "AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right."

AI rightly brings benefits, but existential risks remain. In an interesting article, Professor Charles Jones of Stanford highlights, among other findings, that "one way in which it can be optimal to entertain greater amounts of existential risk is if A.I. leads to new innovations that improve life expectancy. Mortality improvements and existential risk are measured in the same units and do not run into the diminishing marginal utility of consumption."


The risk is commensurate with the use, but a constant in the speeches of Altman and other AI experts is the mention of the Apocalypse. Stefano Feltri, in his Notes, offered an interesting insight into all this, bringing the issue to a broader level and intertwining it with geopolitical plots: "When very influential companies ask for additional rules, it's usually because they want politics to reinforce their dominance and build barriers that would be inaccessible to new, potential competitors. If artificial intelligence becomes a matter of national security or even global security, Altman and his followers will have the best possible protection against the threats posed by the market (competition and public discontent). Stopping research is impossible, otherwise the Chinese will simply gain control of technology that could lead to the extinction of mankind: this is one of the messages contained in this brief statement. Therefore, the only solution for companies operating in such a sensitive area is to cooperate with politicians and regulators."

The Chinese already have what is considered ChatGPT's direct competitor, Ernie Bot. In the broader context of open hostility between the United States and the People's Republic, lobbying of the US government to protect its market could become an established reality.


Tommaso Baronio