On Wednesday, September 13, 2023, 22 of the most prominent figures in the US technology industry met with senators in Washington, DC, to discuss the sensitive topic of artificial intelligence.
Everyone was there: Microsoft co-founder Bill Gates, Elon Musk (X, Tesla, SpaceX), Sam Altman (OpenAI), Mark Zuckerberg (Meta), and Sundar Pichai (Google/Alphabet). The large Silicon Valley delegation briefed senators on the fundamental concepts that will shape the regulation of artificial intelligence.
It is striking that the "fathers" of AI are the ones warning about the possible risks of their creation, even though opinions differ sharply among these brilliant minds, starting with whether it is wise to put such a powerful tool within everyone's reach. We are taking our first steps with this new technology, and we can already see how powerful, and how dangerous, it could be if managed irresponsibly.
As is often the case, Elon Musk was the most outspoken, calling artificial intelligence "a danger to society, potentially a risk to all people." That regulation is needed, perhaps even a dedicated agency, is clear to everyone. Far less clear is how to set rules without clipping the wings of innovation. In any case, a willingness to cooperate between the public and private sectors, which should not be taken for granted, appears to be firmly established.