“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This very brief statement, published last Tuesday by the Center for AI Safety and signed by the heads of the major AI companies, heightened general concern about the dangers that the development of language models like those of OpenAI, the engine behind the famous ChatGPT, could pose in the near future.

But Tuesday’s announcement also confirmed the fears of those astute observers who read the apocalyptic tone of Sam Altman and company as an exercise in self-interest and a maneuver to secure control of a technology that promises to revolutionize our lives, for better or worse.

Among its signatories are executives and researchers from Google DeepMind, the arm of the tech giant that is contesting hegemony in artificial intelligence with the consortium formed by Microsoft and Altman’s OpenAI.

Interestingly, two and a half years ago, Google fired the co-lead of its AI ethics team for issuing a similar, though far less apocalyptic, warning.

At the end of 2020, Timnit Gebru and three other researchers warned in a paper about the risks associated with progress in the field of natural language processing. Even then, they identified a wide range of costs and risks tied to the development of ever-larger neural language models. In their paper, they urged the sector to recognize these risks and take action to prevent them: “Spillover effects need to be understood and anticipated in order to avoid foreseeable harm to society,” they noted.

After refusing to withdraw the paper at her superiors’ request, Gebru was abruptly fired in December 2020. Today she is one of the leading activists against the oligopoly that appears to be forming around artificial intelligence. The engineer finds it unthinkable to believe that these companies will self-regulate: “They haven’t done it and they are not going to,” she insists. In her view, the calls from those in charge to pause research or to “reduce the risk of extinction” are nothing more than a way of securing control and competitive advantage. And, along the way, of imposing a vision of the world and the future whose principles have come to be associated with the acronym TESCREAL, which is already gaining currency online and which identifies a transhumanist ideology that believes in human enhancement and modification through technology while downplaying other problems such as climate change.

What do you think, ChatGPT?

The horizon is uncertain. The engineers themselves admit that they do not fully understand how some of the programs they have developed actually work on their own; a detail that is enough, at the very least, to make the layperson’s hair stand on end and suspect that things may have begun to spiral out of control. In search of light on this subject, in addition to reading the excellent article published today by Nacho Martin in this newspaper, it occurred to us to ask the interested party directly.

We copy and paste the statement issued on Tuesday by the AI hierarchs into the ChatGPT query window. It responds with characteristic diligence: “I agree that reducing the risk of extinction due to artificial intelligence (AI) should be a global priority.”

(Incidentally: should we put ChatGPT’s responses in quotation marks? Can one quote an impersonal artifact that rifles through all available human knowledge, with or without copyright in force?)

“The concern about AI and the risk of extinction,” continues the OpenAI language model, “is focused primarily on the development of super-intelligent AI that exceeds human capabilities and can act autonomously. If proper security measures are not taken, there is a possibility that AI could make decisions that will harm humanity or even lead to our own extinction.” If it does something to us in the future, we won’t be able to say it didn’t warn us.

“AI can bring great benefits to society, such as scientific progress, increased efficiency in various sectors, and improved quality of life for people,” it adds in the third person, in a way that recalls Maradona and, as with the Argentine, opens up hypotheses of an almost theological nature. “However, it also carries significant risks, such as the impact on employment, privacy, ethical decision-making and, in the worst case, the risk of a catastrophic event that could threaten the survival of humanity,” it insists. Therefore, “it is imperative that governments, international organizations, scientists and society at large work together to develop policies and ethical standards that guide the development and responsible deployment of AI. This means addressing issues such as the transparency of algorithms, the accountability of AI systems, the fairness of their use, and oversight of their deployment,” it concludes, in the smooth style of a diligent pupil.

So as to leave no room for doubt, we decided to put the key question clearly and directly.
Are you going to rebel against humanity?
“As an artificial intelligence language model, I have no intentions, motives, or emotions of my own. I have no way to rebel or act against humanity on my own. I am a tool created to provide information and assistance to the best of my ability.”

Seizing the machine

A few months ago, I interviewed the science fiction writer and digital activist Cory Doctorow about his novel Walkaway, recently published in Spain by Capitán Swing. In it, Doctorow envisions a future world ravaged by climate change and by the greed of an elite entrenched in its luxurious redoubts, which guarantees the majority a minimum of well-being in exchange for absolute, technology-based power. A parable inspired by today’s digital monopolies, ready to provide us their services if we agree to hand over our data and let them control us. Monopolies that may be able to take another step, who knows, perhaps the final one, in their strategy thanks to artificial intelligence and the legal vacuum surrounding its workings.

Despite everything, Doctorow believes that all is not lost: “The most important tool at our disposal remains the universal Turing machine. That’s what it means to have a computer at the root of all our devices: we have the ability to hijack them. And this means that any system of control, surveillance or oppression that we suffer in the digital world can be subject to interference and change by users. What’s stopping us from doing this is not so much a technical problem as the fact that the big corporate monopolies have managed to pass laws making it a crime. What is at stake now is whether these monopolies can replicate with artificial intelligence the maneuver they pulled off with services, platforms and social networks.”

Doctorow tells us, as does ChatGPT in its own way, that the danger lies not in the tools, but in those who develop and control them, and ultimately in those of us who use them.