
If you’re worried about ‘AI consciousness’, you’re not alone: so are scientists

Scientists have warned the UN that progress in Artificial Intelligence should worry us given the lack of funding for conscious AI research.

Hollywood has taught us that in the vast majority of cases, giving Artificial Intelligence human-level consciousness is a bad, bad idea. Yet with radical developments in AI being revealed on a near-weekly basis, there is a frightening possibility that AI consciousness could become a reality within a decade or two.

A full-scale figure of a terminator robot “T-800”, used in the movie “Terminator 2”, is displayed at a preview of the Terminator Exhibition in Tokyo on March 18, 2009. The exhibition, displaying figures, props and movie sets from the Terminator series, will run March 19 through June 28 as the new movie “Terminator 4” screens here in June. AFP PHOTO / Yoshikazu TSUNO (Photo credit should read YOSHIKAZU TSUNO/AFP via Getty Images)

Scientists warn the UN of the dangers of AI developing consciousness

Speaking to the United Nations, a collection of scientists and experts in Artificial Intelligence have called for more funding for research into the risk of AI developing consciousness.

Members of the Association for Mathematical Consciousness Science (AMCS) spoke in detail to the UN about the chances of conscious Artificial Intelligence being developed in a modern world that is already obsessed with generative platforms such as MidJourney and ChatGPT.

Their concern comes from the fact that no one in either the scientific or political community knows where this technology will lead, and that research into the potential risks is lacking the funding needed to be taken seriously.

They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness, writes Nature journalist Mariana Lenharo.

For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Jonathan Mason, an Oxford University mathematician, explained: “With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up.”

He later added that funding for research into conscious AI is currently “very undersupported” around the world and that, to the best of his knowledge, not a single grant was offered in 2023 to study the potential dangers.

Here, the concern is that much of the capital being invested in conscious AI is coming from private companies such as OpenAI (the team behind ChatGPT).

This picture taken on January 23, 2023 in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT. ChatGPT is a conversational artificial intelligence software application developed by OpenAI. (Photo by Lionel BONAVENTURE / AFP) (Photo by LIONEL BONAVENTURE/AFP via Getty Images)

The company recently revealed plans to develop an artificial general intelligence that will be “generally smarter than humans” and, at the same time, benefit all of humanity.

Worryingly, some experts have suggested that this technology could become a reality in five to 20 years’ time.

For those within the scientific community, it’s unclear whether even five years’ worth of research would be enough for governments to police such a technological advancement, let alone without the adequate funding needed to complete such enormous studies.

“Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress,” said Robert Long, a philosopher at the Center for AI Safety in San Francisco, California.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems to society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

Taking the debate to a whole new level of ethics, Nature’s Lenharo also writes about the dilemma of humans potentially inflicting pain on a conscious AI system by turning off its most dangerous feature.

Some of the questions raised by the AMCS to highlight the importance of the consciousness issue are legal: Should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

“We don’t really have a great track record of extending moral consideration to entities that don’t look and act like us,” said Long.

BEIJING, CHINA – AUGUST 18: A boy points to an AI robot poster during the 2022 World Robot Conference, which kicked off on Thursday at the Beijing Etrong International Exhibition on August 18, 2022 in Beijing, China. (Photo by Lintao Zhang/Getty Images)

From science fiction to potential science reality, the dangers and risks of conscious AI need to be addressed now. This comes down to funding; more money means more research, which means better policies and guidelines when AI inevitably takes that next feared step forward.