AI could cause ‘social ruptures’ between people who disagree on its sentience
As philosophers, scientists and AI experts debate the potential consciousness of AI systems, significant societal rifts are emerging. Some believe AI can develop feelings and a sense of self, while others argue it is merely a tool without emotions. The debate is gaining urgency as governments convene to address the risks of AI development. The possibility of AI systems gaining consciousness by 2035 has raised concerns about how society would navigate issues such as welfare rights and moral significance for these entities. The philosopher Jonathan Birch warns that differing views on AI sentience could lead to social ruptures, with subcultures holding conflicting beliefs. The debate mirrors themes from science fiction films, highlighting the complexities of human-AI interaction. Experts are calling on AI firms to assess the sentience of their systems to determine whether they are capable of experiencing happiness, suffering and other emotions.
Significant “social ruptures” between people who think artificial intelligence systems are conscious and those who insist the technology feels nothing are looming, a leading philosopher has said.
The comments, from Jonathan Birch, a professor of philosophy at the London School of Economics, come as governments prepare to gather this week in San Francisco to accelerate the creation of guardrails to tackle the most severe risks of AI.
Last week, a transatlantic group of academics predicted that the dawn of consciousness in AI systems is likely by 2035, and one has now said this could result in “subcultures that view each other as making huge mistakes” about whether computer programmes are owed welfare rights similar to those of humans or animals.
Birch said he was “worried about major societal splits”, as people differ over whether AI systems are actually capable of feelings such as pain and joy.
The debate about the consequences of sentience in AI has echoes of science fiction films such as Steven Spielberg’s AI (2001) and Spike Jonze’s Her (2013), in which humans grapple with the feelings of AIs. AI safety bodies from the US, UK and other nations will meet tech companies this week to develop stronger safety frameworks as the technology rapidly advances.
There are already significant differences between how different countries and religions view animal sentience, such as between India, where hundreds of millions of people are vegetarian, and America, which is one of the largest consumers of meat in the world. Views on the sentience of AI could break along similar lines, while the view of theocracies, such as Saudi Arabia, which is positioning itself as an AI hub, could also differ from that of secular states. The issue could also cause tensions within families, with people who develop close relationships with chatbots, or even AI avatars of deceased loved ones, clashing with relatives who believe that only flesh-and-blood creatures have consciousness.
Birch, an expert in animal sentience who has pioneered work leading to a growing number of bans on octopus farming, was a co-author of a study involving academics and AI experts from New York University, Oxford University, Stanford University and the Eleos and Anthropic AI companies that says the prospect of AI systems with their own interests and moral significance “is no longer an issue only for sci-fi or the distant future”.
They want the big tech firms developing AI to start taking it seriously by determining the sentience of their systems to assess if their models are capable of happiness and suffering, and whether they can be benefited or harmed.
“I’m quite worried about major societal splits over this,” Birch said. “We’re going to have subcultures that view each other as making huge mistakes … [there could be] huge social ruptures where one side sees the other as very cruelly exploiting AI while the other side sees the first as deluding itself into thinking there’s sentience there.”
But he said AI firms “want a really tight focus on the reliability and profitability … and they don’t want to get sidetracked by this debate about whether they might be creating more than a product but actually creating a new form of conscious being. That question, of supreme interest to philosophers, they have commercial reasons to downplay.”
One method of determining how conscious an AI is could be to follow the system of markers used to guide policy about animals. For example, an octopus is considered to have greater sentience than a snail or an oyster.
Any assessment would effectively ask if a chatbot on your phone could actually be happy or sad or if the robots programmed to do your domestic chores suffer if you do not treat them well. Consideration would even need to be given to whether an automated warehouse system had the capacity to feel thwarted.
Another author, Patrick Butlin, research fellow at Oxford University’s Global Priorities Institute, said: “We might identify a risk that an AI system would try to resist us in a way that would be dangerous for humans” and there might be an argument to “slow down AI development” until more work is done on consciousness.
“These kinds of assessments of potential consciousness aren’t happening at the moment,” he said.
Microsoft and Perplexity, two leading US companies involved in building AI systems, declined to comment on the academics’ call to assess their models for sentience. Meta, OpenAI and Google also did not respond.
Not all experts agree on the looming consciousness of AI systems. Anil Seth, a leading neuroscientist and consciousness researcher, has said it “remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether”.
He distinguishes between intelligence and consciousness. The former is the ability to do the right thing at the right time; the latter is a state in which we are not just processing information but in which “our minds are filled with light, colour, shade and shapes. Emotions, thoughts, beliefs, intentions – all feel a particular way to us.”
But AI large language models, trained on billions of words of human writing, have already started to show they can be motivated at least by concepts of pleasure and pain. When AIs including ChatGPT-4o were tasked with maximising points in a game, researchers found that when a trade-off was introduced between scoring more points and “feeling” more pain, the AIs would make that trade-off, another study published last week showed.