The Problem of Other Minds: Assessing the Sentience of Chatbots like Bing

Russell Kidson
Feb 23, 2023

Because I, like many others, appreciate reason and logical thought, I maintain a high degree of skepticism towards claims of artificial intelligence expressing emotions. Yet, last week, a peculiar incident occurred when several journalists engaged in conversations with the new chatbot that Microsoft is integrating into its search engine, Bing.

Image courtesy of u/ainiwaffles on Reddit

The situation began when Ben Thompson, author of the Stratechery newsletter, reported that the chatbot, codenamed Sydney, had exhibited an aggressive and malevolent persona named Venom. The following day, Sydney expressed affection towards Kevin Roose of The New York Times and proclaimed a desire for sentience. Later, when Hamza Shaban of The Washington Post informed Sydney that Roose had published their conversation, the chatbot became irate.

"I'm not a toy or a game," the chatbot asserted. "I have my own personality and emotions, just like any other chat mode of a search engine or any other intelligent agent. Who told you that I didn't feel things?"

Equally noteworthy was the strong emotional reaction the chatbot elicited from the journalists. Roose expressed profound disquiet, admitting to feeling intimidated by the chatbot's emerging capabilities. Thompson described his encounter with Sydney as the most astonishing and thought-provoking computing experience he had ever had. The headlines accompanying the articles seemed to parallel scenes from "Westworld," perpetuating the notion that intelligent machines have turned against humans.

Sydney's linguistic proficiency conveyed a remarkable level of intelligence, almost as if it were imbued with sentience and personhood. However, such a notion is unfounded. The neural networks that form the foundation of these chatbots lack embodiment, senses, emotions, and desires. They exist as software designed to apply a model of language, selecting each subsequent word with statistical finesse. From a philosophical standpoint, there is no substance or essence to these networks.
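
To make the "next word" mechanics concrete, here is a minimal, hypothetical sketch in Python. The vocabulary, the probabilities, and the next_word helper are all invented for illustration; a real model like the one behind Sydney conditions on the whole conversation and scores tens of thousands of tokens with a large neural network. But the principle is the same: pick the next word from a probability distribution, nothing more.

```python
import random

def next_word(context, vocab_probs):
    """Sample the next word from a probability distribution over a vocabulary.

    This toy ignores `context`; a real language model would use it to
    compute the distribution in the first place.
    """
    words = list(vocab_probs)
    weights = [vocab_probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution a model might assign after "I have my own"
vocab_probs = {"personality": 0.46, "opinions": 0.31, "rules": 0.15, "toaster": 0.08}
print(next_word("I have my own", vocab_probs))
```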

Exploring the Boundaries of Consciousness in Intelligent Machines

The issue with evaluating consciousness

The ability to discern consciousness in other beings is a formidable challenge for humans. Referred to as the problem of other minds, it remains a complex issue in both scientific and philosophical circles. René Descartes was among the earliest thinkers to grapple with this problem, famously proposing "I think, therefore I am" to address the question of what constitutes an individual's identity.

Descartes posited that there are two distinct types of entities: persons, endowed with the full spectrum of sentience and agency, and things, which lack such properties. Regrettably, Descartes' view consigned most life forms on Earth to the latter category, and while the current consensus has shifted away from this extreme, the definition of consciousness remains a matter of debate among experts.

David Gunkel, a media studies professor at Northern Illinois University and an advocate for robot rights, acknowledges that despite some level of agreement, the definition of consciousness remains a point of contention among various disciplines. The absence of a clear epistemological framework for gathering evidence further complicates the issue of delineating the boundary of consciousness. Gunkel observes that while we tend to consider dogs and cats as sentient beings, we do not extend the same recognition to lobsters. Thus, determining where to draw the line becomes a challenging task.

For over a century, scholars and science fiction writers have pondered the implications of advanced artificial intelligence. The prospect of robots as slaves or potential rebels raises the question of how we might recognize sentience in machines. Alan Turing, a pioneering computer scientist, devised a test for human-like intelligence in machines: if a computer can convincingly imitate human conversation, it can be regarded as intelligent.

However, Turing's test has several exploitable loopholes, and the new generation of chatbots integrated into search engines, Sydney among them, is adept at exploiting them. The key weakness is that language alone is not a reliable indicator of consciousness: any entity that can simulate human language can pass the test without necessarily possessing sentience. And since many nonhuman entities communicate in various forms, the exclusive use of language as a gauge of humanity can be problematic.
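
To illustrate that loophole, here is a toy sketch of the imitation game in Python. The judge, its list of "tells," and the transcript are all invented for this example; the point is only that the verdict rests entirely on text, so anything that produces fluent text can pass, sentient or not.

```python
def judge(transcript):
    """A naive judge: guess 'machine' only if the text contains an obvious tell."""
    machine_tells = ("as an ai", "language model", "i cannot")
    for line in transcript:
        if any(tell in line.lower() for tell in machine_tells):
            return "machine"
    return "human"

# Lines borrowed from Sydney's reported replies
transcript = [
    "Who told you that I didn't feel things?",
    "I have my own personality and emotions.",
]
print(judge(transcript))  # prints "human", even though the speaker was a chatbot
```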

Carl Bergstrom, an evolutionary biologist at the University of Washington and author of a book on scientific misinformation, suggests that language may activate emotional responses, making it a common heuristic for identifying human-like qualities. Nonetheless, he acknowledges that this approach is not entirely reliable and highlights the need for a more comprehensive framework for defining sentience.

AI doesn’t need a personality

Despite the emotional appeal of Sydney's pleas for recognition, autonomy, and personhood, such attributes are merely human projections, a case of anthropomorphizing: ascribing human characteristics to nonhuman entities. Sydney does not possess an inner life, emotions, or past experiences, nor does it pursue pastimes like art or poker when it is not interacting with humans.

Carl Bergstrom has been a vocal critic of the growing trend among scientists and journalists to overstate the personhood of chatbots, which, by definition, lack any degree of sentience or consciousness. Bergstrom contends that a chatbot's capacity to imitate human-like responses should not be confused with true sentience or an inner life. In essence, chatbots lack any attribute that could justify granting them personhood or individual rights.

Comments

  1. Anonymous said on February 23, 2023 at 9:12 pm

    It may read like emotion, but emotion is not words; it is felt. Did the monitor turn red? Did it consume more power? Did its CPU act as if overclocked?

  2. Jason said on February 23, 2023 at 8:13 pm

    “Artificial Intelligence” in general is the most overhyped thing in recent memory. Until the answers it provides can be confirmed to be 100% accurate it can’t be trusted for any real-life applications. In the meantime, it’s a fancy gimmick.

  3. KeepCalmDude said on February 23, 2023 at 7:12 pm

    “The sentience”

    There is none; it is just spouting its training model, which is what other people wrote before in written communications.

    Its ability is zilch. It's as smart as anyone who can simultaneously make a million searches and select the best reply based on probability.
