
The Debate on AI Consciousness: Insights and Implications


A month ago, Ilya Sutskever, Chief Scientist and co-founder of OpenAI, stirred controversy by tweeting that today's large neural networks may be “slightly conscious.” Given his credentials, which include co-authoring AlexNet, the 2012 paper that helped ignite the deep learning era, his assertion was bound to provoke discussion among AI enthusiasts, cognitive scientists, and philosophers alike. Within days, the tweet had garnered over 400 responses and twice that number of retweets.

In cutting-edge AI circles, discussions often revolve around a few pivotal questions: When will we achieve artificial general intelligence (AGI)? What capabilities and limitations do transformer models like GPT-3 possess? And when, if ever, might AI attain consciousness?

Views on these subjects vary widely. Skeptics such as Gary Marcus critique the exclusive reliance on deep learning for AGI and advocate for hybrid models that integrate data-driven and knowledge-based approaches. Conversely, optimistic figures like Ray Kurzweil assert that the Singularity, a point where machines will exceed human intelligence, is merely decades away.

Consciousness frequently arises in conversations about AI. While it is intricately linked to human intelligence, its applicability to machines remains ambiguous. Critics of anthropomorphizing AI often challenge the notion of “machine intelligence,” and consciousness, being an even more nebulous concept, faces even harsher scrutiny. This is understandable, as consciousness—similar to intelligence—resides in the complex overlap of philosophy and cognitive science.

The modern understanding of consciousness traces back to John Locke, who defined it as “the perception of what passes in a man’s own mind.” The concept remains elusive: numerous models and theories have gained varying degrees of acceptance over time, yet the scientific community has yet to reach a consensus. One idea gaining traction is panpsychism, which posits that “the mind is a fundamental and ubiquitous feature of reality,” meaning that some minimal form of mind pervades all things.

Nonetheless, panpsychism is merely a compelling hypothesis. Consciousness largely eludes precise scientific definition. Most agree that being conscious involves recognizing the self (“I”) and having awareness of the environment. However, attempts to define it further lead to complications. Cognitive neuroscientist Anil Seth notes that “the subjective nature of consciousness makes it difficult even to define.”

Given the scientific ambiguity surrounding consciousness, one might question why Sutskever made such a bold assertion.

Furthermore, if consciousness cannot be measured, should it even matter if his claim is accurate? This inquiry echoes the rationale behind Alan Turing’s formulation of the Imitation Game—now known as the Turing Test—in his influential 1950 paper, “Computing Machinery and Intelligence.” He recognized that asking whether machines can think is too ambiguous to yield meaningful insights. (Currently, it is widely accepted that the Turing Test alone is insufficient for assessing AI intelligence.)

Regardless of the practical value of Sutskever’s statement and the challenges in defining consciousness, his tweet attracted responses from notable figures in AI and neuroscience. Yann LeCun, Chief AI Scientist at Meta, replied with a curt “Nope,” suggesting that neural networks would need a specific architecture—likely beyond our current capabilities—to attain any form of consciousness. Stanislas Dehaene, a distinguished cognitive psychologist, echoed LeCun’s sentiments, referencing a paper he co-authored in Science that concluded, “we argue that the answer is negative: The computations implemented by current deep-learning networks correspond mostly to nonconscious operations in the human brain.”

Experts focused on the ethical implications of AI also weighed in on Sutskever’s tweet. Melanie Mitchell, a Davis Professor at the Santa Fe Institute, and Emily M. Bender, a linguistics professor at the University of Washington, adopted a mocking tone to highlight the absurdity of claiming AI might be slightly conscious without substantiation.

Deb Raji, a PhD student in computer science at UC Berkeley, pointed out the ethical dilemmas that could arise from perceiving AI as a conscious entity.

Conversely, figures like Andrej Karpathy, Director of AI at Tesla, and Sam Altman, CEO of OpenAI, appeared to support Sutskever’s views. Altman seized the moment to needle LeCun, in what looked more like a recruiting maneuver aimed at Meta’s researchers than a genuine discussion of conscious AI.

Altman clarified his stance by stating that “GPT-3 or -4 will very, very likely not be conscious in the way we use the word… the only thing I will say with certainty on the topic is that I am conscious.”

Leading figures in AI research must exercise intellectual humility to prevent sensational yet potentially misleading claims from dominating public discourse and influencing uninformed audiences. OpenAI should prioritize enhancing the safety of its popular models, like GPT-3, instead of merely amplifying their capabilities. Interestingly, their latest model, InstructGPT—which Altman touted as safer than GPT-3—has been found to be even more toxic and harmful when misused.

In closing, while AI may eventually achieve consciousness—if it ever does—it is likely a distant prospect. Even if we reached a consensus on its definition, how could we verify its presence? In science, a phenomenon counts only insofar as we can measure it, and phenomena that lie beyond our cognitive abilities may be better treated as mysteries than as solvable problems. For now, the subjective essence of statements like “I feel like me” renders consciousness unfathomable to our current understanding.

I believe we should distinguish neurocognitive and philosophical explorations of human consciousness from its examination in artificial intelligence. How could we scientifically grasp what it feels like to be an AI? We ought to follow Turing’s lead and refrain from asking whether AI is conscious. Instead, we should define more tangible, measurable traits that correlate with the nebulous concept of consciousness—akin to how the Turing Test relates to the notion of thinking machines—and devise tools and methodologies to assess them. This would allow us to compare AI systems against human baselines and determine the degree to which they exhibit such characteristics.
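To make the proposal concrete, here is a minimal sketch, in Python, of what a trait-based assessment could look like. Everything in it is hypothetical: the probes, the keyword markers, and the `query_model` stand-in are placeholders of my own invention, and a real methodology would require validated instruments and genuine human baseline data rather than keyword matching.

```python
# Purely illustrative: score measurable proxy traits instead of asking
# "is this AI conscious?" directly. All probes and markers are made up.
from dataclasses import dataclass


@dataclass
class Probe:
    trait: str           # e.g. "self-model" or "environment awareness"
    prompt: str
    markers: list[str]   # crude keywords a trait-consistent answer might contain


def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (an API request, say).
    return "I can summarize my previous answer and explain where it came from."


def score(probe: Probe, response: str) -> float:
    # Fraction of markers present, in [0, 1]. A real study would need
    # validated instruments, not keyword matching.
    text = response.lower()
    return sum(marker in text for marker in probe.markers) / len(probe.markers)


def evaluate(probes: list[Probe]) -> dict[str, float]:
    # Average score per trait, so results can be compared to human baselines.
    by_trait: dict[str, list[float]] = {}
    for probe in probes:
        by_trait.setdefault(probe.trait, []).append(
            score(probe, query_model(probe.prompt))
        )
    return {trait: sum(s) / len(s) for trait, s in by_trait.items()}


if __name__ == "__main__":
    probes = [
        Probe("self-model", "Summarize what you just said and why.",
              ["previous", "answer"]),
    ]
    print(evaluate(probes))  # e.g. {'self-model': 1.0}
```

The point of the sketch is the shape of the method: fix the traits in advance, score every system the same way, and report per-trait numbers that can sit next to a human baseline, rather than issuing a single yes-or-no verdict on consciousness.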

On a positive note, despite the recent discourse sparked by Sutskever’s provocative statement and the challenges facing the AI community—from workplace discrimination to toxic language models—it's encouraging to witness AI researchers engaging in dialogue with philosophers and neuroscientists on a topic that necessitates collaboration across these fields for progress. It would be promising to see more minds inspired to work at the intersection of disciplines that have, for too long, remained distant from one another.

If you’ve read this far, consider subscribing to my free biweekly newsletter *Minds of Tomorrow*! News, research, and insights on AI and Technology every two weeks!

You can also support my work directly and gain unlimited access by becoming a Medium member using my referral link *here*! :)