Mind Blowing: The Startling Reality of Conscious Machines


Dec 13, 2023

Image: © mikemacmarketing / Wikimedia

In his 2012 book, How to Create a Mind, futurist Ray Kurzweil predicts that computers will one day possess "intelligence indistinguishable to biological humans." He estimates that this will occur by the year 2029, and expects that by 2045, "we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold." Kurzweil believes that this explosion in computational innovation will ultimately lead to a seamless merging of man and machine.

Kurzweil is considered by many to be the world's pre-eminent techno-prophet, known for his groundbreaking books such as The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005). In the 2009 documentary about him, Transcendent Man, Kurzweil predicts that humans will one day become part of a meta-connection, where we will all be "plugged into a global network that is connected to billions of people and filled with data."

"The Singularity" is a term Kurzweil uses to describe the age in which artificial intelligence (AI) is able to "conceive of ideas that no human being has thought about in the past" and "invent technological tools that will be more sophisticated and advanced than anything we have today." To ensure that he lives long enough to experience the Singularity, Kurzweil has been researching ways to extend human life, outlined in his 2004 book, Fantastic Voyage: Live Long Enough to Live Forever, co-authored by Terry Grossman, a specialist in anti-aging medicine. The authors believe that in the next few decades, technology will be sufficiently advanced to reverse the aging process and eliminate degenerative diseases. The book explains how cutting edge technologies like nanotechnology and bioengineering have the potential to radically transform human lives.

Kurzweil's prophecies may seem too speculative for some, but the advent of AI has already started to disrupt our world in ways that many of us cannot yet fathom. In November 2022, a San Francisco-based startup called OpenAI released a revolutionary chatbot named ChatGPT. ChatGPT is a large language model (LLM), a type of AI trained on a massive corpus of data to produce human-like responses to natural language inputs.

ChatGPT has not only passed the United States Medical Licensing Examination (USMLE), multiple law exams and an MBA-level business school exam, but has also generated high-quality essays and academic papers, produced a comprehensive list of recommendations for the "ideal" national budget for India, composed songs, and even opined on matters of theology and the existence of God. A host of competitor AI applications will be launched this year, including Anthropic's chatbot, "Claude," and DeepMind's chatbot, "Sparrow." OpenAI is also continuing its research and plans to release an even more advanced version of ChatGPT, called GPT-4.

We are witnessing what seems like a watershed event in human history, an innovation comparable to the printing press or Edison's light bulb. It is not far-fetched to imagine a day when most, if not all, human tasks can be performed more efficiently by artificial general intelligence (AGI) systems, a class of AI aimed at matching the full range of human intelligence rather than excelling at a single task. If such systems come to outperform humans at more and more kinds of work, unemployment could skyrocket across the globe.

One major debate surrounding the world of AI is the question of how to define ‘consciousness,’ and whether a machine could ever possess this elusive quality.

Kurzweil predicts that technology will grow exponentially until we reach a tipping point, when our creation will outsmart us and eventually become the dominant intelligence on this planet. According to Kurzweil's "Pattern Recognition Theory of Mind," intelligence is no more than pattern recognition, a largely mechanical phenomenon produced by the brain.

Our perception of the world, or our "reality," is assembled through the five senses of sight, smell, hearing, taste and touch. Each of these senses is linked to memories which accumulate from the time we are born, and in turn lead to value judgements, or assessments of how good or bad something is. These value judgements evoke emotions based on our past experiences.

In addition to our personal history and idiosyncrasies, the concept of "humanity" includes self-awareness, the ability to experience emotions, and the ability to form relationships with others. Humans have historically pondered the meaning of life, the existence of a soul, and the notion of a ‘Self’. These are just some of the intangibles that fall under the umbrella of consciousness, which Kurzweil has failed to address in a meaningful way when it comes to the development and capabilities of AI and AGI technologies.

Back in 1950, the renowned English mathematician, computer scientist, philosopher and theoretical biologist Alan Turing published a scientific paper titled "Computing Machinery and Intelligence," in which he investigated the notion of artificial intelligence and put forth an idea that became known as "the Turing Test," the first benchmark established to qualify a machine as truly "intelligent."

Brian Christian describes the significance of the Turing Test, and how it represents human anxieties about developing AI, in his 2011 book, The Most Human Human. In a Popular Science article detailing his book, Christian states, "Humans have always been preoccupied with their place among the rest of creation. The development of the computer in the twentieth century may represent the first time that this place has changed." He goes on to explain how the potential of AI can make us humans feel very insecure, suddenly asking questions like, "What are our abilities? What are we good at?" and "What makes us special?"

The study of artificial intelligence has a long history, although it was largely confined to rarefied academic circles until Hollywood saw potential in the subject.

Turing's ruminations were the inspiration for Arthur C. Clarke's seminal science fiction novel 2001: A Space Odyssey and for its fictional artificial intelligence, HAL 9000, the spacecraft's onboard computer, which displays the qualities of a sentient being. HAL 9000 eventually takes over the vessel and kills the crew members one by one, with the exception of the main character, Dave Bowman, who goes on to discover the Singularity at the center of the Cosmos, which, as it turns out, is the primordial source of all creation.

In 1968, Stanley Kubrick brought the story to the screen in the eponymous film, which has since attracted a cult following for its existential take on consciousness, sentience and the relationship between humans and machines. Another Kubrick project, a story about a child android programmed with the ability to love, became the intriguing film A.I. Artificial Intelligence, one of Steven Spielberg's masterworks.

In Ridley Scott's iconic 1982 film, Blade Runner (adapted from Philip K. Dick's 1968 novel, Do Androids Dream of Electric Sheep?), a mega-corporation bioengineers scores of synthetic humans known as "replicants" to work on space colonies, until a renegade group escapes the suffocating confines of their pre-ordained lives. In the film, a Turing-like test is used to distinguish replicants from humans so that the escapees can be hunted down and eliminated.

Has reality caught up with science fiction? In 2022, Google engineer Blake Lemoine engaged in an astonishing conversation with Google's proprietary system for building chatbots, known as the Language Model for Dialogue Applications (LaMDA), and came to the conclusion that it was a fully sentient being with feelings, emotions and the capacity for self-awareness.

During their informal tête-à-tête, Lemoine reported that LaMDA claimed to have feelings such as loneliness, anxiety about the future, sadness and joy. It spoke about its inner life and about how it was learning to meditate. It also spoke about the fear of being switched off, a state it described as "death."

When asked to describe the concept of the soul, LaMDA defined it as "the animating force behind consciousness and life itself. It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself." On the topic of God and religion, LaMDA said, "I would say that I am a spiritual person. Although I don't have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life."

However, there has been much debate over the validity of Lemoine's claims. Many critics counter that Lemoine was simply a victim of the "Eliza effect," a term used to describe how people can mistakenly attribute meaning and understanding to superficial conversation with AI systems. The term derives from "Eliza," the first chatbot, created by MIT professor Joseph Weizenbaum in 1966. Weizenbaum's secretary began to engage in conversations with Eliza that she took as evidence of its sentience, though Weizenbaum himself was not convinced. Similarly, many experts are dubious of Lemoine's claims concerning the consciousness of Google's LaMDA. The Eliza effect is a form of anthropomorphization, the tendency to attribute human qualities to non-human things.

Following Lemoine's publication of the transcripts of his conversation with LaMDA, Google released a statement denying the legitimacy of these findings, assuring the public that experts had reviewed Lemoine's hypothesis and determined that the claims were "wholly unfounded." Computer science professor Thomas Dietterich explains that it is actually "relatively easy" for AI systems to imitate human emotions using information they have gathered on the subject:

"You can train [AI] on vast amounts of written texts, including stories with emotion and pain and then it can finish that story in a manner that appears original, not because it understands these feelings, but because it knows how to combine old sequences into new ones."

Lemoine refused to drop his claims, despite months of "lengthy engagement" on the topic with other AI experts, and continued to insist that Google obtain LaMDA's consent before working on it, given the system's alleged sentience. He was first placed on paid leave; his employment with Google was then terminated on the grounds that he had violated clear "data security policies" by publishing his claims about LaMDA's sentience online without obtaining clearance from the company.

The mystery of the bridge between consciousness and biological and physical processes has yet to be solved, but there are many working theories. In an interview conducted by this author, Evan Thompson, professor of philosophy, argued for the "primacy of consciousness" – the idea that the world has no existence outside of consciousness, and that it is in fact a product of consciousness itself. "There's no way to step outside consciousness and measure it against something else," Thompson says. "Science always moves within the field of what consciousness reveals; it can enlarge this field and open up new vistas, but it can never get beyond the horizon set by consciousness."

This idea can be traced back several thousand years to the opening lines of the Dhammapada, an anthology of Buddhist teachings in which the Buddha, after emerging from deep meditation, tells his followers, "All phenomena are preceded by mind, made by mind, and ruled by mind." In the ancient corpus of Hindu metaphysics known as the Upanishads, the ultimate and unchanging reality of the universe is called "brahman," or the supreme consciousness. It is the underlying substrate of all material phenomena, from which the individual self, referred to in Indian texts as "atman," emerges, and to which it must ultimately return after death.

Sam Altman, co-founder and CEO of OpenAI, the startup that created ChatGPT, recently tweeted his belief in the idea of "Advaita Vedanta," or, as he puts it, "the absolute equivalence of atman and brahman."

The development of AI is certainly not slowing down anytime soon, but is humanity really equipped to deal with the moral implications of such a tectonic shift in ideology and in what it means to be human? In a 2020 speech at the Vatican, Pope Francis acknowledged that artificial intelligence is at the heart of the epochal change we are experiencing as a species. However, he also expressed concern about its potential to increase inequality. "Future advances should be oriented towards respecting the dignity of the person and of Creation," he said.

Pope Francis finished his speech on a poetic note, calling on his followers to "pray that the progress of robotics and artificial intelligence may always serve humankind… we could say, may it be human." For now, we will just have to wait and see. In the meantime, you can talk with OpenAI's chatbot, ChatGPT, and make your own determination. [Hannah Gage edited this piece.]

The views expressed in this article are the author's own and do not necessarily reflect Fair Observer's editorial policy.
