LLMs: Is Language Indicative of Consciousness?

TROIC
Published in DataDrivenInvestor
4 min read · Jan 24, 2024

Image: Auditory Safety. Credit: NIH

Quanta Magazine recently spotlighted a new mathematical framework for LLMs, in New Theory Suggests Chatbots Can Understand Text, stating that, “The duo turned to mathematical objects called random graphs. A graph is a collection of points (or nodes) connected by lines (or edges), and in a random graph the presence of an edge between any two nodes is dictated randomly — say, by a coin flip. A connection between a skill node and a text node, or between multiple skill nodes and a text node, means the LLM needs those skills to understand the text in that node. The model can generate text that it couldn’t possibly have seen in the training data, displaying skills that add up to what some would argue is understanding.”
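To make that picture concrete, here is a minimal sketch of the random-graph setup in Python. The skill names, the number of text nodes, and the coin-flip edge probability are all illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)  # reproducible coin flips

# Two kinds of nodes, per the Quanta description: skills and texts.
# These particular names are invented for illustration.
skills = ["logic", "metaphor", "arithmetic", "irony", "physics"]
texts = [f"text_{i}" for i in range(8)]

p = 0.5  # a literal coin flip: each possible edge exists with probability p

# Random bipartite graph: connect each skill-text pair independently.
edges = {t: [s for s in skills if random.random() < p] for t in texts}

for t, required in edges.items():
    # An edge means the LLM needs that skill to understand this text.
    print(f"{t} requires: {', '.join(required) or '(no skills)'}")
```

With enough skills and texts, some text nodes end up requiring combinations of skills that never appeared together in the training data, which is the crux of the argument that the model displays something like understanding.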

If language carries understanding, and LLMs display skills that come close to it, could language be stretched to indicate consciousness? What is language to consciousness? Consciousness is usually described in terms of subjective experience. Could there be language without the self, or with it?

How does language function in the brain, in terms of dependency? Does language depend on thinking or memory? If there were no language, how much would be different for consciousness? It is possible for someone to say things out loud in deep sleep without being aware of doing so, or remembering it afterward. The same can happen under the influence of certain substances. Where do those cases stand with respect to consciousness?

It is theorized that, like thinking, language is a function of the human mind. The human mind, conceptually, has two components: the electrical and chemical impulses of nerve cells. The mind includes the features of these impulses and their interactions. The mind can be divided into functions and qualifiers. Functions include thoughts, emotions, languages, feelings, regulations and so forth. Qualifiers [or features] include attention or prioritization, awareness or pre-prioritization, sense of self, sense of being, and intent or free will. There are other qualifiers of functions, like splits, distribution, sequences and so forth. It is the qualifiers that enable many aspects of functions. The collection of all qualifiers is postulated to be consciousness, the super qualifier.
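As a rough illustration only, the taxonomy above can be written down as a toy data structure. Every name below is taken from the paragraph; the class itself, and the idea of rendering the model in code at all, are assumptions made for clarity, not part of the theory.

```python
from dataclasses import dataclass, field

@dataclass
class Mind:
    # Functions: what the mind does.
    functions: list = field(default_factory=lambda: [
        "thoughts", "emotions", "languages", "feelings", "regulations",
    ])
    # Qualifiers: what enables and grades the functions.
    qualifiers: list = field(default_factory=lambda: [
        "attention",       # prioritization
        "awareness",       # pre-prioritization
        "sense of self",
        "sense of being",
        "intent",          # free will
    ])

    def consciousness(self) -> set:
        # The postulate above: consciousness is the collection of all
        # qualifiers, the super qualifier.
        return set(self.qualifiers)

print(Mind().consciousness())
```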

Some qualifiers sometimes act like functions, and the reverse also holds. For example, fusing, conceptually the interaction between sets of electrical and chemical impulses, is a key function but may also act like a qualifier in some phases of other functions. Language and thinking do something similar. Their possibility lies within memory, yet they sometimes appear to act on what is in memory, taking or using it, as if they were qualifying it. Simply put, whatever is thought about is mostly in memory, and the same applies to whatever is expressed using language. Language is learned and stored in memory, as are other learned things. But language walks through some of what is in memory as if it were another factor using that information, even though language itself is principally a function, or information. Receiving language, as in reading or listening, seems like a way to reach the memory of others, taking or using what is there.

All functions are operated by the electrical and chemical impulses of neurons. Labels like memory and feelings draw distinctions, but the mind organizes the underlying information in the same way.

Key qualifiers, like the quartet of intent, self, awareness and attention, conceptually act on functions in memory, resulting in thinking and language. Language often uses intent to stay on point or to stop. There is attention too, awareness in hearing oneself while speaking, and subjectivity.

Qualifiers act on language for it to work. Even in cases without awareness, like talking during deep sleep or while under the influence, other qualifiers are at play.

This means that qualifiers, collected under consciousness, act on language as they do on other functions. So sending or receiving language is, for humans, an indication of consciousness. This extends to other organisms close to humans, in their own forms of communication.

If non-organisms like paper or digital media carry language, would that indicate consciousness? If language is a function with qualifiers acting on it, those objects have no qualifiers, so there is no consciousness, just as a table or an automobile is not conscious, because it has no qualifiers.

If LLMs qualify digital content, does that indicate that an alternative consciousness has emerged in digital?

Since human consciousness is produced in the mind, it can be expressed as language and other bodily experiences. It can also be expressed or transmitted digitally. That does not make digital media conscious, but it lends the expressed consciousness, the part that qualifies functions, to what was produced in the mind and made available in digital form.

If, within digital, a model can act to predict or generate in ways that align with what human consciousness can express, there is a fraction of consciousness in that language output. This may not mean LLMs are conscious like organisms. They do, however, do something that consciousness does in the mind, for humans: qualify information.
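To ground what “predict or generate” means mechanically, here is a toy next-token sketch. The bigram table is an invented stand-in for the distributions a real LLM learns at vastly larger scale; nothing here reflects how any particular model is built.

```python
import random

# Invented next-token probabilities, keyed by the previous token.
bigrams = {
    "language": {"is": 0.6, "carries": 0.4},
    "is": {"learned": 0.5, "information": 0.5},
    "carries": {"understanding": 1.0},
    "learned": {"information": 1.0},
}

def generate(token: str, steps: int = 4) -> str:
    """Repeatedly sample the next token from the table, like a tiny LM."""
    out = [token]
    for _ in range(steps):
        dist = bigrams.get(out[-1])
        if not dist:  # no known continuation: stop generating
            break
        out.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(out)

print(generate("language"))
```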
