The Interplay of Language, Cognition, and LLMs: Where Fuzziness Meets Precision
In our series on AI, LLMs, and Language so far, we’ve explored a few implications of LLMs for language and literacy development:
1) LLMs gain their uncanny powers from the statistical nature of language itself;
2) the meaning and experiences of our world are more deeply entwined with the form and structure of our language than we previously imagined;
3) LLMs offer an opportunity for further convergence between human and machine language; and
4) LLMs can potentially extend our cognitive abilities, enabling us to process far more information.
In a previous series, “Innate vs. Developed,” we also challenged the idea that language is entirely hardwired in our brains, highlighting the tension between our more recent linguistic innovations and our more ancient brain structures. Cormac McCarthy, the famed author of some of the most powerful literature ever written, did some fascinating pontificating on this very issue.
In this post, we’ll continue picking away at these tensions, considering implications for AI and LLMs.
Fuzziness and Precision in Language Development and Use
To start us off, I want to ground our exploration in two concepts we’ve covered previously in “An Ontogenesis Model of Word Learning in a Second Language”:
Fuzziness: “inexact or ambiguous encoding of different components or dimensions of the lexical representation that can be caused by several linguistic, cognitive, and learning-induced factors. These factors include, among others, changes in neural plasticity, the complexity of mapping L2 semantic representations on the existing L1 semantic representations and of mapping L2 forms on the semantic representations, and problems with L2 phonological encoding”
Optimum: “the ultimate attainment of a representation (or its individual components), i.e., the highest level of its acquisition, when the representation is properly encoded and no longer fuzzy”
I think these concepts are useful not only for thinking of learning new words in a language, but also for how we interact with LLMs and the language they are trained upon.
From Fuzziness → Optimum
When we first learn a language, whether while in the womb, in school, or after moving to a new community, what we hear and understand is fuzzy. The first thing we attune to is the prosody of the language: its tones, volume, and duration. We can’t yet fully distinguish words and sentences within a stream of speech, nor syllables from phonemes, nor vowels from consonants, let alone connect those sounds (or signs) to meaning and use them to communicate with others.
Yet as we gain greater discernment across hearing, vision, movement, and speaking, our representations of a language become more flexible and more precise. As I’ve written about elsewhere, connecting speech directly to its form in writing can enhance language and reading and writing development simultaneously. Oral and written language – and reading and writing – can develop reciprocally. Developing one supports refining the other.
Why would that be, given we didn’t invent the technology of writing until far down the timescale of human evolution?
Precision in Language and Cognition
Maybe it’s because the written form of a language requires greater precision in the representation in our minds. When greater precision is required, it takes more time and effort, at least initially, to produce.
As an example, you may have heard of the term “receptive bilinguals.” These are individuals who can understand the gist of an everyday conversation in another language, but may struggle to speak or produce it fluently. This is because they may have had fairly significant exposure to the language, especially in childhood, but their mental representations remain “fuzzy” because they rarely produce the language either orally or in written form.
The more that we hear and read AND produce a word – and particularly when we produce it both orally and in writing – the more likely and quickly we are to reach optimum.
We see this process play out in real time with babies. They listen to our sounds and watch our faces, then begin to babble, mimicking us. They begin connecting those sounds to things and ideas. And then they begin to gain a more precise understanding and use of a word, from there stringing multiple words together into sentences, again starting haphazardly and working towards greater flexibility and precision.
Fuzziness, Precision, and Specialization in Language, Cognition, Computation, and Literacy
LLMs have demonstrated that there is far more knowledge, meaning, and comprehension of the world embedded within the statistical relationships of the words and phrases we use than we previously suspected.
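To make that claim concrete, here is a minimal sketch of distributional semantics in Python. The corpus, vocabulary, and counting scheme are all invented for illustration, and real LLMs learn vastly richer representations than raw co-occurrence counts; but the underlying principle is the same: words used in similar contexts end up with similar statistical profiles, and meaning falls out of those profiles.

```python
from collections import Counter
from math import sqrt

# Toy corpus: meaning emerges from which words co-occur with which.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
    "the king ruled the land",
    "the queen ruled the land",
]

# Build a co-occurrence vector for each word: count its sentence-mates.
vectors = {}
for sentence in corpus:
    words = sentence.split()
    for w in words:
        context = vectors.setdefault(w, Counter())
        for other in words:
            if other != w:
                context[other] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Words used in similar contexts get similar vectors:
print(cosine(vectors["cat"], vectors["dog"]))   # relatively high
print(cosine(vectors["cat"], vectors["king"]))  # relatively low
```

Even at this tiny scale, “cat” lands closer to “dog” than to “king” purely from patterns of use, with no definitions supplied anywhere.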
As we’ve also explored, there are fuzzier and more precise terms and concepts in a language. The more abstract and “decontextualized” an event or idea (meaning that the event or idea is not readily available in the context of that environment or moment), the more precise, vivid, or specialized our language becomes in the effort to describe it. This can lead us all the way to the extreme of computational language, which is highly precise, much harder for humans to learn, and quite alien in comparison to the general fuzziness of our everyday language used to communicate about everyday things.
The reason read-alouds are so very powerful in the beginning of childhood (and arguably, through adolescence, perhaps even beyond) is that they provide children with exposure to and immersion in this more decontextualized type of language and more abstract and broad understandings of the world. This helps prepare them for when they later engage with written forms of language and increasingly discipline-specific forms of discourse.
As language learning develops towards greater precision, networks in the brain are forged and strengthened. One of the reasons why early childhood is so incredibly important to language and literacy and motor development is because the brain supercharges the neural connections it is forming in all directions. Dendrites spring up like fungus after a rain. But learning new things requires a bit more effort as we age because we work far more on pruning our existing connections for efficiency.
Yet no matter our age, developing these increasingly robust cross-brain connections, and then increasingly specializing and refining them for specific domains and uses, can increase our mental resilience.
We can see this process of specialization play out in real time with young children as they learn to read and write. As they gain greater precision with representations of language through spelling, writing, and volume of reading, their brains increasingly forge further connections between the architecture used with executive function, speech, vision, and motor control, while then specializing and refining them.
Developing language and literacy in multiple languages – to the point of optimum – even further connects, specializes, and refines those networks. And when one is bi- or multi-literate on disciplinary topics – with the specialized and precise language required for communicating flexibly about those topics – then those networks are yet further refined.
Arguably, something similar happens with the development of cognition. Cognition—a fancy way of saying “awareness, knowledge, and understanding”—includes the facets of executive function and memory that are also tapped into when developing language, yet, in terms of the processes identified through brain scans, it is surprisingly separable from language in the brain.
I think a useful way to think of this distinction may be the difference between the unconsciousness, or lack of awareness, we may have about something PRIOR to learning it, and the unconsciousness and lack of awareness we have AFTER learning it to optimum. When we have attained fluency with a skill or pushed our knowledge into long-term memory, we no longer need to apply much effort – nor thought – to drawing upon it. The degree of effort required to learn or use something determines the level of cognition we need to initially draw upon. And while we can certainly expand our cognitive ability and other aspects of our learning potential, there are also hard upper limits – such as the bottlenecks of our working memory and our attention.
We overcome those bottlenecks by committing important information to long-term memory through regular use and communication, automatizing regularly used skills through practice, and leveraging the institutionalization of knowledge-based communities and the technologies of writing (texts) and digitization to process, communicate, and further refine larger volumes of information.
The Limitations and Potential of LLMs
While human children rapidly develop language and literacy from comparatively minimal amounts of input and interaction in their world, LLMs are trained on vast bodies of text, the majority in written form (thus far). Their training refines and sharpens their ability to predict which tokens and words come next, based on what we have fed them.
Similar to human brains, LLMs move along a fuzzy-to-precise spectrum as they refine the “weights” they assign to linguistic tokens across their many layers. Early or small LLMs, akin to our “receptive bilingual” example earlier, demonstrate some receptive capabilities, but their generated outputs are highly fuzzy, as they did not have sufficient neural layers, training, and feedback (i.e., sufficient input and production) to achieve something close to optimum in their generation of human-like language.
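This fuzzy-to-precise movement can be loosely illustrated with a toy next-word model. To be clear, this is not how LLMs are actually trained (they adjust neural network weights by gradient descent, not by counting), and the vocabulary and observations below are invented for illustration; but the sketch shows the general idea that a prediction distribution sharpens, becoming less “fuzzy” in the information-theoretic sense of entropy, as evidence accumulates:

```python
from collections import Counter
from math import log2

# Hypothetical vocabulary of words that might follow "good".
vocab = ["morning", "evening", "grief", "riddance"]

def next_word_dist(observations, alpha=1.0):
    """Laplace-smoothed distribution over what follows 'good'.
    With few observations, smoothing dominates and the distribution
    is nearly uniform (fuzzy); with more evidence, it sharpens."""
    counts = Counter(observations)
    total = len(observations) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def entropy(dist):
    """Shannon entropy in bits: a measure of fuzziness."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

little = ["morning"]                                    # one observation
lots = ["morning"] * 80 + ["evening"] * 15 + ["grief"] * 5

print(entropy(next_word_dist(little)))  # high entropy: still fuzzy
print(entropy(next_word_dist(lots)))    # lower entropy: more precise
```

The same counting scheme, fed more data, produces a more peaked and confident prediction: a crude analogue of moving from fuzziness toward optimum.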
But to state the obvious, LLMs do not experience the world as we do. They have no bodies, no sensory input, no social interactions (unless you count the part of their training that requires humans to provide them with corrective feedback). As a reminder, the fact that they have the capabilities they do–derived merely from the accumulated statistical relationships of parts of words–is remarkable. They do not “think,” at least, not in the manner in which our own cognition functions, and they do not continuously build and further refine their knowledge–yet–from ongoing interactions and input from other AI and with us.
LLMs are like if we took away all the other parts of our brain—those more ancient parts that continue solving problems, help us steer our way home, and keep our hearts beating—and only left the parts dedicated to language. That they are able to do all they can from mere statistical relationships forged from language alone is–again–remarkable, but it also shows us their limitations.
To be frank, that the dialogue has been so singularly focused on the “intelligence” of LLMs, with the goal of forming “artificial general intelligence” (AGI), seems remarkably off base to me. What I am far more interested in is the potential of these models to teach us something about our own development of language and literacy–and thus, how we can better teach those abilities–and to extend our own cognitive abilities.
Enhancing Cognition with AI
Towards this end, I want to suggest some implications for education that take us away from fears about AI making kids dumber or taking jobs away from teachers.
AI and LLMs can enhance our cognitive abilities by helping us to:
Process Large Amounts of Information to Gain Knowledge: AI and LLMs are getting better and better (seemingly every week) at sifting through vast amounts of information, such as databases, research, transcripts, and other documents, to help us summarize, answer questions, paraphrase, and understand the relevant knowledge contained in them. Furthermore, they are getting better and better at translating across multiple languages and at reading multiple modalities. You can feed an LLM an image with text in another language and it can read it.
Augment Our Own Thinking and Writing: LLMs work really well in helping us spitball ideas or redraft our own writing. The fear that they will stop kids from being taught to write is misplaced – the writing produced by LLMs is only as good as what they are given. Yes, they are great at boilerplate forms of writing! But that’s the exact kind of writing that we do want to automate and reduce our own time and thinking on. When it comes to deeper writing and thinking like this series and post, it ain’t writing it for me. But I do find it really helpful when I get stuck or when I want to get suggestions for revision.
In Sum
The effectiveness of our use of AI and LLMs hinges on the quality of our input.
As with previous tools like Google Search, the more precise and informed our prompts, the more powerful and accurate their responses.
Another way of framing this idea: LLMs can help us further widen or refine our own ideas and language. They are far less useful at simply handing ideas and language to us. They mirror and leverage what we provide to them.
There is a lot of talk about the “hallucinations” of LLMs, but perhaps a better way to frame it is as “pixelation,” or grain size. There are larger and smaller grain sizes of pixels. The coarser the grain, the less clear it is. The finer the grain, the sharper it becomes. The more vague and broad the grain size we feed them, the more BS they will spit. The more precise and narrow grain sizes we provide, the more accurate and useful their responses will be. They can then help us move into different grain sizes from there (either widen our lens, or narrow our lens).
This means that we need to keep teaching our kids stuff. The more knowledge they have, the more precise and flexible their ability to wield language, the better they can use powerful tools like AI.
We can help kids to use AI in this way, and we can create tech-free spaces in our schools where they put in the cognitive effort and time required to build their fluency with language and literacy and read texts that build their knowledge. And then when we engage them with the tech, we teach them how to use it to extend, rather than diminish, their own potential.
There are implications here for teachers too – in fact, I think the most exciting potential for AI is actually freeing teachers up to spend more time teaching, and less time marking up papers and analyzing data. But that’s for another post.
#AI #LLMs #cognition #language #literacy #learning #education