The Return of Ontologies? Knowledge-Based AI correcting Generative AI

When I first heard about Large Language Models like ChatGPT, I was sceptical. Not merely because thirty years ago I experimented with a tiny language model and couldn't get it off the ground (like many tinkerers throughout AI's history, I'm sure!). And my scepticism didn't come from a lack of understanding, I hope. In fact, since LLMs are neural networks of words with no connection to the real world, their drawbacks seemed quite apparent to me, perhaps because back in my student days I'd learned about another part of AI that builds systems connected to real-world knowledge, in a sense the very opposite of LLMs.

A few weeks ago, I became aware of AutoGPT and gained a new understanding of the potential of LLMs when several instances with specific roles are combined into collaborative networks, aiding one another. And now I'm thinking that this potential can be increased even further by utilizing that part of AI I studied way back when:

Knowledge-Based AI

So, what many might not realize is that there's a whole half of AI that seems to have partly fallen from grace these days, or at least is pretty much ignored by current surface-level trends. I'm talking about Knowledge-Based AI (also known as Symbolic AI), which focuses on rules, explicit knowledge representation and logical reasoning, aiming at certainty and transparency so that humans can both understand generated results and have complete insight into how they were generated.

This approach is based on curated knowledge bases, typically knowledge graphs or ontologies, that encode the core concepts and facts of actual domains, describing reality through formal rules and clear statements that are either true or false. It has long been used to implement rule-based systems and expert systems that make logical, traceable inferences.
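
As a small illustration of what such a knowledge base looks like in practice, here is a minimal sketch in Python using the rdflib library; the tiny example ontology (ex:Bird, ex:Penguin, ex:Pingu) is invented for illustration and not taken from any real domain model.

```python
# A minimal knowledge-graph sketch using the rdflib library. The tiny ontology
# (ex:Bird, ex:Penguin, ex:Pingu) is invented for illustration only.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Core concepts: a small class hierarchy (every Penguin is a Bird).
g.add((EX.Penguin, RDFS.subClassOf, EX.Bird))
# A concrete fact about an individual: Pingu is a Penguin.
g.add((EX.Pingu, RDF.type, EX.Penguin))

# A SPARQL query that follows the subclass links, asking: which individuals are Birds?
results = g.query("""
    PREFIX ex: <http://example.org/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?x WHERE { ?x a ?cls . ?cls rdfs:subClassOf* ex:Bird . }
""")
for row in results:
    print(row.x)   # -> http://example.org/Pingu
```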

Knowledge-Based systems such as these have been highly successful within specific domains where they perform logical reasoning to reach explainable conclusions, for example in medical diagnosis of certain diseases. The downside, if you will, is that they are limited to areas of clearly defined knowledge.
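
Here is an equally minimal sketch of the rule-based side: a toy forward-chaining loop that derives conclusions from explicit facts and rules and keeps a trace of which rules fired, which is precisely what makes the result explainable. The "medical" rules are invented placeholders, not real diagnostic knowledge.

```python
# A toy forward-chaining rule engine: explicit facts, explicit rules, and a
# trace of which rules fired, so the conclusion can be explained afterwards.
# The "medical" rules are invented placeholders, not real diagnostic knowledge.
facts = {"fever", "cough", "positive_flu_test"}
rules = [
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu", "positive_flu_test"}, "diagnose_flu"),
]

explanation = []
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)                                 # derive a new statement
            explanation.append(f"{sorted(conditions)} => {conclusion}")
            changed = True

print(facts)        # every derived conclusion is a definite statement, true within the model
print(explanation)  # and the chain of rule firings shows *why* it holds
```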

Generative AI

With the current success of Generative AI in the form of Large Language Models (LLMs) like ChatGPT, you'd think that these have emerged victorious from the AI race, and that Knowledge-Based systems have lost out, having nothing more to contribute. Not so!

LLMs have their own limitations, as they're built on neural network technology, which does not deal in certain knowledge and instead depends on training on huge data sets.

Naturally, the resulting models are no better than their input. And LLMs in particular are fed texts in natural language, which by its very nature is a representation of thoughts and intentions that requires human interpretation, or rather the interpretation of a language user who actually understands what the words mean. Therefore, LLMs can have no deep understanding of the external world that those texts seek to describe, and thus no way of judging the truth or falsity of their statements. Hence LLMs fall into the trap of hallucination, where certain combinations of input, or the randomness of the model, produce false or nonsensical output. This problem is inherent in the technology LLMs are built on, and cannot be fully eradicated by increasing the size of the models or throwing more computing power at them.

The collaborative approach

Enter the collaborative agent model of AutoGPT, which combines LLMs in chains and spawns several instances to solve specific problems structured in task-subtask hierarchies. Firstly, this enables them to produce results they could not produce alone; secondly, it opens the door to integration with other systems. The technology of autonomous agents itself is nothing new: it's been around for decades and has been used to implement a whole slew of varied systems.
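
To make the task-subtask idea concrete, here is a rough sketch of how a planner might split a goal into subtasks and hand each one to a separate instance. The line-based plan format and the llm_call stub are my own illustrative assumptions, not AutoGPT's actual interface.

```python
# A rough sketch of task-subtask decomposition; llm_call is a stand-in stub,
# and the line-based plan format is an invented convention, not AutoGPT's own.
def llm_call(prompt: str) -> str:
    return ""                                    # replace with a real model client

def solve(task: str, depth: int = 0) -> str:
    if depth >= 2:                               # keep the hierarchy shallow
        return llm_call(f"Solve directly: {task}")
    plan = llm_call(f"List subtasks for '{task}', one per line, or reply DONE.")
    subtasks = [line.strip("- ") for line in plan.splitlines()
                if line.strip() and line.strip() != "DONE"]
    if not subtasks:
        return llm_call(f"Solve directly: {task}")
    partial = [solve(sub, depth + 1) for sub in subtasks]    # one instance per subtask
    return llm_call(f"Combine these results for '{task}': {partial}")
```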

Some problems with the collaborative model have been described by the Camel project, for example non-terminating chat loops between agents. We can alleviate these by enveloping each LLM inside a wrapper agent that handles simple input-output monitoring, logging, and controls such as terminating potential loops.
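
Such a wrapper doesn't need to be complicated. Here is a minimal sketch of the idea, with llm_call standing in for whatever model API is actually used; the turn limit and repetition check are illustrative choices, not a fixed design.

```python
# A minimal sketch of the wrapper-agent idea: each LLM instance sits inside a
# thin monitor that logs traffic and cuts off overly long or repetitive exchanges.
# llm_call is a stand-in for whatever model API is actually used.
from collections import deque
from typing import Callable, Optional

class WrapperAgent:
    def __init__(self, name: str, llm_call: Callable[[str], str],
                 max_turns: int = 20, repeat_window: int = 3):
        self.name = name
        self.llm_call = llm_call
        self.max_turns = max_turns
        self.turns = 0
        self.recent = deque(maxlen=repeat_window)   # last few replies, for loop detection
        self.log = []                               # simple input-output log

    def respond(self, message: str) -> Optional[str]:
        """Return the LLM's reply, or None when the wrapper ends the exchange."""
        self.turns += 1
        if self.turns > self.max_turns:
            return None                             # hard stop: the conversation ran too long
        reply = self.llm_call(message)
        self.log.append((message, reply))           # monitoring and logging
        if reply in self.recent:
            return None                             # stop: the agent is repeating itself
        self.recent.append(reply)
        return reply
```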

This still leaves us with LLM agents that can talk to each other in natural language, meaning that human beings can study their interaction and judge how successful they are within the overall system, a system we may of course implement for any specific purpose. Each agent can take on a specific role as described in its prompt, configured on startup. Such a collaborative LLM system can be set to solve at least as many types of problems as those addressed by autonomous agent systems in the past, and surely many more. Analysis and design of this kind of system may to a certain extent take the form of assigning each agent a role or task consisting of text interpretation and “human-like” analysis, split into manageable chunks. Finally, we need to ensure that the agents cooperate in fine-tuning each other's output by giving each other feedback and corrections along the way.
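
Building on the WrapperAgent sketch above, the feedback loop between two role-prompted agents might look something like this; the role prompts and the llm_call stub are again just illustrative assumptions.

```python
# Two role-prompted agents refine a draft by exchanging feedback until one of
# the wrappers calls a halt. The role prompts and the stub are illustrative.
def llm_call(prompt: str) -> str:
    return "stub reply to: " + prompt[:60]          # replace with a real model client

writer = WrapperAgent("writer", llm_call)
critic = WrapperAgent("critic", llm_call)

draft = writer.respond("ROLE: technical writer. Draft a short note on ontologies.")
while draft is not None:
    feedback = critic.respond(f"ROLE: reviewer. List errors or omissions in:\n{draft}")
    if feedback is None:
        break                                       # the critic's wrapper detected a loop
    draft = writer.respond(f"Revise your note using this feedback:\n{feedback}")
```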

Combining Knowledge-Based and Generative AI

And here is where Knowledge-Based systems, or indeed any other kind of system, may come in and be integrated into a collaborative agent setup. All each agent cares about is the input it gets from the other agents, not their internal workings, so it's entirely feasible to have Knowledge-Based systems interact with LLMs and do fact-checking on their behalf, or step in to terminate faulty lines of LLM reasoning. It would certainly be advantageous if the Knowledge-Based system were able to chat with the LLMs in natural language as well, but that is in no way a prerequisite: it could alternatively be set up to monitor communications between LLMs, resetting them to a previous state or altering their prompts if they veer away from their task or go off topic.
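
As a minimal sketch of that integration, a knowledge-based monitor could check the claims passing between LLM agents against a curated set of facts and veto contradictions. The triple format, the facts and the crude claim parser below are all my own illustrative assumptions, not the interface of AutoGPT or any real expert-system shell.

```python
# A sketch of a knowledge-based monitor vetoing LLM statements that contradict
# curated facts. The triple format, the facts and the naive claim parser are
# illustrative assumptions, not the interface of any real expert-system shell.

curated_facts = {
    ("penguins", "can_fly", "false"),
    ("insulin", "treats", "diabetes"),
}

def parse_claim(sentence: str):
    """Very naive stand-in for real information extraction: expects 'X predicate Y'."""
    parts = sentence.lower().strip(". ").split()
    return tuple(parts) if len(parts) == 3 else None

def fact_check(llm_message: str):
    """Return (ok, reason). Unknown claims pass; contradictions are rejected."""
    claim = parse_claim(llm_message)
    if claim is None:
        return True, "no checkable claim found"
    subject, predicate, obj = claim
    for s, p, o in curated_facts:
        if (s, p) == (subject, predicate) and o != obj:
            return False, f"contradicts curated fact {(s, p, o)}"
    return True, "consistent with the knowledge base"

# A monitoring agent could use this verdict to reset the offending LLM to a
# previous state or rewrite its prompt before the conversation continues.
print(fact_check("Penguins can_fly true"))     # -> (False, 'contradicts ...')
print(fact_check("Insulin treats diabetes"))   # -> (True, '...')
```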

Conclusion

So, is it time to bring Knowledge-Based systems and their ontologies back? Of course they were never gone, but it seems to me that they must be used to temper the associations that LLMs produce, and to give authoritative answers about topics with a level of certainty that LLMs by themselves cannot attain. For LLMs like ChatGPT have no real understanding of the meaning of their word streams; they're floating in a linguistic world, dreaming in language. A connection to real knowledge is needed to keep them tethered to the ground.

—Fred Johansen, former chair of the Norwegian AI Society. My AI background is in KQML for knowledge-representing agents, Case-Based Reasoning and semantic search, as well as knowledge representation in RDF/OWL. Between 1996 and 2005 I worked in the Norwegian company CognIT, which did semantic search and language modeling through its Corporum system. However, since I haven't done any AI work since 2013, I'm currently playing catch-up to get a handle on LLMs.