Mistaken Oracles in the Future of AI

It's popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely “oracular AI”. Tools like ChatGPT are one manifestation of this, a form of question-and-answer system that can return answers that will soon seem superhuman in breadth of content and flexibility of style. I suspect most educators don't think much about this framing of AI as oracle, but we should, because it both explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.

Here's how the LessWrong site, mentioned earlier, describes oracular AI (for their overall perspective, definitely take in the full set of ideas there):
> An Oracle AI is a regularly proposed solution to the problem of developing Friendly AI. It is conceptualized as a super-intelligent system which is designed for only answering questions, and has no ability to act in the world. The name was first suggested by Nick Bostrom.

Oracular here is a de-historicized ideal of the surface function of an oracle, made into an engineering system where the oracle just answers questions based on superhuman sources or means but “has no ability to act in the world.” The contrast is with our Skynet future (choose your own AI-gone-wild movie example), where AI has a will and, once connected to the means, will most certainly wipe out all of humanity, whether for its own ends or as the only logical way to complete its preprogrammed (and originally innocuous, in most clichés) goals.

Two things to note here:
1. This is an incredibly narrow view of what makes AI ethical, focusing especially on the output, with little attention to the path to get there. I note in passing that much criticism of current AI is aimed less at the outputs and more at the exploitation of human capital and labor that goes into producing said outputs.
2. This is a completely backwards view of oracles.

The second point matters to me more, primarily because it's a recurring pattern in technological discussions. The term “oracle” has here been reduced to a transactional function in a way that flattens its meaning to the point that it evokes the opposite of the historical reality. This isn't just marketing pablum; it's a selective memory with significant consequences, a metaphor used to frame the future. Metaphors like this construct an imaginary world from the scaffolding of the original domain. When we impoverish or selectively depict that original domain, when we distort it, we delude ourselves. This is not just a pedantic mistake but a flaw of thinking that makes more acceptable a view we should treat with a bit more circumspection. What's more, the cues to suspicion are right there in front of us. The fullness of the idea matters, because we can see that the view of oracular AI as friendly AI is a gross distortion, one that almost comically ignores the wisdom to be gained by considering the complex reality that is (and was) oracular practice.

(Since the term “oracle” generally looks back to ancient practices, those who want some scholarly grounding can check out Sarah Iles Johnston, Ancient Greek Divination; Michael Flower, The Seer in Ancient Greece; Martti Nissinen, Ancient Prophecy; etc., or, for other eras and with electronic-resource access, e.g., the Oxford Bibliographies entry on prophecy in the Renaissance.)

Long story made very short, oracles are not friendly question-and-answer machines. They are, in all periods and cultures, highly biased players in religio-political gamesmanship. In the case of perhaps the most famous, the Pythian oracle in Ancient Greece, the answers were notoriously difficult to interpret correctly (though the evidence for literary representations of riddling vs. the actual delivery of riddling messages is more complicated). Predicting the future is a tricky business, and oracular institutions and individuals were by no means disinterested players. They looked after themselves and their own interests. They often maintained a veneer of neutrality in order to prosper.

That is all to say that oracularism is in fact a great metaphor for current and near-future AI, but only if we historicize the term fully. I expect current AI to work very much like oracles, in all their messiness. They will be biased, subtly so in some cases. Their answers will come from unclear methods, trusted and yet suspect at the same time. And they will depend above all on humans to make meaning from nonsense.

This last point, that the answers spouted by oracles might be as nonsensical as they are sensical, is vital. We lose track of it amidst the current noise over whether generative AI produces things that are correct or incorrect, copied or original, creative or stochastic boilerplate. The more important point is that humans will fill in the gaps and make sense of whatever they are given. We are the ones turning nonsense into sense, seeing meaning in a string of token probabilities, wanting to take as true something that might well be a grand edifice of bullshittery. That hasn't changed since the answer-givers were Pythian priestesses.

Oracular AI is a great metaphor. But it doesn't say what its proponents think it says. We humans are the ones who get to decide whether it is meaningful or meaningless.

#chatgpt #ai #edtech #aiineducation #education