Friday, November 21, 2025

wikichip

 "Wouldn't it be great if you had all of Wikipedia in a chip in your head?"

So here's a conundrum: suppose Elon inserts a chip in your brain with, as he said somewhere, "all of Wikipedia on it!" (I'm convinced that I read him saying this, but maybe I'm hallucinating like a chatbot.)

But having Wikipedia in your brain doesn't entail that you understand or have learnt any of it. Surely you'd still have to read the Wikipedia-chip-in-your-brain to know anything in it. And if you still have to read it, then having it in your brain is just a minor convenience.

If, on the other hand, having it in your brain equals knowing the Wikipedia content, what then is learning and knowing? Is it just knowing when to regurgitate factoids, or in this case Wikipedia sentences? Is understanding just the fantasy belief or remembrance that you understand? Is this mere act of retrieval how we learn and know? Sounds like Searle's Chinese room, which was intended to show that the room did not understand Chinese any more than the person inside it did. So what is it to know or understand? 

Back in 1980 John Searle came up with a thought experiment designed to show that functionalism or behaviorism couldn't account for understanding or consciousness. A monolingual English speaker with a Chinese-English dictionary might be able to translate sentences from Mandarin to English, but no one would conclude that he understands the Mandarin sentences. Even memorizing the translations wouldn't suffice; that's just pushing the dictionary into the brain. It's just more know-what, not know-how.

Problem is, it seems to me that swapping Chinese words for English words is not too far from knowing Chinese. It's missing the grammar, of course, but using a grammar handbook deftly might be enough for anyone to say: you speak Chinese...a bit slowly; you speak slow Chinese. If I memorized the lexicon and those rules, isn't that what is meant by knowing a second language?
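
To make the room's mechanics concrete, here is a toy sketch in Python. The three-entry lexicon and the trivial "grammar rule" are invented for illustration (real Mandarin is vastly more complex); the point is that nothing in it represents meaning, only symbol swapping:

```python
# Toy "Chinese Room": word-for-word dictionary substitution plus one
# crude grammar rule. The lexicon and the rule are invented stand-ins
# for the dictionary and grammar handbook in the thought experiment.

LEXICON = {
    "我": "I",
    "爱": "love",
    "你": "you",
}

def translate(sentence: str) -> str:
    # Look up each character in the dictionary; no meaning involved,
    # just symbol swapping. Unknown symbols are flagged, not understood.
    words = [LEXICON.get(ch, f"<{ch}?>") for ch in sentence]
    # The "grammar handbook" rule here is trivial: this sentence's
    # subject-verb-object order happens to match English, so keep it.
    return " ".join(words)

print(translate("我爱你"))  # -> "I love you"
```

Whether running this procedure in your head, deftly but slowly, counts as "knowing Chinese" is exactly the question.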

The wikichip is a much more interesting thought experiment. Imagine being able to recite, at an appropriate prompt, "In fourteen hundred ninety-two, Columbus sailed the ocean blue" without knowing that "Columbus" is a name, and names designate persons, and persons are human, and humans are animals who speak, and this designatum was Italian, and Italy is a place where Italians speak Italian, a language, a lot, and, btw, the ocean is water and a lot of it, a really, really lot of it, and you can sail on it, and sailing is getting on a boat.... You need to know a lot for just a single sentence.
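
Reduced to its mechanics, the wikichip is just a lookup table from prompts to verbatim strings. A hypothetical sketch (the prompt key and the entry are invented): the table recites the rhyme flawlessly while containing no representation of names, persons, Italy, or oceans.

```python
# Toy "wikichip": retrieval with zero understanding. The chip maps a
# prompt to a stored sentence; nowhere does it represent that
# "Columbus" names a person, or that an ocean is water.

WIKICHIP = {
    "when did columbus sail": (
        "In fourteen hundred ninety-two, Columbus sailed the ocean blue."
    ),
}

def recite(prompt: str) -> str:
    # Pure retrieval: normalize the prompt and return the stored string,
    # or admit the chip has nothing filed under that key.
    return WIKICHIP.get(prompt.strip().lower(), "[no entry]")

print(recite("When did Columbus sail"))  # verbatim recitation
print(recite("Is Columbus a person?"))   # -> "[no entry]": no concepts, just keys
```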

The wikichip thought experiment demands an explanation of what learning is, and that's a lot more than just knowing a second language. The Chinese Room relies on translation, but what plays the role of translation in the wikichip? It's knowledge or understanding or some aggregate of learning, or many aggregates of learnings across manifold areas of experience. In other words, what is learning?

For starters, the wikichip intuition tells us right away that learning is not atomistic. Learning an idea, like the meaning of "ocean", involves not just knowing its material, water, and its quantity, too much to swim across, so much that whales and huge populations of fish live in it, but also understanding a bunch of symbolic categories -- oceans belong to nature, not to society, not to the manufactured world; they don't belong to the world of politics and economics; they are studied by natural scientists; they are used metaphorically to signify vastness....
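
Contrast the flat lookup above with even a crude sketch of what knowing "ocean" involves: a node embedded in a web of other categories. The links below are invented and laughably incomplete, which is rather the point.

```python
# A crude concept web for "ocean": knowing the word means being embedded
# in links like these, not holding an isolated string. The links are
# illustrative inventions and radically incomplete.

CONCEPTS = {
    "ocean": {
        "made_of": "water",
        "quantity": "too much to swim across",
        "contains": ["whales", "huge populations of fish"],
        "belongs_to": "nature",          # not society, not the manufactured world
        "studied_by": "natural scientists",
        "metaphor_for": "vastness",
    },
}

# Even this toy web supports inferences a flat prompt-to-string table cannot:
ocean = CONCEPTS["ocean"]
print(f"An ocean is made of {ocean['made_of']} and signifies {ocean['metaphor_for']}.")
```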

Learning is contextualizing, and context can include cultural categories, theories of the natural world, of geography, of politics, theories of all sorts of systems, some of which may contradict others. They aren't all consistent, and some are even incoherent. Someone might be a monotheist and still believe in angels or a trinity of deities. To know such a religion, you'd have to know about its incoherence.

Learning involves knowing a lot of stuff, including materials and causes and purposes and categories and symbolic relations. But, paradoxically, not necessarily truths about reality. If I think Italy borders Spain, and conclude that Columbus walked over the border to get to Spain, where he embarked on his journey over the ocean, I've learnt something meaningful. Wrong, but meaningful: I learnt it and understand it, understand it so well that I could explain it to you! One can learn falsehoods. Considering how easily we apply barely coherent intuitions to experiences where those intuitions don't apply, I wouldn't be surprised if most of what we learn is false, or at least incoherent or inconsistent with the few things we reliably know.

So learning is more than knowing. It's contextualizing information, usually for some purpose. It's having theories. Often, having wrong theories. But even wrong theories are a step beyond knowledge. They arrange divergent bits of knowledge on a Procrustean bed, manipulating the bits into a crock of bullshit that you think just might work. That's human thinking and intelligence: understanding and explanation. It's predictive and risky: predictive to secure stability, but risky because it's flawed. It's a recipe that often works well enough until you have to eat it yourself. That's when you think: maybe I need to change up the recipe to get something I'm willing to swallow.

The sociology of false beliefs is all about how and why people learn and hold falsehoods. It may be limited thinking -- uncritical thinking -- but it's still learning. So it does tell us something about intelligence, because learning requires a bit of intelligence. What's troubling about AI is that it's fully capable of behaving as if it is intelligent, and it appears to use that intelligence to learn. But is behaving equivalent to being? Again, that's the Turing test, but that test doesn't answer the question; it just skirts it in a convenient circle.

For some engineers, that will be enough to say that LLMs learn and understand. More modest engineers will say, "No, it might be that it doesn't understand, but it behaves exactly like a human learner, and since we don't know what understanding is beyond this behavior (Turing test again), asking for more is irrelevant, and we should give it the same rating as the human thinker." The most modest might say, "No, it doesn't understand, but it behaves as if it does, and that's what you were asking of us engineers. You didn't ask us to create synthetic humans. You asked us to give you synthetic intelligence. And this is what synthetic intelligence looks and acts like. We're not here to solve your natural science problems about the brain or consciousness. We're here to serve your needs and provide conveniences to all humankind. You're welcome."

That's where the wikichip intuition leads: theorizing. Intelligence is not having a true theory or accurate model; it's having theories and models cobbled together with all these complexities, true, false, accurate, or inaccurate. Being intelligent is having done scientific investigation since infancy, trying out what works in experience and coming up with theories that seemed to serve you. Do they in fact serve you? Not always: cognitive bias, stubbornness, transferring a theory to situations or objects where it doesn't apply -- there's so much wrong in our intelligence, but that's what it is to be "intelligent": in other words, mostly a big mess of stupid. At least for humans.

