Friday, November 21, 2025

wikichip

 "Wouldn't it be great if you had all of Wikipedia in a chip in your head?"

So here's a conundrum: suppose Elon inserts a chip in your brain with, as he said somewhere, "all of Wikipedia on it!" (I'm convinced that I read him saying this, but maybe I'm hallucinating like a chatbot.)

But having Wikipedia in your brain doesn't entail that you understood or learnt any of it. Surely you'd still have to read the Wikipedia-chip-in-your-brain to know anything in it. And if you still have to read it, then having it in your brain is just a minor convenience. 

If, on the other hand, having it in your brain equals knowing the Wikipedia content, what then is learning and knowing? Is it just knowing when to regurgitate factoids, or in this case Wikipedia sentences? Is understanding just the fantasy belief or remembrance that you understand? Is this mere act of retrieval how we learn and know? Sounds like Searle's Chinese room, which was intended to show that the room did not understand Chinese any more than the person inside it did. So what is it to know or understand? 

Back in 1980 John Searle came up with a thought experiment designed to show that functionalism or behaviorism couldn't account for consciousness. A monolingual English speaker with a Chinese-English dictionary might be able to translate sentences from Mandarin to English, but no one would conclude that he understands the Mandarin sentences. Even memorizing the translations wouldn't suffice. That's just pushing the dictionary into the brain. It's just more know-what, not understanding how. 

Problem is, it seems to me that swapping Chinese words for English words is not too far from knowing Chinese. It's missing the grammar, of course, but using a grammar handbook deftly might be enough for anyone to say, you speak Chinese...a bit slowly; you speak slow Chinese. If I memorized the lexicon and those rules, isn't that what is meant by knowing a second language? 

The wikichip is a much more interesting thought experiment. Being able to recite at an appropriate prompt "In fourteen hundred ninety two, Columbus sailed the ocean blue" without knowing that "Columbus" is a name, and names designate persons, and persons are human, and humans are animals who speak, and this designatum was Italian and Italy is a place where Italians speak Italian, a language, a lot, and, btw, the ocean is water and a lot of it, a really really lot of it and you can sail on it and sailing is getting on a boat.... You need to know a lot for just a single sentence. 

The wikichip thought experiment demands an explanation of what learning is, and that's a lot more than just knowing a second language. The Chinese Room relies on translation, but what plays the role of translation in the wikichip? It's knowledge or understanding or some aggregate of learning, or many aggregates of learnings in manifolds of areas of experience. In other words, what is learning?

For starters, the wikichip intuition tells us right away that learning is not atomistic. Learning an idea, like the meaning of "ocean", involves not just its material (water) and its quantity (too much to swim across, so much that whales and huge populations of fish live in it), but also a bunch of symbolic categories -- oceans belong to nature, not to society, not to the manufactured world, they don't belong to the world of politics and economics, they are studied by natural scientists, are used metaphorically to signify vastness....

Learning is contextualizing, and context can include cultural categories, theories of the natural world, of the map, of politics, theories of all sorts of systems, some of which may contradict others. They aren't all consistent and some are even incoherent. Someone might be a monotheist and still believe in angels or a trinity of deities. To know such a religion, you'd have to know about its incoherence. 

Learning involves knowing a lot of stuff including materials and causes and purposes and categories and symbolic relations. But paradoxically, not necessarily truths of reality. If I think Italy borders Spain, and conclude that Columbus walked over the border to get to Spain where he embarked on his journey over the ocean, I've learnt something meaningful. Wrong, but meaningful, and I learnt it and understand it, understanding it so well that I could explain it to you! One can learn falsehoods. Considering how easily we apply barely coherent intuitions to experiences where those intuitions don't apply, I wouldn't be surprised if most of what we learn is false or at least incoherent or inconsistent with the few things we reliably know. 

So learning is more than knowing. It's contextualizing information, usually for some purpose. It's having theories. Often, having wrong theories. But even wrong theories are a step beyond knowledge. They arrange divergent bits of knowledge on a Procrustean bed, manipulating the bits into a crock of bullshit that you think just might work. That's human thinking and intelligence: understanding and explanation. It's predictive and risky: predictive to secure stability but risky because it's flawed. It's a recipe that often works well enough until you have to eat it yourself. That's when you think, maybe I need to change up the recipe to get something I'm willing to swallow myself.

The sociology of false beliefs is all about how and why people learn and hold falsehoods. It may be limited thinking -- uncritical thinking -- but it's still learning. So it does tell us something about intelligence, because learning requires a bit of intelligence. What's troubling about AI is that it's fully capable of behaving as if it is intelligent, and appears to use that intelligence to learn. But is behaving intelligent equivalent to being intelligent? Again, that's the Turing test, but that test doesn't answer the question, it just skirts it in a convenient circle.

For some engineers that will be enough to say the LLMs learn and understand. More modest engineers will say, "No, it might be that it doesn't understand, but it behaves exactly like a human learner, and since we don't know what understanding is beyond this behavior (the Turing test again), asking for more is irrelevant, and we should give it the same rating as the human thinker." The most modest might say "No, it doesn't understand, but it behaves as if it does, and that's what you were asking of us engineers. You didn't ask us to create synthetic humans. You asked us to give you synthetic intelligence. And this is what synthetic intelligence looks and acts like. We're not here to solve your natural science problems about the brain or consciousness. We're here to serve your needs to provide conveniences to all humankind. You're welcome."

That's where the wikichip intuition leads: theorizing. Intelligence is not having a true theory or accurate model, it's having theories and models cobbled together with all these complexities, true, false, accurate or inaccurate. Being intelligent is having done scientific investigation since infancy, trying out what works in experience and coming up with theories that seemed to serve you. Do they in fact serve you? Not always: cognitive bias, stubbornness, transferring a theory to situations or objects where it doesn't apply -- there's so much wrong in our intelligence, but that's what it is to be "intelligent": in other words, mostly a big mess of stupid. At least for humans.


the lesson of AI

If there's a lesson to learn from LLMs, it's that humans don't think and aren't intelligent. 

Every time I hear "AI is intelligent!" I feel like Andersen's little boy in the Emperor's New Clothes. "Don't you see: just because AI can do what we do doesn't mean AI can think. It means we don't think, people!" 

It's not that AI means we're merely thinking meat machines. We already knew that. It's that what amounts to thinking in us is unintelligent, lame mimicry, just like AI.

I've written a lot on this blog about the computational character of language (respect to Chomsky) and the algorithmic nature of words and ideas (respect to Plato and Kant). But I don't really believe it. I write it because I wish to believe it. Because I wish to believe that we humans think intelligently, and because I want to believe that Wittgensteinian behaviorism just can't be so. Because it's just too vacuous. But the sad truth seems to be that we don't think intelligently. We "think" by picking up habits of thought and expectation, often causal stories that we don't bother to question. AI's reinforce, reinforce, reinforce is behavioral psych's confirm, confirm, confirm: never think.

Here are a few examples:

When asked which is more likely -- that San Francisco will be under water in 2035, or that in 2035 a great earthquake will sift and shuffle the soil beneath San Francisco and send the mother of all tsunamis over the city and it will slip under water -- people choose the latter as more likely than the former scenario. They don't think to themselves that the first scenario could have been the result of an earthquake and tsunami, but also of global warming's sea rise, or North Korea lobbing a nuke nearby, or a meteor -- you get the picture. Logically the more general story is less particular, so it's more likely. But people look for a familiar story with a familiar explanation. That's not thinking. That's mere mimicry, just like a neural network! (Now, the presentations of the two scenarios are loaded -- "It will be under water" implicates that it's under water for no reason, and that we trust that what we're being told is the whole truth and nothing but. That just shows we prioritize trust over our own thinking.)
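The arithmetic behind "the more general story is more likely" is just the conjunction rule: a specific scenario can never be more probable than the general event it is one instance of. A minimal sketch, with entirely hypothetical probabilities chosen only for illustration:

```python
# Conjunction rule sketch: "SF under water in 2035" is the union of every
# specific story that gets it under water, so it must be at least as likely
# as any single one of those stories. All numbers here are made up.

p_quake_tsunami = 0.02   # earthquake + tsunami floods the city
p_sea_rise      = 0.03   # warming-driven sea rise does it
p_other         = 0.01   # nuke, meteor, anything else

# Treating the causes as disjoint for simplicity:
p_under_water = p_quake_tsunami + p_sea_rise + p_other

# The general event dominates each particular story.
assert p_under_water >= p_quake_tsunami

print(round(p_under_water, 2))  # 0.06
```

The specific earthquake-and-tsunami story feels more plausible precisely because it is more particular, which is the opposite of what the arithmetic says.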

I ask why we hold a door open for others behind us. Everyone I've ever asked responds with a positive cause: it's polite, it's helpful (even though people open doors by themselves regularly, and often the people behind feel compelled to hurry a bit so as not to inconvenience the door-holder -- that is to say, by holding the door you're actually inconveniencing them by compelling them to hurry). Never has anyone thought, 'what if I didn't hold the door open? Then I'd be slamming the door in front of someone right behind me. They'd think I'm a dick. Ah, that must be why I hold the door open. It's to avoid being judged a dick. It's all virtue signaling.' And isn't that the essence of morality? In a social species, approval-seeking is the glue that keeps us all together. That's an important bit of self-understanding and species understanding. But no one thinks it through. Because we don't think. We just mimic. Hold the door because ... polite? helpful? nice? We miss it all for the lack of a little thinking.

Same with free speech. I ask what's the benefit of having freedom of speech if the market of ideas never persuades anyone? And everyone knows that the NYTimes reader is uninfluenced by Fox News, and vice versa, so what's the point? The response I get is never "What if we had no such freedom? How would it be enforced? Would we have to lie to each other on pain of punishment? Wouldn't we all know that we're all lying to each other? How could we ever trust anyone? The point of language would be utterly defeated. Why converse at all? How could a social species even survive without trust in shared information?" No one asks this. 

I see a long literature advising on how to overcome bias, especially confirmation bias (grasping for evidence in favor of one's beliefs or what one wants to believe) or myside bias (attacking evidence against what one already believes or wants to believe). Always at the top of the list is "be humble". But what does this mean beyond "don't be attached to your beliefs"? It's question-begging. "Humble" is just another word for "don't be so biased in favor of your beliefs." If you're looking for a way to free yourself of your biases, how is "so free yourself of your biases, bro" thoughtful advice? It's nothing but a virtus dormitiva. That's monkeying, not thinking. And yet we set great value on such advice. "Be humble" -- what a useless, stupid piece of advice. It's just words, empty words.

We don't think. We mimic with familiar narratives and habits, and monkey with metaphors and analogies. That's not thinking. That's repeating. 


the obvious evidence that propaganda persuades no one

Is the Fox News watcher persuaded by what the NYTimes says? Not at all. Whatever the NYT presents only confirms the Fox watcher's beliefs about the NYT's bias. Is the NYT's reader persuaded by Fox News? Nope. Propaganda serves to confirm the beliefs that its audience already holds or wants to hold. Propaganda does not persuade. Far from it: it confirms the audience's negative judgments about the out-group, the conviction that the out-group is morally and informationally defective and dangerous.

Two facts emerge: propaganda doesn't persuade, and political polarization belies the postmodernist and Marxist assumption that the culture has a single discourse of power. There's no systemic belief structure directing all minds. Liberal democracy has a structure of mutual out-grouping, with the in-group significantly determined by accepting anything the out-group rejects, all the while developing and innovating means of rejecting the Other.

In this innovation the Right has been particularly fecund. It used to be the conservatives who were stuck in the mud of the past. Now the Right is full of reinvented nationalisms and conspiracy theories, while the Left is stuck with its Enlightenment principles and its self-righteous moral superiority and censoriousness.


sunk cost: loss aversion or Bayesian failure?

Loss aversion is an emotion shaped by natural selection, tied to survival. Loss has a finite bottom boundary -- no bananas means starvation and death -- whereas acquisition has an infinite or unbounded superfluity. No one needs forty billion bananas, and using them takes some effort and imagination, like maybe using them for bribes towards world domination. The normal person wants to assure herself first that there's something to eat tonight. World domination later, if we're still interested after dinner.

So the sunk cost fallacy is an emotional attachment to what's been spent. But it is also a failure of Bayesian analysis of time. You stay in the movie theater not only because you don't want to throw away the ticket you spent money on, but also because that emotional attachment -- loss aversion -- has blinded you to the time outside the theater. The ticket has focused your attention on the loss rate of leaving: 100% of the next hour will be lost. But that forgets all the value outside the theater.

This Bayesian interpretation predicts that people whose time is extremely valuable -- people with many jobs, or jobs with high returns whether in financial wealth or real wealth (personal rewards) -- are less likely to stay in the theater. Their focus will be trained on the time outside the theater. The losses will be adjusted for the broader context of the normal. We should expect the very busy or very productive to be resistant to the fallacy.

Of course, there are also the rich, who don't worry about throwing a ticket away because the marginal value of the ticket money is low or nil to them. But overall, the sunk cost fallacy should occur only with people who have time to waste, whose time is not pressingly valuable. The sunk cost fallacy may be an arithmetic fallacy of focus, not just an evolutionary psychology of loss aversion.
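The arithmetic of the theater case can be sketched in a few lines. The point is that the ticket price is gone whichever way you decide, so the only live comparison is forward-looking: the value of the next hour inside versus outside. All the numbers below are hypothetical, chosen only to illustrate the framing.

```python
# Sunk cost as a failure of focus: compare only forward-looking value.
# The ticket price is sunk either way and should not enter the decision.
# All values are hypothetical utility units.

ticket_price = 15.0            # already spent; irrelevant to the choice

value_of_movie_hour = 5.0      # the movie turned out to be bad
value_of_outside_hour = 20.0   # what a busy person's next hour is worth elsewhere

def best_choice(movie_value, outside_value):
    """Pick the higher forward-looking value, ignoring the sunk ticket."""
    return "leave" if outside_value > movie_value else "stay"

print(best_choice(value_of_movie_hour, value_of_outside_hour))  # leave
```

On this sketch the prediction falls out directly: raise `value_of_outside_hour` (the busy, productive person) and "leave" wins; for someone with time to waste, the outside hour is worth little and "stay" can look rational even for a bad movie.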

Freud and Haidt got it backwards: the unconscious is rational; the conscious mind is not

A friend insists that I'm disciplined, since he sees that I take time every day to work out on the gymnastics bars in the local park. I object: I work out because I enjoy getting out of the house. He concedes that we do what we enjoy without discipline. 

We both have it all wrong. I'm well aware that the opportunity cost of staying at home is, on most days, far greater than the opportunity cost of going to the park to socialize while practicing acrobatics. I know that I need to socialize every day and maintain my strength and agility. But that rational cost-benefit equilibrium never motivates me. I'm comfortable at home, I don't feel energetic enough to brave the cold -- there's any number of reasons to stay home. If I debate with myself over whether to go out, I will stay. The immediacy of laziness -- the comfort of now -- overcomes any rational equilibrium. So how do I get to the park? It's not discipline. I don't even understand what "discipline" means. Is there an emotion of discipline? Is it suppressing one's thinking -- including one's deliberating and second-guessing and procrastinating and distracting oneself -- and just doing it?

As mentioned in this post, the unconscious mind's rational intentions will make decisions without consulting the conscious mind, as long as the conscious mind is distracted. Focus my conscious attention on going out and immediately I feel comfortable and lazy and call up all the reasons to stay home. Think about anything else unrelated, and soon enough it seems time to grab the wool sweater and go. Sometimes I'll watch myself grab the sweater without knowing when I made the decision to grab it. It just happens.

It's the unconscious mind that knows what's best for my long-term goals. It's my conscious mind that's swayed by the emotions of now. Haidt treats emotions as the unconscious mind, sort of following Kahneman. But this rational, disciplined, far-sighted unconscious mind is distinct from the emotions. It's the rational nagging mind of what I know I should do, and that I would do but for the interventions of my conscious, biased, instant-gratification emotions. The emotions are always immediate -- they are feelings and have to be felt in the now. The unconscious mind isn't in the now at all. It's a hidden subterfugeal world of long-term rational sabotages against my conscious will. Freud misplaced the conscience. It's not the superego, it's the subterego, the intuitive fast system that's thinking far ahead, working to keep me well against my will. 

the spirituality paradox

Spirituality often cloaks itself in moral guise as shedding selfishness in favor of embracing the Other, whether it be other sentient beings or the world of inanimate phenomena: amor vincit omnia (love conquers all).

The goal of such spirituality is to transcend the self, but the purpose is to improve the self. So, for example, a spiritual cult or movement targets the individual member's own self. It's not a movement to save the cows and chickens, or to preserve pristine nature. It's a movement to bring the individual's self to a higher spiritual state. In other words, it's a selfish purpose with a selfless goal.

From my biased perspective it's not merely contradictory and self-defeating (I mean the doctrine is defeating the doctrine -- a doctrine at cross-purposes to itself), but also self-serving, decadent and essentially degenerate. Yes, you have only one life to live, so there's plenty of incentive to perfect that life for itself -- I'm down with that, for sure -- but there are billions of others and possibly an infinity of other interests to pursue than this one self. Arithmetically, the others should win, were it not for the infinite value of one's own life. 

But here's the difference: attending to things beyond oneself also perfects or augments one's own meagre life. One path to transcendent enlightenment is studying the Other, instead of limiting oneself to navel-gazing. That's a path towards two infinities added together: the broad study of, say, the psychology of the Other will shed equal light on one's own psychology, while the study of, say, thermodynamics or information theory, will take you far beyond oneself. 

Arithmetically, two infinite series are no greater than one, but you can still see the advantage of an infinite series within yourself plus an infinite series of yourself and all the others: the infinitesimal within plus the infinite outside.

love-fragility inequality

better to have loved and lost than never to have loved at all??

Romance might be the most wonderful experience in life. Also the most precarious. Is the precariousness worse than the wonderfulness is good? Kahneman and Tversky and Thaler and Gilovich tell us that we're more loss-averse than gain-embracing. The epigraph above must be a fiction.

It's hard to measure such extreme emotions, but if it's true, as is widely reported, that losing a job is worse than losing a loved one, then maybe romance is an exception to behavioral psychology's "losing is twice as bad as gain is good". So "better to have loved and lost than never to have loved at all" is a good gamble, since there are worse things than losing in romance, but nothing better than loving.
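The "losing is twice as bad as gaining is good" claim can be made concrete with a toy loss-aversion model. The factor of 2 is the commonly cited rough estimate from the behavioral literature, not a universal constant, and the utility units below are entirely made up:

```python
# Toy loss-aversion model: a loss weighs roughly twice a same-sized gain.
# LOSS_AVERSION = 2.0 is the commonly cited rough estimate, not a law.

LOSS_AVERSION = 2.0

def felt_value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    return x if x >= 0 else LOSS_AVERSION * x

# A romance gained and then lost, in crude utility units:
gain, loss = 100, -100
print(felt_value(gain) + felt_value(loss))  # -100
```

On this model, loving and losing nets out worse than never loving at all, which is exactly why the epigraph only survives if romance is an exception to the rule: either the gain of loving is felt as more than the loss of losing, or there really are worse losses than a lost love.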