Friday, November 21, 2025

best posts

Click the links below 

>information faster than light: emergence, symbol and the representation of nothing

>entropy and truth

>the duality of truth: functional process vs probabilistic uses of "true" (pace Pinker)

>art, craft, game-theoretic cognition and machine learning

>the necessary mediocrity of art, UFOs and religions, and the boundless imagination of science

>where is the mind and what's a thought?

>complexity and AI: why LLMs succeeded where generative linguistics failed

>the sociology of false beliefs

>how we know what dogs are not saying and that the universe is not thinking

>jones' 4 corollaries to Brandolini's Law

>the Gates-Musk paradox: conspiracy theories are not about what the theorists think they're about

>the gnomiad: the science and sociology of life advice, and their paradoxical puzzles 

>of mathematical beauty: crystals and potatoes, a Darwinian explanation

>self-selection & the unchosen: identity, the spark bird and the Freudian trap

>de-mystifying Wittgenstein (and a tribute to Chomsky)

>an addendum on Robin Hanson's grabby aliens

>where the future is behind you

>the fool's errand attachment: a cognitive bias

>the new libertarian, overcome by bias

>fast track to enlightenment and nirvana

wikichip

"Chatbot, are recursive reflexive center-embeddings like fractals?" 

"No, fractals are used in mathematics; recursive reflexive center-embeddings are used in linguistics." 

"Chatbot, are center embeddings and fractals both recursive reflexive functions?" 

"Yes." 

"Chatbot, doesn't your first answer imply that you know only the contexts in which these words occur, but those contexts do not suffice to understand the ideas they represent?" 

"I'm a chatbot. I don't have understandings."

 "Wouldn't it be great if you had all of Wikipedia in a chip in your head?"  

So here's a conundrum: suppose Elon inserts a chip in your brain with, as he said somewhere, "all of Wikipedia on it!" (I'm convinced that I read him saying this, but maybe I'm hallucinating like a chatbot.) 

But having Wikipedia in your brain doesn't entail that you've understood or learnt any of it. Surely you'd still have to read the Wikipedia-chip-in-your-brain to know anything in it. And if you still have to read it, then having it in your brain is just a minor convenience. 

If, on the other hand, having it in your brain equals knowing the Wikipedia content, what then is learning and knowing? Is it just knowing when to regurgitate factoids, or in this case Wikipedia sentences? Is understanding just the fantasy belief or remembrance that you understand? Is this mere act of retrieval how we learn and know? Sounds like Searle's Chinese room, which was intended to show that the room did not understand Chinese any more than the person inside it did. So what is it to know or understand? 

Back in 1980 John Searle came up with a thought experiment designed to show that functionalism or behaviorism couldn't account for consciousness. A monolingual English speaker with a Chinese-English dictionary might be able to translate sentences from Mandarin to English, but no one would conclude that he understands the Mandarin sentences. Even memorizing the translations wouldn't suffice. That's just pushing the dictionary into the brain. It's just more know-what, not understanding how. 

Problem is, it seems to me that swapping Chinese words for English words is not too far from knowing Chinese. It's missing the grammar, of course, but using a grammar handbook deftly might be enough for anyone to say, you speak Chinese...a bit slowly; you speak slow Chinese. If I memorized the lexicon and those rules, isn't that what is meant by knowing a second language? 

The wikichip is a much more interesting thought experiment. Being able to recite at an appropriate prompt "In fourteen hundred ninety two, Columbus sailed the ocean blue" without knowing that "Columbus" is a name, and names designate persons, and persons are human, and humans are animals who speak, and this designatum was Italian and Italy is a place where Italians speak Italian, a language, a lot, and, btw, the ocean is water and a lot of it, a really really lot of it and you can sail on it and sailing is getting on a boat.... You need to know a lot for just a single sentence. 

The wikichip thought experiment demands an explanation of what learning is, and that's a lot more than just knowing a second language. The Chinese Room relies on translation, but what plays the role of translation in the wikichip? It's knowledge or understanding or some aggregate of learning, or many aggregates of learnings in manifolds of areas of experience. In other words, what exactly is learning?

For starters, the wikichip intuition tells us right away that learning is not atomistic. Learning an idea, like the meaning of "ocean", involves not just knowing its material, water, and its quantity, too much to swim over, so much that whales and huge populations of fish live in it, but also understanding a bunch of symbolic categories -- oceans belong to nature, not to society, not to the manufactured world, they don't belong to the world of politics and economics, they are studied by natural scientists, are used metaphorically to signify vastness....

Learning is contextualizing, and context can include cultural categories, theories of the natural world, of the map, of politics, theories of all sorts of systems, some of which may contradict others. They aren't all consistent and some are even incoherent. Someone might be a monotheist and still believe in angels or a trinity of deities. To know such a religion, you'd have to know about its incoherence. 

Learning involves knowing a lot of stuff including materials and causes and purposes and categories and symbolic relations. But paradoxically, not necessarily truths of reality. If I think Italy borders Spain, and conclude that Columbus walked over the border to get to Spain where he embarked on his journey over the ocean, I've learnt something meaningful. Wrong, but meaningful, and I learnt it and understand it, understanding it so well that I could explain it to you! One can learn falsehoods. Considering how easily we apply barely coherent intuitions to experiences where those intuitions don't apply, I wouldn't be surprised if most of what we learn is false or at least incoherent or inconsistent with the few things we reliably know. Once upon a time, folks thought the earth was flat and unmoving because the earth they knew and stood on was flat, if bumpy, and stable, though quaky on abnormal occasions, but never spinning. That was their learning. Learning is not trueing. 

So learning is more than knowing. It's contextualizing information, usually for some purpose. It's having theories. Often, having wrong theories. But even wrong theories are a step beyond knowledge. They arrange divergent bits of knowledge on a Procrustean bed and manipulate them into a crock of bullshit that you think just might work. That's human thinking and intelligence: understanding and explanation. It's predictive and risky: predictive in order to secure stability but risky because it's flawed. It's a recipe that often does work well enough until you have to eat it yourself. That's when you think, maybe I need to change up the recipe to get something I'm willing to swallow. 

The sociology of false beliefs is all about how and why people learn and hold falsehoods. It may be limited thinking -- uncritical thinking -- but it's still learning. So it does tell us something about intelligence, because learning requires a bit of intelligence. What's troubling about AI is that it's fully capable of behaving as if it is intelligent and appears to use intelligence to learn, but is behaving equivalent to being? The Turing test says yes, but that test doesn't answer the question, it just skirts it in a convenient circle. 

For some engineers that will be enough to say that LLMs learn and understand. More modest engineers will say, "no, it might be that it doesn't understand, but it behaves exactly like a human learner, and since we don't know what understanding is beyond this behavior (Turing test again), asking for more is irrelevant, and we should give it the same rating as the human thinker." The most modest might say, "No, it doesn't understand, but it behaves as if it does, and that's what you were asking of us engineers. You didn't ask us to create synthetic humans. You asked us to give you synthetic intelligence. And this is what synthetic intelligence looks and acts like. We're not here to solve your natural science problems about the brain or consciousness. We're here to serve your needs and provide conveniences to all humankind. You're welcome." 

So here's where the wikichip intuition leads beyond the Truing test: theorizing. Intelligence is not having a true theory or accurate model; it's having theories and models cobbled together with all these complexities, true, false, accurate or inaccurate. Being intelligent is having done a kind of personal scientific investigation since infancy, trying out what works in experience and coming up with theories that seemed to you to serve you. Do they in fact serve you? Not always: cognitive bias, stubbornness, transferring a theory to situations or objects where it doesn't apply -- there's so much wrong in our intelligence! But that's what it is to be "intelligent": iow mostly a big mess of stupid. 

Stupid ideas and stupid theories are ideas and theories nonetheless. Theories are contextualized understandings of ideas. So what's an idea? 

PS. Readers will have noticed that the word "intelligent" is ambiguous. It can mean "able to think", as in "chimps are intelligent" or "chatbots are intelligent", and it can also mean "thinks well", as in clever, smart or insightful. The two do not entail each other: one can be capable of thinking and still be stupid, like humans (intelligent in the first sense and not in the second); and, conversely, Deep Blue the chess bot, your YouTube algorithm, or, stretching a bit, a clever plan or arrangement can be unthinking but smart (intelligent in the second sense and not in the first). 

the lesson of AI

If there's a lesson to learn from LLMs, it's that humans don't think and aren't intelligent. 

Every time I hear shouts of enthusiasm over "AI is intelligent!" I feel like Andersen's little boy in the Emperor's New Clothes. "Don't you see: just because AIs can produce our thinking behavior doesn't mean AI can think. Folks! It exposes the naked truth that we don't think!" 

It's not that AI means we are not as unique as we might have wanted to believe or that we're merely thinking meat machines. We already knew that. It's that what amounts to thinking in us is unintelligent, lame mimicry, just like AI.

I've written a lot on this blog about the computational character of language (respect to Chomsky) and the algorithmic nature of words and ideas (respect to Plato and Kant). But I don't really believe it. I write it because I wish to believe it. Because I wish to believe that we humans think intelligently, and because I want to believe that Wittgensteinian behaviorism just can't be so. Because it's just too vacuous. But the sad truth seems to be that we don't think intelligently. We "think" by picking up habits of thought and expectation, often causal stories that we don't bother to question. AI's method of reinforce, reinforce, reinforce, translated into behavioral psych, is confirm, confirm, confirm, and never think. 

Here are a few examples: 

When asked which is more likely, that San Francisco will be under water in 2035, or that in 2035 a great earthquake will sift and shuffle the soil beneath San Francisco and send the mother of all tsunamis over the city and it will slip under water, people choose the latter scenario as more likely than the former. They don't think to themselves that the first scenario could have been the result of an earthquake and tsunami, but also of global warming's sea rise or North Korea lobbing a nuke nearby or a meteor -- you get the picture. Logically the more general story is less particular, so it's more likely. But people look for a familiar story with a familiar explanation. That's not thinking. That's mere mimicry, just like a neural network! (Now, the presentations of the two scenarios are loaded -- "It will be under water" implicates that it's under water for no stated reason, and that we trust that what we're being told is the whole truth and nothing but. That just shows we prioritize trust over our own thinking.)
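
For the record, the arithmetic here is just the conjunction rule: the bare claim "San Francisco is under water in 2035" covers every way it could happen, so it can never be less probable than any one specific story. A toy sketch, with every number invented purely for illustration:

```python
# Toy illustration only: invented probabilities for the ways SF could end up under water.
p_quake_and_tsunami = 0.001   # the vivid, specific story
p_sea_level_rise    = 0.002   # other routes to the same outcome
p_other_causes      = 0.0005

# "SF under water in 2035" is the union of all routes (treated as disjoint here
# for simplicity), so it can't be less probable than the earthquake story alone.
p_under_water = p_quake_and_tsunami + p_sea_level_rise + p_other_causes

assert p_under_water >= p_quake_and_tsunami   # P(A) >= P(A and B), always
print(p_under_water, p_quake_and_tsunami)     # 0.0035 0.001
```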

I ask why we hold a door open for others behind us. Everyone I've ever asked this responds with a positive cause: it's polite, it's helpful (even though people open doors by themselves regularly, and often the people behind feel compelled to hurry a bit so as not to inconvenience the door-holder -- that is to say, by holding the door you're actually inconveniencing them by compelling them to hurry). Never has anyone thought, 'what if I didn't hold the door open? Then I'd be slamming the door in front of someone right behind me. They'd think I'm a dick. Ah, that must be why I hold the door open. It's to avoid being judged a dick. It's all virtue signaling.' And isn't that the essence of morality? In a social species, approval-seeking is the glue that keeps us all together. That's an important bit of self-understanding and species understanding. But no one thinks that one through. Because we don't think. We just mimic. Hold the door because ... polite? helpful? nice? We miss it all for the lack of a little thinking. 

Same with free speech. I ask what's the benefit of having freedom of speech if the market of ideas never persuades anyone? And everyone knows that the NYTimes reader is uninfluenced by Fox News, and vice versa, so what's the point? The response is always something like "it's good to be able to express yourself" (with misinformation or disinformation or just stupidity?), some attempt at finding a positive good in free expression. The response I never get is speculating on the negative: "What if we had no such freedom? How would it be enforced? Would we have to lie to each other on pain of punishment? Wouldn't we all know that we're all lying to each other? How could we ever trust anyone? The point of language would be utterly defeated. Why converse at all? How could a social species even survive without trust in shared information?" No one asks this. 

I see a long literature advising on how to overcome bias, especially confirmation bias (grasping for evidence in favor of one's beliefs or what one wants to believe) or myside bias (attacking evidence against what one already believes or wants to believe). Always top of the list is "be humble". But what does this mean more than "don't be attached to your beliefs"? It's question-begging. "Humble" is just another word for "don't be so biased in favor of your beliefs." If you're looking for a way to free yourself of your biases, how is "so free yourself of your biases, bro" thoughtful advice? It's nothing but a virtus dormitiva. That's monkeying, not thinking. And yet we set great value on such advice. The way to cure arrogance: "Be humble" -- what a useless, stupid piece of advice. It's just words, empty words. 

We don't think. We mimic with familiar narratives and endorse habits, and monkey with metaphors and analogies. That's not thinking. That's repeating. 

In Pinker's Stuff of Thought (his best, imho) he points out that for a metaphor to work, the user has to know already which characteristic of the source is being applied to the target. "Metaphor is the key to understanding." Does the shape of the key matter? Skeleton key or digital key? No. So you already know the idea relation: "key", meaning "solution to a quandary". What thought work is the metaphor doing besides being a concrete (you already know what I mean by "concrete") example of something you already know or a shorthand signal that saves a couple of nondescript, direct, literal words?

Psychological anthropology got this right with schema-theory in the 1990's. The idea is that our interaction with the world -- experience -- provides each of us with little schemas (schemata?) of how things work. That's how and what we learn. One consequence of this schema theory is that we're learning causal stories, not logical entailments. The theory explains why modus tollens is really hard for us if there's no causal story between antecedent and consequent: "If California's population is increasing, Ukraine is defeating Russia. Ukraine is not defeating Russia, therefore California's population is not increasing." That deduction is hard for us. But "If California's population is increasing, rents there will rise. Rents are not rising, therefore the population must not be increasing" is an easy deduction for us to grasp, even if we think the argument is wrong. Logic takes work. Schemas, like habits, don't.
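
To see that the hard part is us and not the logic, here's a toy check of my own (not from the original post): brute-force the truth table and modus tollens comes out valid no matter what P and Q stand for; the causal story only changes how easy the inference feels.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Check every assignment of P and Q: wherever the premises "P implies Q" and
# "not Q" both hold, the conclusion "not P" holds too, whether P and Q are
# about California and rents or about California and Ukraine.
valid = all(
    not p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q
)
print(valid)  # True
```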

Similarly with language. In the post on why AI succeeded where generative linguistics failed, there's a demonstration of how generative syntax allows us to easily parse extremely complex relations with simple recursive functions of machine syntax, the machine in Broca's area of your brain. It mechanically churns out the complex relations quickly and easily. What I didn't mention there is that semantic or logical functions are not syntactic, are not mechanical and take a lot of effort. It's not the case that it's not true that it's not the flat earthers who don't believe the earth is not round because they don't understand the science, it's the round earthers. Honestly I don't know what I just wrote there. To parse it I'd have to count the negatives and apply the simple logical arithmetic: even negatives = positive, else negative. Counting arithmetically is not mechanical for humans. We have to learn it, and it takes a bit of work. Same with logic. Yet any string of recursive prepositional phrases, even embedded ones, is easily and quickly parsed because prepositional phrases are mechanical syntactic functions. So just because we have generative capacity for language doesn't mean we think intelligently. We don't. 

(You can look at the mechanical syntactic structures here if you scroll to the bottom of that post.)
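
And the negative-counting arithmetic itself is trivial to mechanize, even though it isn't mechanical for us. A rough sketch of my own, ignoring scope entirely, just to illustrate the parity rule (even negatives = positive, odd = negative):

```python
def negation_parity(sentence: str) -> str:
    # Crude count: bare negators plus n't contractions; no treatment of scope.
    words = sentence.lower().split()
    count = sum(1 for w in words if w in {"not", "no", "never"} or w.endswith("n't"))
    return f"{count} negatives -> {'positive' if count % 2 == 0 else 'negative'} overall"

print(negation_parity(
    "It's not the case that it's not true that it's not the flat earthers "
    "who don't believe the earth is not round"))
# 5 negatives -> negative overall
```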


the obvious evidence that propaganda persuades no one

Is the Fox News watcher persuaded by what the NYTimes says? Not at all. Whatever the NYT presents only confirms the Fox watcher's beliefs about the NYT's bias. Is the NYT's reader persuaded by Fox News? Nope. Propaganda serves to confirm what its audience already believes or wants to believe. Propaganda does not persuade. Far from it: it confirms the negative judgments about the out-group, the conviction that the out-group is morally and informationally defective and dangerous. 

Two facts emerge: propaganda doesn't persuade, and political polarization belies the postmodernist and Marxist assumption that the culture has a single discourse of power. There's no systemic belief structure directing all minds. Liberal democracy has a structure of mutual out-grouping, with the in-group significantly determined by accepting anything the out-group rejects, all the while developing and innovating means of rejecting the Other. 

In this innovation the Right has been particularly fecund. It used to be the conservatives who were stuck in the mud of the past. Now the Right is full of reinvented nationalisms and conspiracy theories, while the Left is stuck with its Enlightenment principles and its self-righteous moral superiority and censoriousness. 


sunk cost: loss aversion or Bayesian failure?

Loss aversion is a naturally selected emotion tied to survival. Loss has a finite bottom boundary -- no bananas means starvation and death -- whereas acquisition has an infinite or unbounded superfluity. No one needs forty billion bananas, and using them takes some effort and imagination, like maybe using them for bribes towards world domination. The normal person wants to assure herself first that there's something to eat tonight. World domination later, if we're still interested after dinner.

So the sunk cost fallacy is an emotional attachment to what's been spent. But it is also a failure of Bayesian analysis of time. You stay in the movie theater not only because you don't want to throw away the ticket you spent money on, but also because that emotional attachment -- loss aversion -- has blinded you to the time outside the theater. The ticket has focused you on the loss rate of leaving: 100% of the next hour will be lost. But that's forgetting all the value outside the theater. 
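
A minimal worked example of that framing failure, with every number invented for illustration: the ticket is spent in both branches, so only the two futures should be compared.

```python
# All values are invented for illustration.
ticket_price        = 15.0   # already spent: identical whether you stay or leave
value_of_movie_hour = 2.0    # what the rest of the bad movie is worth to you
value_outside_hour  = 20.0   # what the same hour is worth outside the theater

# The sunk-cost framing stares at the ticket and the "lost" movie hour.
# The wider framing compares only the futures; the ticket cancels out of both.
best_choice = "stay" if value_of_movie_hour >= value_outside_hour else "leave"
print(best_choice)   # prints "leave"; the answer does not depend on ticket_price
```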

This Bayesian interpretation predicts that people whose time is extremely valuable -- people with many jobs, or jobs with high returns whether in financial wealth or real wealth (personal rewards) -- are less likely to stay in the theater. Their focus will be trained on the time outside the theater. The losses will be adjusted for the broader context of the normal. We should expect that the very busy or very productive will be resistant to the fallacy. 

Of course, there are also the rich, who don't worry about throwing a ticket away because the marginal value of that money is low or nil. But overall, the sunk cost fallacy should occur only among people who have time to waste, whose time is not pressingly valuable. The sunk cost fallacy may be an arithmetic fallacy of focus, not just an evolutionary psychology of loss aversion. 

Freud and Haidt got it backwards: the unconscious is rational; the conscious mind is not

A friend insists that I'm disciplined, since he sees that I take time every day to work out on the gymnastics bars in the local park. I object: I work out because I enjoy getting out of the house. He concedes that we do what we enjoy without discipline. 

We both have it all wrong. I'm well aware that the opportunity cost of staying at home is, on most days, far greater than the opportunity cost of going to the park to socialize while practicing acrobatics. I know that I need to socialize every day and maintain my strength and agility. But that rational cost-benefit equilibrium never motivates me. I'm comfortable at home, I don't feel energetic enough to brave the cold -- there's any number of reasons to stay home. If I debate with myself over whether to go out, I will stay. The immediacy of laziness -- the comfort of now -- overcomes any rational equilibrium. So how do I get to the park? It's not discipline. I don't even understand what "discipline" means. Is there an emotion of discipline? Is it suppressing one's thinking -- including one's deliberating and second-guessing and procrastinating and distracting oneself -- and just doing it? 

As mentioned in this post, the unconscious mind's rational intentions will make decisions without consulting the conscious mind, as long as the conscious mind is distracted. Focus my conscious attention on going out and immediately I feel comfortable and lazy and call up all the reasons to stay home. Think about anything else, unrelated, and soon enough it seems time to grab the wool sweater and go. Sometimes I'll watch myself grab the sweater without knowing when I made the decision to grab it. It just happens.

It's the unconscious mind that knows what's best for my long-term goals. It's my conscious mind that's swayed by the emotions of now. Haidt treats emotions as the unconscious mind, sort of following Kahneman. This is a mistake inherited from Freud, in turn inherited from Plato's Phaedrus and popularized by the 19th-century romantics, Schopenhauer, Wagner and Nietzsche, that whole crowd convinced that the uncontrolled emotions, dark, mysterious and dangerous, disturb and cloud the serenity and clarity of the reasoning awareness. 

This is all mythology, and religious mythology, self-punishing and confused. 

Awareness and emotions cohabit the now. This is obvious, a truism. "I feel, therefore I am" is equally definitive and necessary. There's little to distinguish between think, perceive, and feel. The awareness of the momentary environment feeds the emotions. The comfort of my chair is at once an awareness and an emotion. Any reasoned attempt to dissuade me from the comfort of my chair in the now, for the sake of a merely imagined future, will set off a struggle, and the strength of perception will likely win. Every failed dieter, every procrastinator, every substance abuser, every phone zombie, every wanker knows this all too well. 

I once got off all sugar, not by struggling against my desire, but by distracting myself with the one thought that I knew would always distract me from that very desire: the desire itself. I spent my idle time thinking about all the sweets I most like, listing them, ordering them and categorizing them -- cookies, cakes, chocolates, ice creams -- and thought hard about which in each category I most wanted (childhood comfort favorites mostly beat fancy treats). You know, the opposite of "Don't think about a zebra!" That's a recipe for sure failure. But, "Okay, there's no way out of thinking about the zebra. Let us now then examine this zebra that is inhabiting our mental space," and pretty soon, sliding down with no struggle at all, you're too deep in...and you're enjoying it. 

It's the unconscious drives that are independent of the emotions and awarenesses of now. It's the unconscious mind that is the real decision maker. This detached, rational, disciplined, far-sighted unconscious mind is free from the emotions. It's the rational nagging mind of what I know I should do, and that I would do but for the interventions of my conscious, biased, instant-gratification emotions of the aware-now. 

The emotions are always immediate -- they are feelings and have to be felt in the now. The unconscious mind isn't in the now at all. It's a hidden subterfugeal world of long-term rational sabotages against my conscious will. Freud misplaced the conscience. It's not the superego, it's the subterego, the intuitive fast system that's thinking far ahead, working to keep me well against my will and motivated reasoning.

the spirituality paradox

Spirituality often cloaks itself in moral guise as shedding selfishness in favor of embracing the Other, whether it be other sentient beings or the world of inanimate phenomena: amor vincit omnia.

The goal of such spirituality is to transcend the self, but the purpose is to improve the self. So, for example, a spiritual cult or movement targets the individual member for the spiritual elevation of that individual. It's not a movement to save the cows and chickens, or preserve pristine nature. It's a movement to bring the individual's self to a higher spiritual state. Saving the chickens is a by-product. In other words, it's a selfish purpose with a selfless goal. 

From my biased perspective it's not merely contradictory and self-defeating (I mean the doctrine is defeating the doctrine -- a doctrine at cross-purposes with itself), but also self-serving, decadent and essentially degenerate. Yes, you have only one life to live, so there's plenty of incentive to perfect that life for itself -- I'm down with that, for sure -- but there are billions of others and possibly an infinity of other interests to pursue beyond this one self. Arithmetically, the others should win, were it not for the infinite value of one's own life. 

But here's the difference: attending to things beyond oneself also perfects or augments one's own meagre life. One path to transcendent enlightenment is studying the Other, instead of limiting oneself to navel-gazing. That's a path towards two infinities added together: the broad study of, say, the psychology of the Other will shed equal light on one's own psychology, while the study of, say, thermodynamics or information theory will take you far beyond yourself. 

Arithmetically, two infinite series are no greater than one infinite series, but you can still see the advantage of an infinite series within yourself plus an infinite series of yourself and all the others: the infinitesimal within plus the infinite outside.