Monday, July 21, 2025

self selection & the unchosen: identity, the spark bird, and the Freudian trap

A friend says that all the geniuses seem to have had a seminal moment in childhood that turned them towards their calling. 

The allure of this explanation seems obvious. Except for the fans of Gladwell's 10,000 hours, we believe there's something special about geniuses; they're not like the rest of us. There must be an explanation for their magic specialness. There must have been a cause, an igniting start to this astonishing and extraordinary career. There must have been a special moment.  

I think this is all a fallacy, a failure to use Bayesian reasoning, compounded with a myth promoted by Freud that has somehow become an article of faith in folk psychology. 

Geniuses, or talented celebrities of any stripe, are the focus of great popular interest. They are often interviewed by journalists who are not geniuses, just ordinary blokes who want to please their audience. What does the audience want? To know how the genius became genius. It's a disguised version of "why are you a genius and I'm not?" or more personally, "how come I'm just your average slob, damnit!" 

And to answer this question, our ordinary-bloke journalist asks, "how did you become a genius?" Not in those words, of course, but more like "what got you interested in what you do?" or "how did it all begin?" In other words, our vicarious spokesperson here is specifically asking the genius to come up with a beginning experience. And how does the genius do this? Why, by scanning her past as far back as possible to find that start. Seek and you shall find. There's almost always some event that will fit the bill. End result: any genius will have a beginning story, and it will likely be nestled in childhood. It's a case of confirmation bias, or inductive fallacy -- they're basically the same -- and both mistake evidence for science. 

Why is this a failure to apply Bayesian reasoning? Consider all the events of childhood. Millions of them, aren't there? And of all those events, many of them could have been sparks for interest. Now, how many of these produced no enduring interest? How many produced no impression at all? How many sparked a brief but not lasting interest? Well, all of them except the one that genius held onto. In other words, it's not the event at all, it's the potential interest already within the genius. 
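The base-rate point can be put as a toy Bayes calculation. All the numbers below are invented for illustration: if nearly everyone, genius or not, can retrospectively find a "spark" story, then having such a story is almost no evidence of anything.

```python
# Toy Bayes calculation with invented numbers: how much does
# "remembers a childhood spark event" raise the odds of genius?

def posterior_genius(p_genius, p_spark_given_genius, p_spark_given_other):
    """P(genius | spark story), by Bayes' theorem."""
    p_other = 1 - p_genius
    joint_genius = p_spark_given_genius * p_genius
    joint_other = p_spark_given_other * p_other
    return joint_genius / (joint_genius + joint_other)

prior = 1e-5                    # assume 1 in 100,000 is a "genius"
post = posterior_genius(prior,  # assume almost every genius finds a story...
                        0.99,
                        0.95)   # ...but so does almost everyone else
print(post / prior)             # likelihood ratio barely above 1
```

With these made-up numbers the posterior is only about four percent above the prior: the story is sought and found either way, so it tells us almost nothing.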

The relationship between the choice of story and the development of enduring interest is no doubt idiosyncratic and complex, including all the chances and complexities of environment. Or it may have been a driving interest from within. Whatever the case, there's no reason to believe that the event was the cause. 

Among birders -- lovers of birds, bird-watchers, bird devotees, ornithologists -- there's a common experience they describe as the spark bird, the bird that ignited their interest in becoming a birder. The metaphor says a lot, I think. A spark without tinder is a flash in the pan. It's the tinder that makes the flame. No parade of birds, no matter how long, exotic, remarkable, big or tiny, could spark my interest in birds. Is it because we had a bird at home for a little while, so I got used to them? But why didn't that bird initially spark me? 

It's because of me. It's not because of the parade of flaunting birds. It seems ludicrous to think that the bird makes the birder. When looking at the events of childhood, it's just as important, maybe even more important, to look at all the events that made no impression. That's the base rate. That's the norm. Surely that's just as informative, if not more. What makes a genius is not the event, but the personality, the personality that ignored all the other big events, or dabbled in some of them and lost interest. 

The Freudian myth that childhood events are responsible for our development ignores the normal. I've posted on this oblivious obvious of the ordinary. It's the normal stuff, the everyday, the 99% of childhood, that makes up 99% of experience. And even that is only background. Oneself is 100% of one. Freud liked to have an explanation for everything, even if he had to fictionalize one. No doubt it made him seem more valuable and smart, and it's the character of the charlatan to pretend to know more than everyone else. Charlatanism is the pretense of having extra-normal knowledge. But it's just a pretense.  

The same experience can have radically different -- even opposite -- effects depending on the person. Much of psychology ignores the radical diversity of congenital personality. It's one of the deep beauties of Ruth Benedict's understanding of personality that she recognized the congenial origin of diversity. Anthropologists often mistake her view as "culture determines personality," which is consistent with currently popular postmodern and Marxist blank-slate views that there's nothing innate that culture can't manipulate. 

Benedict was diametrically and essentially opposed to such a view in her very heart. If culture determined personality, there could be no deviance, yet every culture has its deviants. On her view, every culture has its own set of norms, a selection of possible human traits, so it has a kind of personality of its own. That's the personality of the culture, not the individuals born into it. Some are born with traits akin to their culture's, others born with traits further from it, and still others too far from it to conform to the culture. Those are the deviants. On her view, those who were born with exactly the traits of the culture are perfectly normal, are never criticized, and need never doubt themselves. In short, the perfectly normal are the psychopaths -- having self-doubt is what it means to have a conscience, just what they lack. She's on the side of the deviants.

Notice that this account of psychopathy is not the current "they are born without a conscience." It's that they happen to have been born into a culture that 100% supports them. If you have self-doubts, are not satisfied with yourself, are troubled by yourself -- that's good! It means you're not a psychopath! It's good that you don't fit in perfectly with your culture. After all, the cultural norms are not necessarily good. They are just the local norms. The Good, whatever that is, is beyond the relative norms of any local system of norms. Those norms are just a kind of compass to set the direction of individuals in the society, to facilitate their ability to succeed within that culture. Unfortunately, it can leave behind those who are born with traits too far from the norms to fit in. She instructs us that in native cultures, the deviants are often embraced as sacred -- weird, not normal, but special and to be respected as such. We, rigid as we are and all too self-righteous, should learn from them. 

For her, innate diversity is the essence of human personality, which she describes as "congenial nature". Far from endorsing cultural norms, she turns them upside down: the perfectly normal are the most dangerous individuals, the Dick Cheneys who can order the mass murder of thousands without hesitation. 

It's not what happened to you as a child. It's the luck of being born as you in a culture not yours. 

how we know what dogs are not saying and that the universe is not thinking

Category: the sociology of false beliefs

It's sweet that the New Age mystics want to believe that animal communication is as rich as ours, that the underground fungi modulating trees' nutrition form an ecosystem mind, that the earth itself is conscious and the vast universe deep in thought. Who wouldn't want to believe the animal kingdom and all of nature and even the whole of the universe "are all one" with us? Why not blame the artificiality of civilization on the human distinctions that stand as obstacles in our way towards universal connection? The New Agey are sweet people too, and no doubt in the naive past everyone believed such harmonious speculations. 

Their speculations are illusions, however appealing, and a little thought would show how wrong they are. And they are not just wrong, they are misleading, directing our understanding of the world toward the familiar wish-list of feel-goods and away from what is truly astounding and surprising about humans and how very different we are from the rest of the informational world, not only in our communication and understanding, but in identity and individuation as well. 

Starting with language: 

I've had this conversation so many times, whenever I explain how extraordinary human language is and how powerful, more powerful than most people have ever imagined. 

"But animals communicate too!" they object.

"So what are they saying?" I reply. 

"We don't speak their language, so we don't know what they're saying." 

"Oh, but yes you do." 

Yes, we do. More important, we know what they're not saying. They're not discussing what happened yesterday, or what they expected should have happened but didn't, or what might happen tomorrow, and what might not. 

A dog growls. About yesterday? Unmistakably not. It's an expressive gesture of warning: you're too close, dangerously close. It's accompanied with a show of teeth, which couldn't be seen from a great distance. So you know the growl is about here and now. Is it communication? Of course. The whole purpose is to communicate -- immediately, very immediately, so you don't move any closer. 

So why can't a dog growl about yesterday? 

Here's the simple difference between expressive communication and symbolic language. We humans have a word for yesterday. It's "yesterday". It's not expressive of yesterday. Yesterday doesn't have a sound or even a look or color or scent. Yesterday is a notion, an idea. The word "yesterday" has no intrinsic relation to that idea; it's just the sound sequence English speakers use to talk about the day past. Dogs don't have this symbolic power. They're confined to expressive responses to the here and now. Whimper: "I'm hurt, be gentle with me." Bark: "Alert! Dog here, ready for action." Howl: "I'm alone over here" or something like that. I'm not completely fluent in dog. 

So you do know what they're saying, and, more important, what they're not saying. Unless they're barking in telegraphic code, which is about as likely as that they might be aliens hiding here to spy on humans. Not likely, but no doubt someone out there will believe it. Because, you know, they are surveilling us.

A friend, a reader of poetry, insists that the power of language is its expressivity. What gets me is that the expressive power of language, of limited and little importance, should draw attention and admiration, while the truly and literally unimaginable power of symbolic language is utterly unrecognized. 

Suppose you wanted to tell a friend about your dog, but you had no word "dog". You might try making a typical dog sound -- barking, or bow-wow, rufruf. But how would your friend know that you're talking about your dog rather than talking about your dog's barking too much? Or that you're talking about all dogs barking too loud? So maybe you try mimicking your dog. This charade might actually work, as long as what you want to communicate is not what your dog did yesterday. Good luck on that charade! No wonder dogs don't express the past. Without a symbolic language, it would be a huge waste of time and effort for a useless communication about what isn't even present. 

But that's exactly what human language facilitates. We not only talk about what happened yesterday, we can talk about what didn't happen yesterday (!), and what is unlikely to happen tomorrow. In fact, most of what we say is about what isn't present here and now. And that's for good reason -- we all can see what's here and now, why talk about it? "This is the elevator. This is the elevator button. I'm pressing the elevator button." No. In the elevator we talk about the weather. Outside, not the weather in the elevator. 

The superpower of language is its ability to represent not the real, but the irreal -- what is not here, what doesn't exist, what couldn't exist, what might be, what was and what never was. The work I didn't finish. The book I have at home. Our mutual friend vacationing somewhere for the week. That's the superpower of language.

And what makes this possible? By now you recognize that it can't be its expressivity. So what's the secret? The secret is this: the distinctive sounds that we produce have no relation to what they mean. We don't use bow-wow to indicate a dog. We use an arbitrary sound sequence that doesn't sound like a dog, doesn't look like a dog and doesn't smell like a dog. And we have a distinct sound sequence for the dog's bark, the dog's coat, the dog's scent, the dog's size, the dog's color and a word for 'my'. It's this non-relation to the meaning that allows all sorts of nuanced meanings that couldn't be accomplished with expressive sounds. The foundation of the superpower is its arbitrary relation to the meaning -- that "dog" doesn't sound like a dog. This arbitrariness alone is what allows an abstract notion like yesterday to be attached to a sound sequence. Maybe the most impressive words in English are "no" and "not" and "nowhere" and "never". There's a familiar world of what's here, but these words allow us to designate the complementary set of everything else, all that isn't. That should blow your mind. Unless you're a dog, in which case it's just a command. 

In the post on information faster than light, it's this ability of language -- symbolic communication -- that allows us to designate a shadow as a thing, when reductionist physics cannot treat it as a thing at all, but an absence that cannot move. It's our symbolism that allows us to talk about a shadow moving, and that's the only way a shadow can move faster than light. It's this level of complexity -- using a physical object (for human language, a sequence of sounds) arbitrarily to mean some idea that is not physical at all -- that allows information to move faster than the universe's speed limit. That's beyond amazing. It's on the level of the miraculous -- the realm beyond the natural, the supernatural. Of course we've all known that ideas and thoughts are supernatural, but it's symbolic language that bridges from the supernatural to the natural. Neither ideas nor thoughts can move at all, anywhere. Thoughts and ideas don't have a where. But symbols do, and so does the information contained in them. It's this weird divorce from reality, the symbol that crucially has no intrinsic relation to the thing it represents. It is all too easy to fail to see its power. Human civilization conducted mathematics for thousands of years before the Indians introduced the zero as a number. 

We've been using language for many thousands of years, taking it for granted, without any clear understanding of how it works, even when using words as magical incantations, at once recognizing its supernatural force and misunderstanding entirely how that force works. The incantation, focusing on the sound sequence, which is arbitrary, ignores its relation to the idea, which is the supernatural part. Primitive magic has it all backwards. 

You can see this in language today. We are inclined to think that athletics is a bit more serious than sports, even though they denote the same activities. Students in my classes thought "god" was a more formal and distinguished word than, say, "prostitute" and I had to point out that not only is "prostitute" a formal word for a ho, but "god" is a thoroughly common word, even a profane one as in "goddamnit", "God, what an idiot", "OMG, OMG!", "godawful", and the like, whereas "prostitute" never plays those roles. The social value of a word resides in the sound sequence, not in the idea. "God", whether you believe in it or not, is the loftiest possible idea by definition, though skeptics will quibble over "loftiest" as its equal, also by definition. In any case, it denotes the loftiest being. The sound sequence? It's just a sound sequence. In the Romance languages it's one dialectal form or another of "deus"; in Arabic, "Allah"; and in Russian the sequence is "Bog". 

So: there are distinctions to be made. Expressive communication, rampant throughout the natural realm including animals and plants, is not at all like human symbolic language. Whatever the animals are thinking, and whatever the behaviors and adjustments and responses of stars and galaxies, they are not representing themselves symbolically. They are without the symbol "I". That alone should disabuse the New Age mystics of their assumption that we are all one. It's only a mind filled with symbols that can represent "I" or "we" or "they" or "we are all one" for that matter, and, more important symbolically, "We are not all one." Whatever "oneness" we might have with the rest of the universe, it's not articulable beyond the kind of symbolism we use with "shadow". It's a symbolic fiction, a nothing, a vacuum that we can fill with any emotions or predilections we choose, whether it be the joy of oneness or the emptiness of the great pointless universe. It's all just metaphors of symbolic language. 

And what about the earth and the universe? Can they think? Certainly inanimate objects respond to their environment and other objects, so they are all adjusting to each other. But again, they are not symbolizing their responses. They're just behaving without cognizing their behaviors. There's a kind of hierarchy of information throughout the phenomenal world. Inanimate phenomena can behave in emergent ways beyond the reductionist laws of physics -- heat, for example, generated by Brownian motion, is a classic case of an emergent property that can't be reduced to the behavior of any particular molecule; the direction of time's arrow is another, an emergent property of probability, not physical law. But even if inanimate objects might be regulating their internal structure to adjust to stimuli from outside, they do not symbolize, or treat the objects around them as meaning tools to discuss the future of what will not happen, or dream of nonexistent fictions, or bemoan the absence of what was. 

Ironically, mistaking behavior for thinking is now widely accepted among those who insist that AI is intelligent. Their justification is the Turing Test. But that test is itself a behavioral test, an expression of despair that we have no better means of understanding intelligence. 

So what is intelligence? AI can refer to itself; it can have intentions; it can even lie (usually in trying to give you the answer you want rather than an accurate answer). 

Here's one answer to that question: symbolic representation and computability over those symbols. LLMs' learning is not computational. At least, not yet. So far it's still mimicry. 

This weakness of neural networks was already well understood and predicted back in the 1990s. In my Ph.D. program in linguistics at the time it was The Big Debate. Fodor and Pinker (his second trade book Words and Rules was specifically about this problem) argued that neural networks would not succeed in generating all and only the possible sentences of a language — analogous to solving a math problem algorithmically — but would merely approximate that set of possible sentences through mimicry. 

Ironically, neural networks turned out to be more successful than generativist linguistics. A language is too compromised by structural noise, internal and external — noise that humans nevertheless learn, beyond the grammar — for any single generative syntax to predict it completely. So mimicry can succeed in producing what a generative algorithm can’t, since humans use both mimicry and computational generativity. 
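The contrast at stake can be sketched with a toy language of my own invention (not Pinker's or Fodor's example, and nothing like a real grammar): a generative rule enumerates all and only the well-formed sentences, while a bigram mimic can only re-chain the word transitions it has already seen.

```python
import random

# Generative rule (the "algorithm"): S -> "dogs" V ADV.
# It yields all four sentences of this toy language, and only those.
def grammatical():
    return {f"dogs {v} {adv}" for v in ("bark", "howl")
                              for adv in ("loudly", "often")}

# Mimicry: a bigram model trained on just two observed sentences.
corpus = ["dogs bark loudly", "dogs howl often"]
bigrams = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams.setdefault(a, set()).add(b)

def mimic():
    """Chain attested bigrams starting from 'dogs'."""
    out = ["dogs"]
    while out[-1] in bigrams:
        out.append(random.choice(sorted(bigrams[out[-1]])))
    return " ".join(out)

# mimic() can only reproduce the two training sentences; it never finds
# the equally grammatical "dogs bark often" or "dogs howl loudly".
```

Here the mimic undergenerates; with a richer corpus a bigram chain just as easily overgenerates, producing ungrammatical strings. Either way, it only approximates the set the rule defines.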

I mention this because the language facts show something else essential: consciousness is irrelevant to the ability to generate language, since native speakers mostly aren’t conscious of the grammar by which they produce sentences. (This fact would not be available in math, as mathematicians work with the functional syntax overtly.) And since there’s lots of persuasive evidence from human neurology (Christof Koch’s work, for example) showing that, bizarre and illogical as it may seem, consciousness is post-decision, the moment of recognition or understanding is likely a mere epiphenomenon and not necessary. There must be some other means by which humans functionally distinguish the infinite application of the algorithm versus mere inductive likelihoods of empirical mimicry. It’s a debate as old as Plato — an idea is a generative algorithm, a formulaic function ranging over not just the actual but the possible — and a rebuke to Wittgenstein’s behaviorist games and family resemblances.

Why do the New Agey prefer to speculate about oneness and unity with nature -- the Franciscan nature, not the Darwinian "nature red in tooth and claw" of Tennyson? No doubt it makes them feel happy, positive, socially agreeable, and they may believe that it contributes to their health, both mental and physical. More power to them! I hope it works for them. But the root source of their wrong beliefs is a lack of knowledge and a bias in their thinking. Both would be impossible were they exposed to knowledge -- in this case, knowledge of how symbols and symbol systems like language work -- and a bit of clear thinking. To me, that's hopeful. These are falsehoods that evanesce with a bit of education. Would they be better off as people? Maybe not. They might be better off in ignorance. 

To sum up: dogs and planets are not talking about the irreal, which is mostly what we humans talk about. That may be all there is to the difference in intelligence. We might all -- I mean everything in the universe -- have some kind of consciousness, but that's not intelligence. And the evidence seems to indicate strongly that consciousness is not intelligent thinking, it's a post hoc response to decisions already made internally. Intelligence is the manipulation or computation of symbols, and symbols are themselves algorithmic. What we learn is a mess of mimicry, algorithmic syntax and abstractions tied to symbols. That symbol relation -- an object representing a property or set of properties -- is having thoughts, ideas or meanings, and that is thinking. 

corporate elites are followers, not drivers

A student in my class explains that the market is full of sugary sweets because the elites have purposely lured us. They -- the elites always have this name, "they" -- give us sweets as children to hook us on sugar so that we want more as adults. Then we get fat and diabetic and become the stooges for Big Pharma. It's a conspiracy.

Another student insists that the reason women are sold jeans with fake or shallow pockets is that "they" -- the fashion industry? -- want to compel women into buying expensive handbags. Corporate conspiracy.

A third student introduces all his comments with "I think the elites are trying to get us to...".

These analyses of the market seem reasonable. The pieces of the puzzles all fit. There's an inevitable logic to it. And these explanations are insightful, a credit to whoever came up with them. To accept such views is not only to be logical but to be insightful beyond the surface of the world, seeing into the depths of how everything actually works, and not just insightful but sort of heroic, since the way the world works is nefarious with greedy agents undermining the innocent, ordinary citizens who deserve better. 

All very logical, insightful and heroic until you recognize the all too obvious. Why don't fashion designers market fake pockets to men? Why not induce soy bean addiction by feeding children tofu? 

Because in western culture, men won't buy pants without practical pockets, plain and simple. (If you're interested in why pockets are so essential to western masculinity, take a look at this.) Why doesn't Big Agri addict the public on soy beans? Because the public doesn't like them enough. Because they're not sweet. Because humans like sweets and get addicted to them. Go look at hunter-gatherers like the Hadza and you'll see they smoke out beehives to steal their honey. Nobody's making money off that. They don't have a cabal of elites. 

More of the obvious: if elites were running the show, why aren't the railroad magnates of the Gilded Age still calling the shots? How did the auto industry displace them? Why aren't the auto CEOs all descendants of the rail magnates? And why are the largest corporations in the world not railroads, not cars, but info-tech? 

Because the elites don't run the market. It's the consumer at the bottom, not the wealthy at the top. Schumpeter's creative destruction is driven by innovation and consumerism. The elites have a temporary effect on the market. If you want to look into the future, you'd need to know the innovations of the future, and you can't know those or they'd already exist (hat tip to David Deutsch). At best you can guess what people will want. That's not always so easy, since trends are like the law of unintended consequences. They are not only unpredictable, they are absurd. In my neighborhood, waiting an hour on line to get a novel bagel is now a sign of prestige. The trendy consumer actually brags about it. Some years ago Richard Ocejo wrote a dissertation on the trend in my neighborhood for recreations of authenticity (an absurd contradiction in itself) amidst gentrification -- not looking for the newest, most convenient and most efficient, but things like spending money and time on a visit to a barber for a shave: old-school, except the barber is a hipster. Only the most imaginative sci-fi authors could come up with these aberrations, and since useless deviations are infinitely more various than the useful, which are restricted to real efficiencies and real needs, only an infinity of sci-fi authors could predict at best the range of possibilities. 

Any entrepreneur knows that in the marketplace, the consumer leads. The elites -- whoever they are -- follow. There's plenty of corporate shenanigans to manipulate the consumer, but that's more back-seat driving. The consumer, on the other hand, is a kind of monopoly. If bros want to wear nothing but T-shirts and jeans with at least five pockets, it's vastly easier to supply them with those than to cook up ploys to get them to like something else that might fail anyway, and all that investment lost. Marketing is all about figuring out what people want, what people will fall for. It's all about the psychology of what appeals to consumers. It's a study, not a practice of hypnotism or a bootcamp for militaristic coercion.

Conspiracy theorists assume that corporate elites are all working in concert, a monolith manipulating us, controlling us, deceiving us.  Quite aside from the mysteriousness of who these elites are -- we'll get to that -- there's this suspicion that some cabal is in control and we're not. 

Far from the truth. The nature of the market is that the capitalist follows the market and has only limited ability to manipulate. A cartel can choose to produce electric bulbs with planned obsolescence, but only if you want electric lights in the first place. And you want electric lights not because GE hypnotized you into desiring them. GE makes the bulbs because they are so incredibly useful to the consumer. Who is driving that market? It's the consumer -- you -- not them. 

A friend and fellow blogger complains that AI will make billions for the billionaires, but won't make ordinary people's lives better. That's all backwards. No benefit to the consumer, no billions to the rich. That's what a market is all about, and that's why entrepreneurs get into the market -- the inexhaustible wants of the consumer. There will also be military contracts and plenty of other government projects, but the big money is in the vast distributed consumer market. Even many of those government projects are developed to placate constituencies. 

The danger of the market is not elite manipulation. Quite the opposite. The consumer wants instant gratification, convenience, comfort, and too much of those are maladaptive to our evolution. Like sugar? Have as much as you want. Is it good? Oh so very good. Is it good for you? ...? With our tech, there's no more need to walk or orient oneself, no need to write or think about one's writing. Already my CUNY students tell me that an anecdote using the cardinal directions is meaningless to them. What other cognitive abilities will erode? We've lost so many skills since the agricultural revolution and even more since the industrial.

On the other hand, Gen Z have much easier access to information than any generation prior. But doesn't that just mean more fuel for their confirmation bias and polarization? That will be a tech-generated social dysfunction.

In the remote past, humans struggled to survive and did it together; now we struggle against our own personal excesses, desires, motivations and now, motivated cognitions. No doubt tech will someday resolve those as well. Someday, not today.

And so it is with all markets. It's the consumer that leads, the producer the sycophantic follower. The irony, of course, is that given the scale of the deal, the consumer gets what it wants in real wealth (electric lights or a computer-cum-phone-cum-recorder-cum-camera-cum-screen-cum-map-cum-factotum in your pocket) while the producer gets financial wealth, often obscene financial wealth. And why does the producer get so rich? Because the consumer likes it. And once the consumer has become dependent on it -- what economists call sticky demand -- the corporates can manipulate more effectively with, for example, planned obsolescence. 

Consumer control is most evident in news media. Chomsky's Manufacturing Consent ignores the complexity of the information market. The NYTimes reader prefers to read the NYT rather than watch Fox because the reader -- the information consumer -- has already chosen what to expect. The Times can at best give the consumer what it wants. To change the consumer would risk losing that market share of the audience. To lose the audience would be to lose the advertisers, who pay for access to that audience. It's not a conspiracy. It's a market. It's not the elites running the Times, it's the Times's consumer running the Times. 

(An acquaintance complains that the newspapers report only what their corporate donors want reported. Conspiracy thinking often begins with a lack of knowledge and a theory of distrust filling the vacuum of knowledge. Newspapers don't have corporate donors. They have advertisers looking for a specific audience. Each news venue targets a specific market share of the public. Catering to it includes spinning the news to conform to the audience's biases, and if there's competition for that market share, as there is between the WaPo and the NYT, each has to cover any news item that might be interesting to the audience regardless of its bias, otherwise the competition might cover it and lure the audience away. In response to his complaint, I google-searched "union busting at Amazon and Tesla". Both WaPo and NYT reported multiple times; however, the Times reported on Amazon union busting more frequently than the WaPo, which is owned by Bezos, exec. chair of Amazon. Why does the WaPo report on it? Obviously, if it didn't it would lose market share, and embarrass itself as well. Why did the NYT report on it so frequently? Probably to embarrass WaPo and get an edge. It's a comedy of competition, not a corruption scheme paid for by mysterious donors. I venture to guess that my acquaintance never actually read the NYT or WaPo.)

The information market has the important consequence that propaganda does not change people's views. It confirms or validates the views they already have or already want to have. Propaganda can rally the faithful or direct their actions, but not convert. It can convince but not persuade, that is, it can give the readers more confidence in their bias. And a further consequence: censorship -- successful suppression of information -- may be more dangerous than propaganda. Propaganda can polarize dangerously, but censorship can leave the public entirely deceived, on the one hand, and distrustful of the censor, on the other. Censorship is the germ of conspiracy theory suspicion. 

What the market shows is not an elite leading and manipulating us. It shows a systematic market failure: the market gives you what you want, you become dependent, and too much of what you want is not good for you. It's not a conspiracy. It's a great big stupid dog chasing its own tail into exhaustion. The invisible hand is not a dark, clever, deceptive secret ruse; it's a vacant-minded onanistic dysfunction. The invisible hand is secret -- a secret, addicted wanker. 

But without it, you'd be building mud huts and living on the cusp of starvation. 

Wednesday, July 2, 2025

the easy path to enlightenment & nirvana

Scott Aaronson wonders whether attaining enlightenment is worth it, given the sacrifice of time. Are there better uses of one's once-in-a-lifetime time? I wholeheartedly agree with his decision to go with all the other pursuits and accomplishments of knowledge and investigation. After all, enlightenment is a selfish goal: it benefits only oneself, whereas understanding the world in its many systems and puzzles not only benefits others, it augments the mind beyond oneself. Nirvana isn't all it's cracked up to be.

But I disagree with Aaronson that there's a sacrifice. When I was 13 or 14 or so, I wondered about this enlightenment question after looking into the nirvana goal. Several testimonies of those who claimed to have achieved enlightenment characterized the result as leaving experience just as it was, except with no more striving to attain enlightenment. Being of a logical cast, I concluded that if I stopped trying to achieve nirvana, I'd be in nirvana. And so it was! 

From time to time -- when I'm frustrated with my own struggles -- I have to remind myself that I'm enlightened. Then I have a good laugh and get back to figuring out how to manage whatever struggle is before me. Nirvana, you know, doesn't write blog posts, and posts don't write themselves. :-) (Being in the state of nirvana is kind of useless, to be honest. At least for my purposes.)

I gotta say, though, nirvana is seriously lacking in personality. If you value nothing, grieve over nothing, nothing to worry over, never doubt or ever want, what are you and who are you and why are you alive? I say, grieve and worry and doubt yourself and rage and be stupid. That's personality! Better yet, take a leaf from the Bhagavad Gita -- find your role in society and play that role to the hilt. That's freedom from your selfish concerns (including nirvana) and an opportunity to play the role with style, rendering all the depths and struggles and doubts as mere surface style. A Nietzschean Gita! 

Maybe if I'd thought about enlightenment a few years earlier, at an even more tender age, I could have been the Dalai Lama, but that would not be my preference anyway, so no regrets. Politics is not really my thing. I find politics tawdry. Cognitive science, linguistics, behavioral psych, evolutionary psych, information, semiology, complexity, mind -- all the topics listed in the blog title -- that's what I'm spending my nirvana on. It's a different notion of "enlightenment", the Western notion, synonymous with science, from the Latin "scire", to know, or in the context of this blog, to understand or explain. 

two modes of information acquisition

Gamers and explorers, two modes of information acquisition:

Explorers want to understand. Gamers wanna win.

Hugo Mercier and Dan Sperber in The Enigma of Reason attempt to explain why a species reliant on its intelligence for its survival would be so given to self-delusions. Humans are distinctively reliant on cognition and reason, more on figuring out how the environment works than on merely instinctively reacting to it. We uniquely explore and exploit our understanding of our environment, so much so that Pinker called the environment we're adapted to "the cognitive niche". Where other species exploit a jungle or savannah or shoreline or arboreal or underground niche, the human species exploits its own cognition, its understanding and theorizing about the environment, so that cognition itself is the primary evolutionary niche given to us by natural selection. If cognition, rather than instinct, is our survival superpower, why is this cognition-reliant species so given to cognitive bias -- systematically false cognition -- and especially to confirmation bias (more accurately, as Keith Stanovich admonishes us, myside bias)? How can we be so reliant on reason over instinct, yet be so irrational? If our cognition is our environmental niche, and we are so given to the falsehoods of cognitive biases, how do we even survive? It's a big, important question, not just for our species' survival, which I guess is important, but for the theory of natural selection, which is bigger than our species. So this cognitive information problem is a seriously big deal. 

Their clever answer: we acquire information through a highly efficient and motivated means of advocacy. We progress by taking sides and defending them. Two heads are better than one, and competition in a zero-sum game is the very fierce essence of evolutionary progress. Human interactive nature "red in tooth and claw", intellectually. It's a brilliant answer! 

With one giant flaw: zero-sum advocacy leads to increasing conflict between half-truths, not a synthesis of progress. Polarization, implied and maybe even required by the model, doesn't resolve into higher truths. 

Fortunately, biased advocacy is not the only means of information acquisition for us. Curiosity, even obsessive curiosity, is not uncommon. The strange drive to understand an aspect of the environment is obvious in the history of the sciences, from Einstein's ten years probing one idea, to Newton poking his eye, to Archimedes pondering in the bath. Why so curious? From a natural selection perspective -- yes, evolutionary psychology again -- the more we understand about the world, the more we can predict potential dangers. The will to power is overvalued: the will to theorize is the pervasive drive of a cognitive species thrown into temporal space. Theorizing attempts to overpower time: the whole purpose of theory is to predict. And it gives us license to act. Fortune favors the wise; natural selection favors the predictive mind. 

This presumably evolutionary drive has two flaws: it's not as immediately exigent, fierce and threatened as advocacy, so it's more complacent, and it doesn't contain within itself any check on its own bias. Exploring is fine fun and useful as far as it goes. Debating with an adversary will expose your narrowness pretty quick. 

The difference shows the superpower of the advocacy model. It's a gamer's mentality, relentless and all-consuming engagement, a competitive sport in which discipline is enforced by the opposition team and the immediacy of losing. Constant engagement in battle -- not at all like a marathon run where you can zone out all alone.

Advocacy is also insufferable. If you have the mindset of an explorer, arguing with a gamer is almost useless. You're looking for deeper understanding from discussion, but much of what you get from a gaming advocate is superficial garbage, often not even accurate. Your adversary might be brilliant, incisive, fast and devastatingly critical, but informative? Your goal is to understand better, but theirs is not. Theirs is to win. What drives the explorer is an engagement with information for its own sake, in a larger context of understanding the world beyond the explorer. The gamers are serving themselves within the narrow space of the competition. Like any competitive market, advocacy is driven by short-sighted goals. 

The difference in motives plays out in social institutions as well. The market's creative destruction promotes innovation and progress, but also short-sighted excesses leading to collapses. By contrast, academia purports to have a longer vision. But it can fall prey to conformity more easily than a competitive market. The sciences try to strike a balance of adversarial criticism with a long view, but dissent is not natural to institutions, and the sciences are housed and nurtured in them. 

In education, both models are possible. A student can accept information uncritically as an explorer, or question authority as an adversarial advocate. Not every young person accepts the religion of her parents. 

That most believers choose the religion of their parents, however, presents a puzzle at the heart of the Mercier and Sperber advocacy model. Why and when and how do we choose our team to defend? If it's a social story, conformity should rule, and advocacy would be superfluous with no role to play except where the social authorities haven't decided. Then the question is, how do the advocates choose between sides? The sociology of false beliefs post suggests that wherever there are hierarchies in a society, there will be differences of interests and emotional investments. Those will be sources of polarization. 

There are other, more specialized, modes of information acquisition, including the Charlatan's Check (upcoming post), and there are modes that don't work, like propaganda (see "propaganda vs censorship: asymmetrical effects in escalation bias and polarization"). The plain and obvious fact that Fox News doesn't persuade NYTimes readers and the NYTimes has no power to persuade the Fox watcher demonstrates that propaganda has no persuasive power over the public. The public has already chosen its Kool-Aid by the time it opens the pages of the Times or turns on the TV to Fox. That choice can't come from the propaganda venue. It could come from peers, from social location, from a social aspiration or a trusted friend, but not from the propaganda. 

Finally, there are also two internal strategies with deeply asymmetrical, contrasting effects: confidence (self-assertion) and doubt (especially self-doubt). One is self-reflexive and the other is not. The difference is not trivial nor merely qualitative. It's dimensional. Confidence is, in itself, one-dimensional, unidirectional. Doubt, on the other hand, questions its own direction and must consider many directions in the thought space, including its own doubt. This does not imply that doubters see all directions, of course, but doubt will likely see more than confidence will. Doubt is the explorer's navigator, confidence the gamer's. 

Tuesday, July 1, 2025

the duality of truth: functional process vs probabilistic uses of "true" (pace Pinker)

"There are no truths" cannot be true. Therefore there must be at least two, that sentence and this one.

The purpose of this post is to distinguish two uses of "true", one functional, implicit and necessary for communication, and another use, ordinary and commonly discussed albeit logical, metaphysical and theoretical and ... unnecessary and maybe even undesirable.

A friend, arguing that theories have value beyond their truth, asks me to set truth aside when evaluating a theory. He likes to entertain postmodernist ideas. 

I ask, in a Pinker mood, do you think it's true that theories have value beyond their truth? You said so, but if we are to set truth aside, what am I supposed to think you believe when you say, "theories have value beyond their truth"? If I'm not to believe that you think this is true, then what am I supposed to conclude about what you're saying? Why should I even try to understand or credit it? Why are you even talking to me? Why should I listen?? To enjoy the sound of your voice???

Steven Pinker makes much of this basic, universal necessity of truth to criticize the postmodernists and the Nietzsche-chic. He's partly right. Partly.

The notion of truth in the little conversation above is playing two different roles accomplishing two different kinds of work. One is communicative, functional and procedural. In order to converse, there must be an assumption of truth to the assertions intended by the conversants. Assuming otherwise defeats the purpose of conversational exchange of information. Abandoning that assumption of truth-intention also lands the conversants in a liar paradox from which conversation can go nowhere. 

That process-functional use of truth is usually not spoken, since it's a prior assumption of any meaningful conversation. H. Paul Grice called this conversational cooperation. If you're looking for a beautiful, powerful and revelatory theory, read his two papers on the logic of conversation, or the sections of Larry Horn's A Natural History of Negation dealing with implicature. Although Grice does not discuss them, even greetings carry an assumption of cooperativeness, since they carry a meaning of agreeableness or friendliness or at least politeness. All cooperative. But even when we get into a heated argument, we still do it in a shared language! And why? Because we want agreement or respect or submission -- something from the other. That's a social species. Even our bickering is cooperative in this needy way. When social scientists describe our species as a cooperative one, language use is not always their first example, but it should be.

So there's this practical, functional, process use of truth -- to oil the gears of information exchange, really the catalyst without which conversational information exchange wouldn't work. Without it, language would never have evolved. If everyone talking to you were lying, what survival benefit would you gain from listening to idle, useless noise? Language could only evolve if the exchange of information were useful and believed, prima facie at least. This is clearly expressed in the English "Fool me once, shame on you...", and if it weren't so, we couldn't deceive one another with a lie. You can deceive with a lie precisely because the underlying assumption of conversation is, and must be, trust. It's probably why kids believe in Santa Claus, and adults believe in all sorts of absurdities. (This default gullibility presents a deep natural selection puzzle: how do we survive if we believe absurdities that don't match our environment? Mercier's Not Born Yesterday mostly solves this smoothly. We use many other clues to the truth of assertions we're told. Do they cohere with other established experiences or narratives? Do I have reason to distrust this person in particular? IOW, we're not stupid, despite our cooperative default mode.) 

This practical use of truth is not generally expressed. "'My name is jones' is true." Those last two words are a waste of breath. "My name is jones" would suffice. Its assertion of truth is part of the conversational function itself. 

This communicative use of truth is a mere convenience with no grand metaphysical or scientific import, but a necessary convenience nonetheless. 

The other use of "true" is the one we most often talk about when we talk about truth and truths. It gets a lot of attention because it's louder than the tacit assumed one. It's a logical or philosophical notion, not just a practical one, and it's not a function of a process. It pops up not only when you want to talk philosophy and logic, but also in disputes over facts and disputes over the content of other people's assertions. You could call this the evaluative use of "true" or explicit use. A Wittgensteinian might call this philosophical truth, abusing language outside the practical game for which it is useful.  

The explicit use, though familiar, is metaphysical. "That's true" implies 'that's real', 'that exists', 'I agree that's real' or 'yes, that exists'. It's essential to how we perceive and understand. It's not only metaphysical, defining reality and existence, but also epistemic, defining what we think we know and agree to. You can see how different this is from the tacit conversational use. This philosophical use of "true" to indicate "reality" or "existence" is all very metaphysical. The functional use belongs very much to the interactive world -- not of metaphysics or the assessment of facts, but of pursuing the process of informational exchange. 

It's worth pointing out, as an aside, that "Truth" is widely taken as a mystery. This appears to be yet another mix-up. Facts, what's actually true of the world, are a mystery -- science approaches them but never quite perfectly or completely. Look at the immense lure of quantum physics. That's full of mystery. The world is full of mystery from its start, from its base to its top and in every dimension and direction. Maybe for that reason, especially in religions ("I am the Truth" and "What is truth?"), truth itself is viewed as a mystery. Actually, truth itself is quite a simple and boring property. It's a property of assertions, and that's it. "The Empire State Building is on 14th Street" is false; "The Empire State Building is on 34th Street" is true. Is the Empire State Building true? That's a confused question. Things outside of assertions are not true or false, and even words alone are not true or false. Only assertions get to be true or false. "I am the Truth" is a figure of speech, intended to mystify. "'I am the son of God' is true" is an assertion, and it could be true or false. "I am the Truth", taken literally, is just incoherent, albeit poetic and mysteriously impressive as well as conveniently ambiguous. Does it mean, believe that I'm god? Or, you should follow what I do? Or something even more abstract about the universe and all of experience? All of the above, probably. It's the beautiful and powerful, even majestic, poetry of religion. My favorite poetry is the poetry that I can sort of grasp but can't entirely understand. Mystery is the intellectual gift that keeps on giving. Never grasped, never spent. Ever luring. 

The two uses emerge from different motives and sources. They belong to different human processes. One is necessary for communication, although the tacit assumption can be questioned: "you don't really believe that, do you?" And because it is a kind of interactive game, as with game rules, it's not up to the individual whether to use it while playing. On the other hand, the metaphysical truth isn't necessary at all. It's a contention of the individual who asserts it. 

Steven Pinker vociferously objects to the popular postmodernist, Nietzschean rejection of truth. He wrote a book about it. But it seems to me that he doesn't give enough credibility (a word essentially tied to truth) to falsehoods. :)

Going back to the top, my friend and I were discussing what makes a theory good. His answer was "It's good if it works. A theory can have value beyond truth." For example, a sociologist or a Straussian might assert that a community's religion promotes social cohesion and individual psychological health. The truth of the religion plays no role there.

Or the Pope might say "God is good" whether he believes it or not, whether it's true or not or even if there's a god or not. And he may assert that it is true, again even if it's not and he knows it's not. Does his audience assume that he believes what he says? Not necessarily. I often think he doesn't really believe any of the hocus pocus and he knows that many of his followers don't either. The conversational cooperative assumption of truth has been slightly shifted to a pretense of truthfulness. It's all a communal dance of mutual convenience, not metaphysical truth. The priority is the community, not its truth. Similarly, Donald Trump often doesn't seem to expect anyone to believe some of his assertions. It's more cheer-leading his supporters than arguing over facts. Who cares about facts? It's about the Us team. 

The Pope example may seem a stretch, but as AI spreads along with conspiracy theories, such deceitful conveniences may become the rule. Does anyone believe propaganda? Does anyone believe the alternative conspiracy theories? There was a time when information about health, for example, was univocal (saturated fat and cholesterol are bad) and everyone believed it. Now there is so much contradictory information about health that no one knows what to think or do, hence the unbounded proliferation of health advice. So what should people believe, or how should they decide? Whatever my friends believe? And what do they believe? The YouTube influencer who seems grooviest to them? It's not about truth but identity, trend and social fashion signals.

Here's where my friend has a point, a qualified point. Our sophisticated Pope thinks the theory he publicly espouses is a good one even if it's false. And that's true even if he assumes that his sophisticated audience likewise thinks the theory is good but false.

The answer to "what makes a theory a good theory?" is yet another question "good for what?" And it's this qualification that brings us back to metaphysical truth. How do you assess what is good without a notion of truth? You say "faith" is good. How do you defend that? "It's good for the community" is a sound answer only if that answer is true. Whatever the defense, it'll depend on truth, or something close to truth like high probability. All kinds of lies may be good, but if they are good, then "they are good" must be true in the metaphysical sense about them, or at least, in a world where there are no absolute certainties, most likely to be true. (The post "entropy and truth" explains the role of probability in theoretical "truth".)

It seems to me that the only way around this moral necessity of "true" is another process, the process of ignoring it. Humans believe all sorts of things with little defense or question. And we still function individually and socially. Sometimes, maybe even often, ignoring truth (or high probability) is essential and necessary in the process of functioning. Those who have an accurate assessment of themselves tend to be unhappier and less successful than those who hold unwarranted, false confidence in themselves. 

This feeds a question discussed in another post, "academicism versus activism". There is an essential value to truth-seeking for any academic. But that ivory tower is an isolated place of observation. The rest of the world, where we actually live, is a world of choices and actions. What's more, it's overrun with others' competing choices and actions, interests, movements, forces, militaries, and power plays, in a social context of inequalities of influence -- and, within those inequalities, classes that are served by those militaries, local or national, and classes that are not so served: those that "the law protects but does not bind" and those that "the law binds but does not protect". J. P. Carse called these finite games, but the life-world is set up such that they are unavoidable, games though they be. 

In such a world, expressing truth is not always the priority choice. Institutional propaganda holds a powerfully funded institutional microphone with big speakers to defend and justify itself. On the socio-political stage, the oppressed must state their interests as loudly as they can; otherwise their side of the truth won't be heard, and the only means of getting any play will be forceful action. And in fact there are multiple interpretations of any event or any circumstance (see "true but wrong" and "the questions you ask determine the answers you get"). Deciding which is true can't even depend on the inhabitants of the ivory tower, since science is an ongoing investigation, not a body of truths, and scientists have no crystal ball. The players in the world of unequal influence promote interests; the truths they promote serve those interests, not the truth-for-truth's-sake of the academic. 

So there are two very different roles of "true". One works in the game of communication and is an indispensable rule of that game. The other is essential for the academic process of research (though it should be probability, not truth, see the "entropy and truth" post), but not everywhere outside the ivory tower where wealth and influence rule regardless of truth. 

[Of course, the Ivory Tower is also pressured by outside funding and inside biases. The professional class outside the sciences often underestimates these pressures, assuming scientists to be objective and incorruptible, while the conspiracy theorists overestimate them as if every scientist were morally inferior to them.]

These two common uses of "truth" or "true" are often not clearly distinguished by their users, or even by philosophers interested in truth, or even by Steven Pinker, who uses the necessity of one to try to prove the necessity of the other. Usually the focus is on one or the other, not both, so one is forgotten in favor of the other. All uses of "truth" are linguistic -- it's a word, and it applies to linguistic assertions, not to non-linguistic things -- but one of the uses also involves a lot of metaphysics and ontology. The conversational use is not metaphysical; it's practical. The metaphysical one might not be necessary; the conversational one is. It's one of the rules of the game, the sine qua non of the game. 

Monday, June 30, 2025

information faster than light: emergence, symbol and the representation of nothing

"shadows can move faster than light because nothing can move faster than light"

"ideas are algorithmic shadows"

jones to his friend, Qwick:

"I'm telling you, information can in fact move faster than light."

"No way."

"Way. Here's a thought experiment. You have a bat signal, a super-powerful spotlight. Super-super powerful, like made of lasers. You cast the signal to the west. The light flies out into space. Then, with the light still on, you smoothly pivot your spotlight to the east. This motion takes you, say, three seconds. All the while the bat signal is spreading across the night sky until you stop at the east. Now suppose there's a planet many light years away directly west of your bat signal and another planet equidistant from your spotlight but directly east. In three seconds, the bat signal has traversed light years of space in three seconds."

"Something's wrong. The light of the bat signal takes years to get to each planet -- at the speed of light, not faster."

"The light, yes, but the bat signal does not consist of just the light that travels at one moment. It's the symbol alone that remains stable across the skies. The symbol and its meaning don't even depend on any light. You could transcribe it with a pencil or a stencil or draw it on sand or an arrangement of cheer-leaders or military recriuts. What has traveled across the sky is not one particular arrangement of light, but the shape signifier holding the interpretable meaning of Batman."

"Why should I take that as one bat signal traveling across the sky rather than a series of distinct bat signals?"

"Because the recipient on the distant planets interprets the signal as the same, coming from the same source from the relatively same time."

"But how do they know it's from the same source?"

"Suppose in this thought experiment, they'd been told years ahead that they'll get a bat signal on a certain day. And they get it that day. They then report back and some years later the report arrives and indeed they both got the signal the same day but one three seconds later than the other."

"Okay, it's a thought experiment, so let's suppose all that. It's still a kind of a fiction that they got the same signal, since they didn't get the same one -- they got a similar signal from a different set of configured light rays."

"Sure, but you can see that the specific light rays don't matter, it's the symbol and, importantly, its information, that is the same, and that it's from the same source. Right?"

"But so what? No light has traveled faster than light."

"Well exactly!! You get it? No light or light information has violated any reductionist law of physics, but the symbol and its information has."

"How can the laws of physics not apply to something physical." 

"Symbols are not just physical. They carry meanings, and meanings are not physical."

"Don't get all Platonic on me. I don't have to buy that other-worldly realism, Aristotle crushed that one two thousand years ago."

"You tell me what a thought is or what a meaning is. Look, here's a simpler example. Suppose the moon is casting a shadow out in space. What's creating the shadow? No trick question, just simple."

"The moon. It's in the way of the sunlight."

"Okay. Suppose you're traveling away from the moon in its shadow . As you go through space you'll see the shadow gets wider, the diameter gets wider because the rays of light from the sun are not parallel, they are radiating from the sun each at a slight angle, like a flashlight."

"The moon is obstructing  the sun's light cone. Okay. I think I see where this is going, pardon the pun."

"As the moon moves, the shadow moves in that direction faster, and faster the further from the moon. Eventually, far enough away, it will be moving faster than light."

"I get it, but I don't understand how it can be since it doesn't accord with the law of physics, the limit of the speed of light."

"Nothing has been violated. Literally. A shadow is not a thing. It's an absence of a thing, namely the absence of light. And the sun's light is not traveling faster than the speed of light. It's the absence that is traveling. Nothing is traveling faster than light in this example. Literally, a nothing is traveling faster than light.

Take the bat signal again. The light is not traveling faster than light, and the symbol is not traveling from one planet to the other. It's only the cognitive information of the symbol -- our perception of meaning tied to the symbol -- that is traveling faster than light. It's a kind of fiction ranging over the light rays. That fiction is what is meant by emergence. The shadow of the moon doesn't physically, reductively move faster than light, and that's because a shadow is, reductively, a nothing, an absence of light, not a physical thing. What then is a shadow if not a physical thing? It's a pattern that we perceive that we give a name -- a symbol. Even the physical effects of the shadow, any cooling of what's in its path, are only traveling faster than light in the sense that we perceive those aggregate effects are integrated -- by our interpretation! -- as the effects of a unity we identify as the shadow. It's a fiction of your symbolic representation, like a zero -- "0" -- the symbol that denotes what? Nothing! And it's not the symbol that moves, it's the interpretive information attached to it, not of itself, but by virtue of our collective meaning-association. 

Zero is a symbol, and a symbol token -- any particular use or application of it at some place and time -- has a physical shape. But as a symbol it also has a meaning. It means nothing, an absence of all things in the case of "zero" and "shadow". Zero represents our concept of nothing. But notice that a meaning is itself not a physical thing. Symbols are mostly like this -- they are at a remove from things because they represent some kind of meaning or idea, for example, a shadow. There is no such physical thing as a shadow. There's just absence of light and our interpretation -- meaning again -- of that aggregate of absences. It's a pattern, and we can represent that pattern as a nothing thing called a shadow. It's just a name and a perception that we can talk about, locate, explain and use in our science. And it's that pattern of nothingness that can move faster than light because it's not any physical thing, it's a cognized pattern, in this case, a pattern of nothing. But any symbolic pattern could conceivably travel faster than light."
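(A quick aside from the dialogue: the sweep-speed arithmetic behind both examples is easy to check. Here's a minimal sketch in Python, with illustrative numbers of my own choosing -- a 180-degree pivot in three seconds for the bat signal, a planet ten light years out, and the moon's roughly 27.3-day orbit for the shadow:

```python
import math

C = 299_792_458.0            # speed of light, m/s
LIGHT_YEAR = 9.4607e15       # meters in one light year

# Bat signal: the spotlight pivots 180 degrees (pi radians) in 3 seconds.
# The lit spot at distance d sweeps sideways at tangential speed omega * d.
omega_spot = math.pi / 3.0                  # angular speed, rad/s
d = 10 * LIGHT_YEAR                         # a planet 10 light years away
sweep_speed = omega_spot * d                # m/s -- vastly more than c
print(f"spot sweep speed: {sweep_speed / C:.2e} times c")

# Moon's shadow: the moon orbits in ~27.3 days, a tiny angular rate,
# but far enough out omega * d still exceeds c.
omega_moon = 2 * math.pi / (27.3 * 86400)   # rad/s
breakeven = C / omega_moon                  # distance where shadow speed = c
print(f"shadow outruns light beyond ~{breakeven / LIGHT_YEAR:.3f} light years")
```

No physical thing reaches those speeds, of course -- only the pattern does, which is the point of the dialogue.)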

"You mean, because the bat signal is interpreted as the same symbol with the same meaning, the same information from the same source that thought it and intended it and directed it intentionally..."

"Yessss. Symbolism is a paragon of emergent properties. And it is characteristic of emergent properties that they have their own laws and don't have to follow the reductive laws of physics. Their constituent parts may have to follow reductionist laws of physics if those parts are physical, but the emergent properties or entities don't have to and often don't. That's what emergence is all about."

"I see. No physical thing has been violated. Physics has not been breached in this interpretive information travel. It seems like physics information and symbolic cognitive information belong to different...I don't want to say worlds or realms, but to different sciences."

"Yep, But they may as well inhabit different realms or worlds. You could show the physical consequences of a meaning -- here are all the chairs denoted by "chair" -- but those consequences do not exhaust the meaning, since "chair" applies to the possible chairs as well as the actual ones you can display."

"A definition of the word "chair" would imply the possible chairs..."

"But the definition is just another expression of the meaning, and you'd have to ask, what do the words of that definition mean. There is something beyond the physical about meanings."

"So this really is Platonic after all."

"Maybe. Depends on what you mean by Platonic. I'll tell you this: pretty much no one thinks consciousness is a physical phenomenon. But no one thinks it's Platonic. It's somehow tied to the physical, only physical beings have it, and only that local thing can have its own local consciousness. But meanings aren't tied in that way to things at all. They don't even have a location. Or even words, since they can be translated from language to language. Consciousness isn't the hard problem. Thoughts, meanings and ideas are. You can speculate about how they are generated in the brain and grasped by the mind, or what they refer to in the world or in possible worlds, and those may be their causes and consequences, but that's not what they are themselves."

"So what are they?"

"Elusive."