Monday, March 24, 2025

fungible and non-fungible fictions: money vs religion in war

Whom do you trust, and why? Would you trust someone who believes in a fiction that you know is a fantasy? 

On the neoliberal understanding of international relations, trade fosters international peace since war is an obstacle to trade and wealth creation. Money is transpersonal and transnational -- like math and music, it is a universal language, math a measure, music an emotional manipulation, money a price of value.

Like religion, money is also a kind of fiction based on a faith. In the case of religion, the faith depends a bit more on the individual's investment in the religion than on the coreligionists or their investment. If one's coreligionists all turn atheist, one may maintain one's religion without loss of belief. The faith in money depends entirely on the collective investment. Without that collective faith, no value. 

Unlike religion, money is most valuable to those who have the least of it, yet those with a lot of it stand to lose the most if the currency collapses. These paradoxical asymmetries are not only ironic but socially dysfunctional. 

(Compare, for example: I have no religion, so I have no use for religion at all, aside from an intellectual curiosity about those who believe and the history of believers. And those who are most invested in their religion are impervious to any attack on it. The contrast with money is stark.)

The classical economic idea is that all people have needs, creating a vast demand for certain kinds of goods. On that view, trade and comparative advantage are the most efficient means of wealth creation for everyone. Religions -- and here I'm referring to supernatural-based worship-religions, not philosophical advice systems like the Tao -- are not universal needs and don't respond universally to any basic needs they might serve. Some of us believe in multiple gods, others in none but the ghosts of ancestors, others in only one, and some in none at all. Religions are like local currencies, except that there's no currency exchange rate; in fact, there's no exchange market at all. When you convert someone to another religion, you expect the convert to give up the old religious doctrines and values, and you're frustrated if they don't. Currency, by contrast, is not essentialist at all. One of its main purposes is to serve as a medium of exchange. It's anti-essentialist. It serves some purpose other than itself, and it's a universal purpose.

One might conclude that supernatural religions, lacking an exchange market, wouldn't interact with money, a medium of exchange. But they do in national contests, especially in war, and despite the universal recognition of trade trust. That's because religion is a non-fungible fiction. Both properties -- non-fungibility and fictionality -- are essential to its interactive character.

Suppose I have a carrot and I want to sell it. It's not a fiction, but a local resource -- I have it with me, you don't. It has a value on the market, so anyone can see and understand its price, and so anyone can buy it either to use or to resell. It requires only a little trust between buyer and seller to achieve an exchange. 

Suppose I have a religious belief. It has a value to my coreligionists, but it's really a fiction, so not only does it have limited value to non-believers, but I know that others don't see its value. Can I trust those others? How can I, when the belief I hold is a fiction that no one else would buy unless they shared that fiction?

The assumption that others will not share the fiction should incline believers to justify the fiction in order to strengthen it. No one is more entrenched in their views than when challenged by criticism. The fact that the religion is an unbelievable fiction doesn't make it any the less believed. On the contrary, its fictionality inspires more steadfastness of belief. The unbelievabilities flourish and multiply in religions -- djinns, angels, devils, ghosts. It's a Pandora's box, an open door to common-fare imaginings (very unlike the extraordinary revelations of science, which are far beyond common imagination; see the post on the mediocrity of art and the unbounded imagination of science).

Religion, like property, leads to conflict. In the case of property, the threat of violence is essential and definitional: "it's mine" means neither more nor less than "if you try to take it, imma hurt you" or "I'll get someone or some authority to hurt you." But trade, the exchange of resources through money, overcomes this obstacle on the property side. It's the difference between trade and sharing.

Religion has a dual relation to property. If you adopt my religion, I'm none the poorer for it. That's one reason why many are surprised by woke objections to cultural appropriation. You adopt my religion or religious ideas or values, I don't lose; if anything, I win! But if you try to take my religious beliefs from me, then my beliefs are like property, my loss. 

It's often observed that money is a fiction. This is misleading. Money is a fungible fiction, so it facilitates exchange. Not so with non-fungible fictions like supernatural religion.

Sunday, March 23, 2025

The Gates-Musk paradox and the surprising source of distrust

Ever notice that fringe conspiracy theories surround liberal philanthropists but not brazenly selfish libertarians? 

Bill Gates wants to end malaria, not for himself -- he lives in Washington State, for god's sake -- but to help the helpless in the tropics (and yes, selfishly to cleanse his conscience and legacy, and yes, he invests in the cures, but he could invest those funds elsewhere and earn even more). Yet Gates is the regular target of some of the most nefarious conspiracy theories, among them some of the most absurd, like promoting vaccines with the intent of inserting a chip in every body to surveil or control us all.

Meanwhile Elon Musk, who shows no interest in protecting the impoverished or helping the helpless, whose philanthropic trust gives money to his own enterprises -- iow, it's just a selfish money-laundering scheme -- Elon Musk who believes in libertarian selfishness and promotes selfishness, even taking gov't subsidies to float his business and fatten his wallet, who plainly and publicly buys political influence, who really does seem to be intent on actually controlling the world, and who owns the public square itself under the pretense of ridding it of censorship (although his first act was to restrict criticism of himself), this Elon Musk who actually, physically and literally inserts chips in human brains at Neuralink -- that Elon Musk has not a single fringe tinfoil-hat conspiracy theory attached to him. Ever notice that? Isn't that odd?

I want to call this the Gates-Musk paradox. 

I want to be clear at the outset that I'm not complaining that the conspiracy theorists are treating Gates unfairly, and I'm not defending Gates' philanthropy. Gates could be a misguided, arrogant, meddling fool and Musk a brilliant hero of our time (though I doubt that, given other of his proposals I've written about here). I'm interested only in understanding the paradox to see if it tells us anything about theorizing and theorists, that is, about human thinking.

This paradox, btw, is not just true of Gates and Musk. It is a general characteristic of conspiracy theories, maybe even a law.

Consider Soros, another classic philanthropist spending his money on helping the helpless, whether they be despised immigrants or victims of racism or of autocracy. Whatever you think of his goals or his means, they are not selfish. Yes, he wants to influence governments, but to encourage liberal democracy so that all of a nation's people have equal access to rule. You can't call that enslaving the world -- forcing people to choose for themselves what they want of their government -- but world enslavement is what he's accused of attempting. Meanwhile, Peter Thiel promotes monopoly as the best business strategy -- not to benefit society or the little guy, but to make the most money fastest for the monopolist alone. There are no elaborate fictionalized conspiracy theories or wacky conspiracy-theorist calls of alarm surrounding Thiel. But Soros? He's second only to the Federal Reserve among conspiracy theory targets.

Again, I'm not suggesting that Soros' domestic or foreign interventions are good ideas. Personally, I think he's an antiquated relic of the Cold War, the successes of China and Singapore demonstrating that his political proposals are not necessary conditions for social prosperity, and the many market failures of the US sadly demonstrating that his proposals are not sufficient conditions either. Having no crystal ball, I have no idea what will or would be the consequences of his interventions, just as I have no idea what will come of Donald Trump's strategic and antagonistic tariffs on China or his transactional tariffs on virtually everyplace else. It's just remarkable that there are no conspiracy theories targeting Mr. Trump while there are many targeting Soros. 

Or take the Federal Reserve compared with any other bank, say, J.P. Morgan Chase or Goldman Sachs. The Fed pursues a well-defined albeit internally incompatible mission, veering between the Scylla of inflation and the Charybdis of unemployment, and it does this surprisingly well, responsibly and efficiently, given its narrow means, quite unlike the typical dysfunction of government. The giant banks, on the other hand, have no such public-interest mission. Which is the target of conspiracy theories? You got it: the Fed.

So what's going on with this Gates-Musk paradox? It seems as if the conspiracy theory crowd have purposely chosen the wrong targets, welcoming the dangerous Musks and Thiels and Kochs and big banks, while shining a light on shadows that they themselves have cast over the unsuspecting. Crazy, no?

It takes more effort to invent a danger than to acknowledge a public one in plain sight. So why all the attention to the do-goody philanthropists, embracing all the while the self-professed self-oriented and even lying self-promoters? 

The obvious response is that a conspiracy theory has to have an element of secrecy and deception, so such theories can't attach to Musk and Thiel or Charles Koch or the late Sheldon Adelson. And of course that's true about a conspiracy. But consider what that means about conspiracy theories and the theorists' concerns. Are the theorists concerned about nefarious actions and their dangers, or are they concerned about secrecy and deception? Is conspiracy theory about danger or about distrust? The Musks and Thiels also do not inspire trust. The conspiracy theorist is not distrustful of the targets; they're distrustful of reality, of information. The essence of a fringe conspiracy theory is not just distrust, it's the fictionalization of distrust, the irreality that confirms their distrust.

There's a lot more to be said about this paradox, but I want to stop here with this consequence of it: the paradox means that distrust of reality and information (not of danger nor of conspirators) is the focus of conspiracy theory. It may have been obvious that distrust was the driving emotion or cognitive principle among conspiracy theories. My goal in this post is to provide the evidence that this is so. The paradox is that evidence. 


where the future is behind you

Why do we talk about the future as ahead of us and the past behind us? 

Among the Aymara, an ancient Andean people, it's the other way around. For them, the past is before them, the future behind. That is, the past, which we know with some certainty, having actually experienced it, is like what can be seen in front of us. The future, which we aren't certain of, is the unseen, like what's behind us. Given that vision is our paramount sense and that, like other predators, we have front-facing eyes (unlike prey species such as squirrels and goats, whose eyes are set on the sides of their heads so they can see focused predators approaching), this distribution of information -- certainty of the past before us versus uncertainty of the future behind -- makes perfect sense. It makes so much sense that you wonder why we think the future is ahead of us and the past behind.

A moment's thought provides a good answer. Since we are a predator species, we want to see our prey in order to capture it. We're goal-oriented, desire-oriented. It's all about what we want and how to get it. Our notion of time is a self-interested one. Time, for us, is the answer to what we want. 

The common word "progress" -- a basic notion of time for us -- always means the future and always means good. By definition! It's more than just a deep cultural bias about time; it's a cultural value.

Think about fashion. Fashion is this progress-value stripped bare of any other good. The latest in clothing, architecture, art, trendy ideas -- they are not improvements in any value except that they are not yesterday's style. In the 50's the coolest ties were thin and skirts were long. In the 60's hip ties were thick and skirts very short. Are thick ties an improvement on thin ones? Is there some benefit to a thick tie? Is there any practical use in these trends? Culture critics like to analyse the meaning of these differences, but they forget that a) what's most important is the mere difference from the most recent past and b) meanings are typically justifications after the fact. Fashion is progress with no good other than newness -- mere difference, to use the semiologic word.

What about the Aymara, then? What is time the answer to, for them? 

An odd feature of the Aymara language is its grammatical encoding of degrees of certainty. It's impossible to say "It's raining" without including a grammatical piece on the verb indicating whether you know it's raining because you have direct evidence (like "I see it is raining now"), epistemic conclusion (I see people opening umbrellas, therefore "it must be raining"), or various degrees of uncertainty ("I think it's raining", "it's probably raining"). Now, obviously, English speakers can express all these degrees and types of certainty too -- look at the glosses I just gave. But they are not grammaticalized. They are separated into individual words like "might", "must", "know", "probably" and "I think" and are included at the speaker's will, optionally. Certainty -- degrees of knowledge and evidence -- is grammatically obligatory in Aymara.

You can see where this is going. It suggests that these degrees of knowledge grammaticalized in their language have a pervasive influence on their perception and maybe their attitudes and culture. For us, information is self-oriented. To the Aymara, information is not desire, but understanding, the gradations from ignorance to belief to knowledge and certainty. To them time is not the answer to what they want, it's the answer to what they can know. 

Maybe it's too much to suppose that our time perspective is all about individual wants. After all, there are many cultures that are collectivist and not so individualist as ours in the US, and their view of the future is just as predatory as ours is. Roman architecture showed no sign of fashion or progress. They thought their style was optimal, so why change? That was generally their attitude towards their culture: "We're the greatest in the world, we rule, why change anything?" -- including their agriculture, one reason for their collapse. Hero of Alexandria invented a steam engine in the 1st century, but did the Romans use it to improve their agriculture or their transport? They used it to impress visiting barbarians with statues moving their limbs or wings, to all appearances miraculously, by themselves. Not a progressive vision. It would be unfair to compare their clothing fashions, since production was so much slower than ours. But it does seem that their sense of civic virtue contrasts with our individualism. How many prominent Romans, fallen out of public favor, chose suicide as a noble and dignified exit? For us, suicide is all about individual, solitary, personal despair. Civic dignity? Does George Bush even hide his face in shame, much less sit on a sword?

On the other hand, the Romans did love any new religious mystery and semper prorsum -- always forward -- was a common Latin motto. 

Lakoff & Johnson's Metaphors We Live By shows that these orientational 'metaphors' -- time is a one-dimensional spatial line with the future before us and the past behind, or good is up, bad is down -- are arbitrary, and their justifications are post hoc. So you might say that the stock market goes up when its value increases, on analogy with a pile of dollars growing in height, but on the other hand, if you pile up a pyramid of gold bars, the greatest value will be at the bottom layer and the very top the very least. "Good is up, the stock market goes up when it increases in value" is arbitrary. Hades was the richest of the gods, his realm the deep-down source of all precious metals and gems -- wealth is down. "High"-frequency mouse squeaks are down, and thunder, the "low" frequency, is up. It's all arbitrary, and you can find a justification after the fact for any so-called orientational metaphor.

I do wonder, though, how much different we'd be if we spoke Aymara and admitted that the future is unseen and unknown. Our individualist future seems short-sighted and narrow. How many physicians will admit that what's understood today will be tomorrow's ignorance, today's cure tomorrow's harm? How many of us, knowing how foolish we were in the past are willing to admit that given what we'll know tomorrow, we must be wrong and foolish now?

simple way to encounter your unconscious mind

It happened like this. I'm lying in bed having just awakened in the morning. But I don't want to get out of bed. Like every day. 

I have no trouble waking up. In the last half century, I haven't used an alarm clock once. I tell myself just before I go to sleep at what time I'll need to wake up, and just like that, I wake up almost exactly to the minute as planned. I learned this in my adolescence from some radio broadcast describing this method. I tried it and it worked. Fifty years later, I still have no trouble waking up when I need to. It's automatic and accurate. Most animals have a kind of accurate internal clock, and this method is merely letting it run a behavior on autopilot. 

Getting out of bed once awake, now that's a whole different problem. 

It's always a struggle. Here's a way to understand the problem. For every moment when I want to get out of bed, I want to stay in bed for just one moment longer, and each tiny moment is not enough to make me late. It's a sorites paradox (exactly which lost hair made me definitively bald?) or a mathematical induction (if moment n doesn't make me late and moment n+1 doesn't either, then I should never get out of bed... logic!), and I'm stuck in it in real time. I'm not a believer in discipline. I want the exit from bed to be as magically automatic and seamless as waking up is for me. But it's not. It's a struggle, and I lose repeatedly, partly because the logic -- that each tiny moment is not enough to make me late -- is inexorable. And even when that logic fails, I'm still struggling with myself: I want to get up but I don't want to get up. Discipline here just exacerbates the struggle. It might help to structure the waking: stop thinking and just get up. But isn't that just as puzzling? Why doesn't "stop thinking and just do" result in staying in bed? It's a real quandary.
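The stay-in-bed induction can be made concrete in a toy sketch (the numbers are my own illustrative assumptions, nothing precise about any actual morning): each individual moment is harmless, yet lateness is a threshold on the accumulated total, which is exactly where the induction quietly fails.

```python
# Toy model of the stay-in-bed induction. Assumed numbers, illustrative
# only: 30 minutes of slack before I'm late, and each "one more moment"
# costs half a minute.
deadline_minutes = 30.0
moment = 0.5

elapsed = 0.0
steps = 0
# Each step taken alone never crosses the deadline -- the inductive
# premise "moment n+1 doesn't make me late" holds locally at every n...
while elapsed + moment <= deadline_minutes:
    elapsed += moment
    steps += 1

# ...but the loop still terminates: only finitely many harmless moments
# fit under the threshold, so "I should never get out of bed" is false.
print(steps)    # 60 harmless moments
print(elapsed)  # 30.0 minutes used up
```

The point of the sketch is that the premise is true of each step in isolation but false as a rule over the accumulated sum -- the sorites trap in miniature.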

The morning I'm describing above, I gave up. I thought, I'm getting nowhere, let me just think about what I'm going to teach today after I get up and dressed and out the door. Thinking about what I'm teaching engrosses me, always. There's so much I want to convey to the class, and I want it to be well-ordered but also comprehensive. It's a lot and I'm devoted to it and I'm soon far away in thoughts about systems and explanations of them and misunderstandings about them and ... then, suddenly, I discover I'm sitting on the edge of the bed. When did this happen?? When did I even decide to get out of bed???

I'm sitting on the edge of the bed, but I don't know when I made this decision to get out of bed. There must have been a decision, and it must have happened while I was thinking about teaching. But I was thinking about teaching, not about getting out of bed. 

You can see where this is going. Somewhere in the back of my mind -- to use a locational metaphor that probably will bias my account of what happened -- somewhere some process obedient to the recognized need for me to get out of bed, moved the levers of my motor functions in the brain and I got up and out of bed without my surface awareness. And "I" -- the surface awareness -- didn't learn about it until well after it was all accomplished. 

I thought to myself (to my aware self), if this is really how my mind works and gets tough things done that need to be done -- I get over struggles when I'm thinking about something else --  then I should be able to repeat this process with intent. And so I did the next morning. And every morning thereafter. I never try to get out of bed. I just get immersed in something else, and the decision is made for me unawares. 

And if I could do this in bed, couldn't I do this with other actions? What action? Some other situation in which I never want to exit but must. The hot shower, of course.

By now you recognize what a hedonist I am. In the shower, I have the same problem. For every moment in the shower I always want to stay just one moment longer. It's like that little mathematical induction. I should stay in there forever or until I drop, wrinkled like a prune. How I ever get out of there, I don't know. Or I didn't know, and now I do. It's when I'm not thinking about the shower. It must be how I always get out of there, but never noticed. So I tried the bed method and, lo and behold, it worked. 

Doesn't that imply that all my decisive choices are like this? Done without my awareness?

There's plenty of research that tells us that our awareness is late in the decision process. Christof Koch, in his book The Quest for Consciousness, describes the work he did on this -- but he's just one of many. Deflationary theories of the mind like Chater's also align with this observation, and experiments with split brains confirm that the mind justifies its actions regardless of the sources of those actions. Iow, what we, using our folk psychology, call our decision-making process -- "I chose to do this because of such and such reason" -- is actually all post hoc: I do; and then my mind invents or figures out a reason convenient to its self-narrative. Descartes got it backwards. Not "I think, therefore I am"; it's "someone's thinking, but it ain't me". :-)

What's new here is that I seem to be able to access this process after the fact, and knowing this, I can game it by letting it do its thing without my struggling with it. It knows I need to get out of bed and turn off the hot shower. I don't need to tell it. All I need to do is think about teaching and systems and ideas, or anything that takes me far from the matter at hand. 

The more I attend to this, the more I observe it. Watching my decision-making process has become almost a commonplace, as if I had a constant companion, a kind of double within me. I haven't yet explored all its underground activities. Does it run my biases? Is it the one who loses appetite when I'm in fasting mode? Just how much influence does it have over me? 

And who is this person? Is he (it?) my obedient self, the responsible one, or the one frightened to be late or to diverge from the program? Or does he have a variety of intents depending on his mood or on the circumstances? And if gender is an identity-signal system, an interactive language, does it even have a gender? It could be hosted by a male body but with no sense of sexual identity at all, just decision-making in response to worries and needs, or maybe at most the needs for the actions given to male bodies in our culture and no more gender-narrative than that -- male body with no gender narrative and no identity signals? Or is it sensitive to my gender-signaling needs? It could be my inner heteronormative man. And how can I test this possibly deflationary, flat unconscious mind, aside from just watching its actions post hoc?

More likely, there are many inner Me's. The eater, the exerciser, the self-punisher, the self-lover, the self-defender and self-slayer. Let's not count. 

I observe the automated decision-makers more and more, at almost any moment of action, especially when I'm changing course -- from writing to getting up for coffee or even grabbing for the cup next to me (as I just did), to putting myself together to leave the apartment, to checking the range to ensure the gas isn't on (post Covid, I can't trust my nose to do this anymore). I'm often unaware of these decisions until after I've (one of the other "I"s has) made them. And is the other I aware, or is it mechanical? Does it ever have thoughts, and insinuate them into my awareness? I intuit that it is immediately connected to the emotions, and to the biases that are irrepressibly tied to those emotions. How is that different from having a thought? On a deflationary or flat view of mind, there might be no difference. The Other Me runs the biases; the surface Me merely fictionalizes to itself an identity-signaling Me-story.

And I do see this social Me and the inner Other I. When I first spy someone that I know I have to socialize with but whom I don't really feel comfortable socializing with, I feel a jolt of negative arousal, almost like fear. Surely that must be the Other inner self. 

This is all far-afield. I only meant to explain how to wake up in the morning and get out of bed with no struggle, no discipline, automatically like magic. Try it. See whom you meet, or who meets you.

what the invisible hand can't see

Adam Smith, early in his Wealth of Nations, explains that where there is a need, capital, seeing opportunity for profit, will go to that need and supply it. For a price, of course. Since the incentive is the profit, the need must be expressed, at least potentially, in money. If there's no money in a particular market, capital cannot see any opportunity.

What Adam Smith didn't observe in his book was what capital couldn't see -- and couldn't see not because it didn't want to, but because it is simply blind to it. That invisible thing is extreme poverty. In a money economy, where there is extreme poverty -- no money -- there is no market and nothing to draw capital to it.

Poverty is an embarrassing gap in Smith's book. 

Smith's book also doesn't see how short-sighted the invisible hand is. Were the employers of labor to raise wages, consumption would in time likely grow, incentivizing more production and more employment and yet more consumption and more production and more employment and... an upward spiral of increasing wealth from the top to the bottom and back up top. But the market, as we know from the 2008 collapse, is short-sighted -- too short-sighted to see the advantage of raising wages now to benefit from the upward spiral later. In the short run, the market incentivizes the producer to keep wages as low as possible. Marx and Keynes saw this alike. There is no immediate incentive to spend more on labor. And that's because of the necessary character of the invisible hand.
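The upward spiral can be sketched as a toy feedback loop (all parameters are my own hypothetical assumptions, not Smith's, Marx's, or Keynes's): a wage raise is partly spent, that spending becomes revenue, part of the revenue returns as wages, and so on -- a geometric series whose total exceeds the initial raise.

```python
# Hypothetical toy model of the wage -> consumption -> production -> wage
# spiral. Assumed parameters (illustrative only):
initial_raise = 100.0   # extra wages paid in round zero
spend_rate = 0.8        # share of new income spent on consumption
wage_share = 0.5        # share of new revenue paid back out as wages

total_activity = 0.0
injection = initial_raise
for _ in range(200):                      # iterate the feedback loop
    total_activity += injection
    injection *= spend_rate * wage_share  # each round shrinks geometrically

# The series converges to initial_raise / (1 - spend_rate * wage_share),
# i.e. 100 / 0.6: the $100 raise generates about $166.67 of total activity.
print(round(total_activity, 2))
```

The sketch also shows why the spiral needs foresight: the wage cost is paid in round zero, while the payoff accrues only over many later rounds -- exactly the short-sightedness described above.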

The point of the "invisible hand" metaphor -- and its groundbreaking emergent-property insight -- is very much like Darwin's natural selection, emphasis on "natural", and Galton's wisdom of the crowd. Without any intent to do so, it produces a beneficial end-goal for all members participating. "Without intent" means "without intervention" on the part of thought or analysis or theory. The virtuous goal happens all by itself unintended, like natural selection and the wisdom of the crowd. 

The focus in natural selection, the crowd and the invisible hand is on their successes -- natural selection yields amazing abilities of phenotypes, the distributed crowd yields accuracy beyond experts, and the invisible hand yields consumer surplus and efficiency and wealth creation, all by themselves without any intent to do so. Extinction and starvation, not so much -- out of sight, out of mind. Even market bubbles -- the failure of the crowd's distributed information -- are out of sight. Opportunity blinds us with wishful thinking. Unlike soap bubbles, which are visible until they burst, a market bubble is invisible to those who make it until it bursts. Then it's more than visible, it's felt.

To see beyond the incentives, or to breed a species, or to prevent a bubble requires thought, theory and the predictive foresight theory affords, and intervention. Maybe Smith could be excused for leaving out the starving if his project were an exclusively scientific one: to describe and explain the market, and not to describe or explain circumstances where there is no market, like extreme poverty. If that were so, he would be exclusively analysing the market, not explaining how it should work, but only how it does work. But the latter part of his book is full of prescriptions: educators shouldn't be given salaries but should be paid directly by their students; commercial interests should not be allowed to lobby government lest they intervene in the market. So the book is not just scientific. It has a prescriptive point as well. The poverty gap is not excusable.

public spitting: transgressive pride and autonomy

I've been puzzled about public spitting for decades, too stupid to recognize the obvious. Men who spit, spit on a world that they see as unworthy of them and their basic dignity.

Lemme explain. In American culture, spitting seems to be about pride, masculine autonomy and something akin to solidarity or community: "I'm too proud to conform to the norms of a class from which I am excluded. Acceding to such norms would compromise my proud masculine autonomy. My identity of masculine pride is supported by a community of likewise autonomous, anti-elite, self-justifying refuseniks, comfortable and even snug and happy in our community of crude habits." That's a lot of lofty language and reasoning, but it comes down to a preference for peer behaviors cultivated during the youthful developmental stage and an indifference to adult elite notions of etiquette, perhaps perceived as effeminate or effete or fey.

While the privileged educated elite respect their world, a world from which they derive so much including and especially respect for their education, their sophistication and career accomplishments, the non elite have little reason to respect their environment. "This degraded and degrading world around me isn't worthy of me and my pride. I spit on it freely and trash it as it deserves. I don't think twice about it. Why should I?"

The knock-on effects of respect also include propaganda beliefs and conspiracy theory beliefs (many blog posts here on this topic). It's evident in the elite cherishing of "proper" English and the respect it gets from high and low. There's been plenty of public attention lately given to privilege -- white privilege, mostly -- and not enough to respect versus lack of respect across the social pyramid. Respect infects our understanding of the world in our theories about it, infects our attitudes about our surroundings and even our perception of language. It also infects our self-esteem and our judgments of others. And it is an interactive game -- we get it from others and even from our surroundings and others' perception of our surroundings.

Monday, March 17, 2025

the strange beauty of logical positivism and popular and academic misconceptions about it

There are two common misconceptions about logical positivism: (1) the positivists, much like self-righteous New Atheists, set out to prove that non-scientific theories like religion and metaphysics are false and that only science can be true; and (2) logical positivism fails its own criterion of meaningfulness. 

(1) gets LP backwards. LP considers religions and metaphysical systems to be true, in fact necessarily true, while it's the scientific theories that are possibly false, not necessarily true at all. 

That's the strange beauty of LP. The difference LP draws between theories is not between the true and the false, but between the "meaningful" and the "not meaningful", using a peculiar definition of "meaning". LP doesn't touch on any other aspect or virtue of non-verifiable theories -- their aesthetic value, their mystery or charm or inspirational insight, their moral or social value. Just their meaning, where "meaning" in LP is theoretical jargon for "phenomenal informational impact" -- what the world of phenomena and events is and is not. The challenge of unpacking their use of "meaning" so that it isn't circular is the reason for (2).

(2) is flatly false. Apply LP to LP and it verifies. (2) also assumes that LP is a theory rather than a definition, a description, an axiomatic system, or merely a kind of practical advice like Popper's demarcation. 

The popular misconception has it that if LP is a theory it should apply to itself, yet LP can't itself be verified. I think people who say this must not have tried to apply LP to LP, maybe because "it doesn't apply to itself" is self-reflexive and so clever-sounding that they don't bother to experiment to verify whether the clever is also true. Whatever their reason, we can apply it here and now:

LP says that theories are meaningful (in the sense of "tells us how the world is or is not", "what's in the phenomenal world and what isn't") if their statements and predictions about the world are verifiable. Is this assertion verifiable? Sure. God is not a directly verifiable object. "God" is not meaningful in the LP sense of telling us how the phenomenal world is. Religion is meaningless in that sense. Are the bones of dinosaurs verifiable? Yes. Paleontology is meaningful in the LP sense of telling us about the phenomenal world, in this case where to find dinosaur bones. LP is verified by both these cases. LP is meaningful in the LP sense of meaningful, telling us how the world is and is not. 

This is all crude and simplistic, but it shows how to apply LP to LP. Let's try again with something more substantial.

Creationism cannot predict the fossil record. There's no book of trilobites and dinosaurs in scripture, and scripture doesn't need one. It doesn't tell us how the world is, phenomenally. That's an unverifiable theory, and notice, it's a necessarily true theory -- no empirical evidence can prove it false. (The New Atheist will complain about its internal contradictions, but those are logical disproofs, not evidential disconfirmations, and LP is concerned only with evidence. That's a huge difference.) Paleontology does predict the fossil record. Treatises on trilobites and dinosaurs belong to science. Biology and paleontology tell us what we will and will not find when we dig into the earth for bones. Are they true? Well, not necessarily. They are the most likely theories of the topic given the evidence currently available. True? Who knows what we'll discover tomorrow? And that's one difference between the religious or metaphysical theories and the scientific ones -- according to LP. What we discover tomorrow could trash our current science. It will never trash the religious or metaphysical theories. Are these differences between creationism and paleontology verifiable? Yes, the difference seems to be verified. That difference is the LP criterion, the LP "theory". 

You may have already noticed that creationism is strictly ambiguous with respect to verificationism, since it doesn't predict, so you could say verificationism can't apply to creationism. IOW, the problem is not that verificationism doesn't apply to verificationism -- it does -- it's that verificationism doesn't apply to the necessarily true and meaningless (in the LP sense) theories. If that's so, the creationist shouldn't care about evidence to begin with. It's a credo of faith, not evidence. No worries. 

Right at the outset, it's important to know that Karl Popper identified an essential flaw in LP: verifiability runs into the inductive fallacy. Verifying a theory supports the theory but can't prove an explanatory theory -- that is, a theory that predicts the possible (as compared with post hoc descriptions of a closed set of observations). Popper replaced verificationism with falsificationism: a "meaningful" explanation must identify the conditions under which it would be false. The consequence is that scientific theories are never provably true; they are just the ones that haven't yet been proven false. (There are independent theoretical criteria, like probability, discussed here: entropy and truth.) There are weaknesses in falsifiability too, but it was an important advance over LP's primitive verificationism. LP was using its confirmation bias to confirm its theory of confirmationism -- a typically human failure of clear Bayesian reasoning, seeking the useless evidence instead of the useful. 
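That Bayesian point can be put in a toy calculation (a sketch of my own, with invented numbers -- nothing here comes from the positivists or from Popper). Suppose theory T makes a risky prediction (heads with probability 0.99) while theory U is vague (heads with probability 0.5). A confirming observation nudges our credence in T only modestly; the outcome T nearly forbids devastates it:

```python
# Toy Bayesian comparison (invented numbers, for illustration only):
# theory T predicts heads with probability 0.99 (a risky prediction),
# theory U predicts heads with probability 0.5 (a vague one).

def update(prior_t: float, heads: bool) -> float:
    """Posterior probability of T after observing one coin flip."""
    p_heads_t, p_heads_u = 0.99, 0.5  # each theory's prediction for heads
    lik_t = p_heads_t if heads else 1 - p_heads_t
    lik_u = p_heads_u if heads else 1 - p_heads_u
    num = prior_t * lik_t
    return num / (num + (1 - prior_t) * lik_u)

# Start undecided between T and U.
confirmation = update(0.5, heads=True)    # the predicted outcome occurs
falsifier = update(0.5, heads=False)      # the nearly forbidden outcome occurs

print(f"after one confirmation: P(T) = {confirmation:.3f}")  # ≈ 0.664
print(f"after one falsifier:    P(T) = {falsifier:.3f}")     # ≈ 0.020
```

A single confirmation lifts T from 0.5 to about 0.66; a single potential falsifier drops it to about 0.02. The informative evidence is the observation that could have refuted the theory -- which is the Popperian advice in Bayesian dress.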

The weakness of LP is not its confirmationism alone. It also defined "meaningfulness" in terms of verificationism and vice versa. Its criterion of science was circular. That's because LP's use of "meaning" is not a theory at all. It's a definition or axiom or maybe a kind of practical advice. Definitions generally don't apply to themselves. The word, category or idea "blue" is not blue, and it would be irrelevant even if it were blue. "Blue" can be used as a kind of practical advice: you can view these objects as having the common property of being blue. "Look at these -- they all have a kind of similar hue. For convenience let's call them 'blue', so they're 'blue-ish'." That's all there is to a definition. And if it can accommodate all or many possible additional individuals ("that newly discovered thing there should be included in the set"), all the better. Most definitions don't apply to themselves, though some do: the set of definitions that define themselves, for example. Not very practical. 

Another weakness of LP, most obvious in Wittgenstein's Tractatus, was the belief that there could be atomic facts: indivisible facts independent of any other facts or ideas or theories. But facts are partly theoretical -- they are conjectural, dependent on the likelihood of the theory -- and, like theory, their value is their falsifiability. There's a frequency theory of hues that classes navy blue with sky blue, although Russians distinguish them as distinct hues. When I was a child I refused to wear anything navy blue, so repulsive to me was this color. My favorite color was sky blue. Which fact is relevant -- that navy blue and sky blue are opposite ends of a single color, or that they are two colors? It depends on the theory and its purpose. (See "true but wrong" on this blog.) "Whales are giant fish" belongs to a biological taxonomy that sufficed for the deity in the Book of Job, and that book makes effective, memorable use of it. I have no problem with the "whales are giant fish" theory. It's just not useful for science, a predictive theory of what's out there in the phenomenal world and how it got there. People can wear different hats, you know. 

The lack of conjectural theory in LP led to Wittgenstein's private language argument, a kind of reductio ad absurdum of his verificationism, applying verificationism to the mind. Can experiential states be verified, he asks. Well, on a verificationist model of truth, no. Rather than seeing this as a disproof of the verificationist premise, and rather than seeking a better conjecture, he oddly, and perversely, embraced the absurd result that the experiential is meaningless, and advocated a kind of behaviorism that prevailed in philosophy and the sciences until Chomsky in 1957 demonstrated that such a behavioral program couldn't account for the productivity and inventiveness -- the creativeness -- of speech, and that the mind plays a necessary role in behavior. Chomsky's program was a better conjecture that led to a better understanding of the mind. 

The common view holds that Wittgenstein's later views are a rejection of his earlier logical positivism. I think that's another misconception. His later views are, I think, best understood as pushing his earlier views to their extreme and often counterintuitive and even absurd logical consequences, an insistence on biting the philosophical bullets one after another. It's a wonder he had any teeth left. 

He did succeed in distinguishing language from philosophy, igniting a productive interest among philosophers of language. They contributed a lot to the understanding of linguistic semantics, though maybe not to philosophy. It wasn't until Grice's work that the distinction was resolved. 

So much for the strange beauty and the misconceptions. 

In sum, "It doesn't apply to itself" sounds like a clever dismissal of logical positivism from those who don't know much about it, don't want to know about it, or would like to dismiss it as mere scientism. Unfortunately, they miss everything interesting in it. There were a lot of flaws in logical positivism in its early efforts, but failure of reflexive application is not one of them. Those flaws are evident in Wittgenstein's early work and in his later work as well, where they produced logical absurdities when applied to the mind, leaving the philosophy of science in an impoverished behavioral model until Chomsky's 1957 Syntactic Structures. That that restrictive impoverishment led to extraordinary behavioral insights -- Ryle's criticism of the Cartesian ghost in the machine, Austin's speech act theory, among many, many others -- might be a topic for another post. As Jerry Fodor often said, behaviorism was provably wrong, but brilliant.