Sunday, March 23, 2025

where the future is behind you

Why do we talk about the future as ahead of us and the past behind us? 

Among the Aymara, an Andean people, it's the other way around. For them, the past is before them, the future behind. That is, the past, which we know with some certainty, having actually experienced it, is like what can be seen in front of us. The future, which we aren't certain of, is the unseen, like what's behind us. Given that vision is our paramount sense, and that, like other predator species, our eyes face front (unlike prey species like squirrels and goats, whose eyes are set to the sides of their faces so they can see the focused predators approaching them), this distribution of information -- certainty of the past before us versus uncertainty of the future behind -- makes perfect sense. It makes so much sense that you wonder why we think the future is ahead of us and the past behind. 

A moment's thought provides a good answer. Since we are a predator species, we want to see our prey in order to capture it. We're goal-oriented, desire-oriented. It's all about what we want and how to get it. Our notion of time is a self-interested one. Time, for us, is the answer to what we want. 

The common word "progress" -- a basic notion of time for us -- always means future and always good. By definition! It's more than just a deep cultural bias towards time, it's a cultural value. 

Think about fashion. Fashion is this progress-value stripped bare of any other good. The latest in clothing, architecture, art, trendy ideas -- they are not improvements in any value except that they are not yesterday's style. In the 50's the coolest ties were thin and skirts were long. In the 60's hip ties were thick and skirts very short. Are thick ties an improvement on thin ones? Is there some benefit to a thick tie? Is there any practical use in these trends? Cultural critics like to analyse the meaning of these differences, but they forget that a) what's most important is the mere difference from the most recent past and b) meanings are typically justifications after the fact. Fashion is progress without any other good than newness -- mere difference, to use the semiological term. 

What about the Aymara, then? What is time the answer to, for them? 

An odd feature of the Aymara language is its grammatical encoding of degrees of certainty. It's impossible to say "It's raining" without including a grammatical piece on the verb indicating whether you know it's raining because you have direct evidence (like "I see it is raining now"), epistemic inference (I see people opening umbrellas, therefore "it must be raining"), or various degrees of uncertainty ("I think it's raining", "it's probably raining"). Now, obviously, English speakers can express all these degrees and types of certainty too -- look at the glosses I just gave. But they are not grammaticalized. They are separated into individual words like "might", "must", "know", "probably" and "I think" and are included at the speaker's will, optionally. Certainty -- degrees of knowledge and evidence -- is grammatically inseparable from assertion in Aymara.

You can see where this is going. It suggests that these degrees of knowledge grammaticalized in their language have a pervasive influence on their perception and maybe their attitudes and culture. For us, information is self-oriented. To the Aymara, information is not desire, but understanding, the gradations from ignorance to belief to knowledge and certainty. To them time is not the answer to what they want, it's the answer to what they can know. 

Maybe it's too much to suppose that our time perspective is all about individual wants. After all, there are many cultures that are collectivist and not so individualist as ours in the US, and their view of the future is just as predatory as ours is. Roman architecture showed no sign of fashion or progress. They thought their style was optimal, so why change? That was generally their attitude towards their culture: "We're the greatest in the world, we rule, why change anything?" -- including their agriculture, one reason for their collapse. Hero of Alexandria invented a steam engine in the 1st century, but did the Romans use it to improve their agriculture or their transport? They used it to impress visiting barbarians with statues moving their limbs or wings, to all appearances miraculously, by themselves. Not a progressive vision. It would be unfair to compare their clothing fashions, since production was so much slower than ours. But it does seem that their sense of civic virtue contrasts with our individualism. How many prominent Romans who had fallen out of public favor chose suicide as a noble and dignified exit? For us suicide is all about individual, solitary, personal despair. Civic dignity? Does George Bush even hide his face in shame, much less sit on a sword? 

On the other hand, the Romans did love any new religious mystery and semper prorsum -- always forward -- was a common Latin motto. 

Lakoff & Johnson's Metaphors We Live By shows that these orientational 'metaphors' -- time is a one-dimensional spatial line with the future before us and the past behind, or good is up, bad is down -- are arbitrary, and their justifications are post hoc. So you might say that the stock market goes up when its value increases, on analogy with a pile of dollars growing in height, but on the other hand, if you pile up a pyramid of gold bars, the greatest value will be in the bottom layer and the least at the very top. "Good is up, the stock market goes up when it increases in value" is arbitrary. Hades was the richest of the gods, his realm the deep-down source of all precious metals and gems -- wealth is down. "High" frequency mouse squeaks come from down low, and thunder, the "low" frequency, comes from up high. It's all arbitrary, and you can find a justification after the fact for any so-called orientational metaphor. 

I do wonder, though, how much different we'd be if we spoke Aymara and admitted that the future is unseen and unknown. Our individualist future seems short-sighted and narrow. How many physicians will admit that what's understood today will be tomorrow's ignorance, today's cure tomorrow's harm? How many of us, knowing how foolish we were in the past are willing to admit that given what we'll know tomorrow, we must be wrong and foolish now?

simple way to encounter your unconscious mind

It happened like this. I'm lying in bed having just awakened in the morning. But I don't want to get out of bed. Like every day. 

I have no trouble waking up. In the last half century, I haven't used an alarm clock once. I tell myself just before I go to sleep what time I'll need to wake up, and just like that, I wake up almost exactly to the minute as planned. I learned this in my adolescence from some radio broadcast describing the method. I tried it and it worked. Fifty years later, I still have no trouble waking up when I need to. It's automatic and accurate. Most animals have a kind of accurate internal clock, and this method merely lets it run a behavior on autopilot. 

Getting out of bed once awake, now that's a whole different problem. 

It's always a struggle. Here's a way to understand the problem. At every moment when I want to get out of bed, I want to stay in bed for just one moment longer, and each tiny moment is not enough to make me late. It's a sorites paradox (exactly which lost hair made me definitively bald?), and I'm stuck in it in real time. I'm not a believer in discipline. I want the exit from bed to be as magically automatic and seamless as waking up is for me. But it's not. It's a struggle and I lose repeatedly, partly because the logic -- that each tiny moment is not enough to make me late -- is inexorable. And even when that logic fails, I'm still struggling with myself: I want to get up but I don't want to get up. Discipline here just exacerbates the struggle. It might help to structure the waking: stop thinking and just get up. But isn't that just as puzzling? Why doesn't "stop thinking and just do" result in staying in bed? It's a real quandary. 
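The bind has the shape of a greedy loop, and a toy sketch makes the trap visible. All the numbers here are invented: each single minute passes the "won't make me late" test, so the rule keeps firing until the whole margin is gone.

```python
# Toy sorites loop: no single minute makes me late, so I take it --
# and repeating that harmless step eats the entire margin.
deadline = 30   # minutes until I must be out the door (invented)
routine = 25    # minutes my morning routine takes (invented)

delay = 0
while deadline - (delay + 1) >= routine:  # "one more minute can't hurt"
    delay += 1                            # ...so take it, every single time

print(delay)  # → 5: the rule stops only at the last possible minute
```

The catch, of course, is that the premise never tells you to stop wanting one more minute; the code halts only because the margin is finite and countable, which is exactly what lying in bed does not feel like.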

The morning I'm describing above, I gave up. I thought, I'm getting nowhere, let me just think about what I'm going to teach today after I get up and dressed and out the door. Thinking about what I'm teaching engrosses me, always. There's so much I want to convey to the class, and I want it to be well-ordered but also comprehensive. It's a lot and I'm devoted to it and I'm soon far away in thoughts about systems and explanations of them and misunderstandings about them and ... then, suddenly, I discover I'm sitting on the edge of the bed. When did this happen?? When did I even decide to get out of bed???

I'm sitting on the edge of the bed, but I don't know when I made this decision to get out of bed. There must have been a decision, and it must have happened while I was thinking about teaching. But I was thinking about teaching, not about getting out of bed. 

You can see where this is going. Somewhere in the back of my mind -- to use a locational metaphor that probably will bias my account of what happened -- somewhere some process obedient to the recognized need for me to get out of bed, moved the levers of my motor functions in the brain and I got up and out of bed without my surface awareness. And "I" -- the surface awareness -- didn't learn about it until well after it was all accomplished. 

I thought to myself (to my aware self), if this is really how my mind works, then I should be able to repeat this process with intent. And so I did the next morning. And every morning thereafter. 

And if I could do this in bed, couldn't I do this with other actions? What action? Some other situation in which I never want to exit but must. The hot shower, of course.

By now you recognize what a hedonist I am. In the shower, I have the same problem. For every moment in the shower I always want to stay just one moment longer. It's like a little mathematical induction. I should stay in there forever or until I drop, wrinkled like a prune. How I ever get out of there, I don't know. Or I didn't know, and now I do. It's when I'm not thinking about the shower. It must be how I always get out of there, but never noticed. So I tried the bed method and, lo and behold, it worked. 

Doesn't that imply that all my decisive choices are like this? Done without my awareness?

There's plenty of research that tells us that our awareness comes late in the decision process. Christof Koch, in his book The Quest for Consciousness, describes the work he did on this -- but he's just one of many. Deflationary theories of the mind like Chater's also align with this observation, and experiments with split brains confirm that the mind justifies its actions regardless of their sources. In other words, what we, using our folk psychology, call our decision-making process -- "I chose to do this because of such and such reason" -- is actually all post hoc: I do; and then my mind invents or figures out a reason convenient to its self-narrative. Descartes got it backwards. Not "I think, therefore I am"; it's "someone's thinking, but it ain't me". :-)

What's new here is that I seem to be able to access this process after the fact, and knowing this, I can game it by letting it do its thing without my struggling with it. It knows I need to get out of bed and turn off the hot shower. I don't need to tell it. All I need to do is think about teaching and systems and ideas, or anything that takes me far from the matter at hand. 

The more I attend to this, the more I observe it. Watching my decision-making process has become almost a commonplace, as if I had a constant companion, a kind of double within me. I haven't yet explored all its underground activities. Does it run my biases? Is it the one who loses appetite when I'm in fasting mode? Just how much influence does it have over me? 

And who is this person? Is he (it?) my obedient self, the responsible one, or the one frightened to be late or diverge from the program? Or does he have a variety of intents depending on his mood or on the circumstances? And if gender is an identity-signaling system, an interactive language, does it even have a gender? It could be hosted by a male body but with no sense of sexual identity at all, just decision-making in response to worries and needs -- or maybe, at most, the needs for the actions our culture assigns to male bodies, and no more gender-narrative than that: a male body with no gender narrative and no identity signals. Or is it sensitive to my gender-signaling needs? It could be my inner heteronormative man. And how can I test this possibly deflationary, flat unconscious mind, aside from just watching its actions post hoc?

More likely, there are many inner Me's. The eater, the exerciser, the self-punisher, the self-lover, the self-defender and self-slayer. Let's not count. 

I observe the automated decision-makers more and more, at almost any moment of action, especially when I'm changing course -- from writing to getting up for coffee, or even grabbing the cup next to me (as I just did), to putting myself together to leave the apartment, checking the range to ensure the gas isn't on (post-Covid, I can't trust my nose to do this anymore). I'm often unaware of these decisions until after I (one of the other "I"s) have made them. And is the other I aware, or is it mechanical? Does it ever have thoughts, and insinuate them into my awareness? I intuit that it is immediately connected to the emotions, and to the biases that are irrepressibly tied to those emotions. How is that different from having a thought? On a deflationary or flat view of mind, there might be no difference. The Other Me runs the biases; the surface Me merely fictionalizes to itself an identity-signaling Me-story. 

And I do see this social Me and the inner Other I. When I first spy someone that I know I have to socialize with but whom I don't really feel comfortable socializing with, I feel a jolt of negative arousal, almost like fear. Surely that must be the Other inner self. 

This is all far-afield. I only meant to explain how to wake up in the morning and get out of bed with no struggle, no discipline, automatically like magic. Try it. See whom you meet, or who meets you.

what the invisible hand can't see

Adam Smith, early in his Wealth of Nations, explains that where there is a need, capital, seeing an opportunity for profit, will go to that need and supply it. For a price, of course. Since the incentive is the profit, the need must be expressed, at least potentially, in money. If there's no money in a particular market, capital cannot see any opportunity. 

What Adam Smith didn't observe in his book was what capital couldn't see -- couldn't see not because it didn't want to, but because it is simply blind to it. And that invisibility is extreme poverty. In a money economy, where there is extreme poverty -- no money -- there is no market and nothing to draw capital to it. 

Poverty is an embarrassing gap in Smith's book. 

Smith's book also doesn't see how short-sighted the invisible hand is. Were the employers of labor to raise wages, consumption would in time likely grow, incentivizing more production and more employment and yet more consumption and more production and more employment and... an upward spiral of increasing wealth from the top to the bottom and back up top. But the market, as we know from the 2008 collapse, is short-sighted -- too short-sighted to see the advantage of raising wages now to benefit from the upward spiral later. In the short run, the market incentivizes the producer to keep wages as low as possible. Marx and Keynes alike saw this. There is no immediate incentive to spend more on labor. And that's because of the necessary character of the invisible hand.  
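The spiral can be caricatured in a few lines. This is a toy multiplier, not a claim about real economies; the propensity-to-consume and pass-through rates are invented:

```python
# Toy wage-consumption spiral: wages fund consumption, consumption
# funds production, production funds wages. All rates are invented.
def spiral(initial_raise, consume_rate=0.8, pass_through=0.9, rounds=10):
    wages = initial_raise
    total_spending = 0.0
    for _ in range(rounds):
        consumption = wages * consume_rate   # workers spend most of the raise
        total_spending += consumption
        wages = consumption * pass_through   # some revenue returns as wages
    return total_spending

# A 100-unit raise generates several times that in downstream spending --
# the familiar Keynesian-multiplier shape of the upward spiral.
print(round(spiral(100), 2))
```

The short-sightedness is visible in the toy too: the employer pays the whole raise now, while the spiral's return arrives spread over many rounds.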

The point of the "invisible hand" metaphor -- and its groundbreaking emergent-property insight -- is very much like Darwin's natural selection, emphasis on "natural", and Galton's wisdom of the crowd. Without any intent to do so, it produces a beneficial end-goal for all members participating. "Without intent" means "without intervention" on the part of thought or analysis or theory. The virtuous goal happens all by itself unintended, like natural selection and the wisdom of the crowd. 

The focus in natural selection, the crowd and the invisible hand is on their successes -- natural selection yields amazing abilities of phenotypes, the distributed crowd yields accuracy beyond experts, and the invisible hand yields consumer surplus and efficiency and wealth creation, all by themselves without any intent to do so. Extinction and starvation, not so much -- out of sight, out of mind. Even market bubbles -- the failure of the crowd's distributed information -- are out of sight. Opportunity blinds us with wishful thinking. Unlike soap bubbles, which are visible until they burst, a market bubble is invisible to those who make it until it bursts. Then it's more than visible, it's felt.
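Galton's result is easy to reproduce in simulation. A sketch with simulated guesses rather than Galton's actual ballot slips (the noise level and crowd size are invented):

```python
# Wisdom of the crowd: the average of many noisy, independent guesses
# lands closer to the truth than most of the guessers do.
import random

random.seed(0)
truth = 1198                       # Galton's ox weighed 1198 lbs
guesses = [truth + random.gauss(0, 100) for _ in range(800)]  # simulated

crowd = sum(guesses) / len(guesses)
beat_crowd = sum(abs(g - truth) < abs(crowd - truth) for g in guesses)

# The crowd's error shrinks like 1/sqrt(n); few individuals beat it.
print(round(abs(crowd - truth), 1), beat_crowd)
```

The same arithmetic shows the failure mode: averaging cancels only errors that are independent, and a bubble is precisely the moment when the crowd's errors correlate instead.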

To see beyond the incentives, or to breed a species, or to prevent a bubble requires thought, theory and the predictive foresight theory affords, and intervention. Maybe Smith could be excused for leaving out the starving if his project were an exclusively scientific one: to describe and explain the market, not to describe or explain circumstances where there is no market, like extreme poverty. If that were so, he would be exclusively analysing the market, explaining not how it should work but only how it does work. But the later chapters of his book are full of prescriptions: educators shouldn't be given salaries but should be paid directly by their students; commercial interests should not be allowed to lobby government lest they intervene in the market. So the book is not just scientific. It has a prescriptive point as well. The poverty gap is not excusable. 

public spitting: transgressive pride and autonomy

I've been puzzled about public spitting for decades, too stupid to recognize the obvious. Men who spit, spit on a world that they see is unworthy of them and their basic dignity. 

Lemme explain. In American culture, spitting seems to be about pride, masculine autonomy and something akin to solidarity or community: "I'm too proud to conform to the norms of a class from which I am excluded. Acceding to such norms would compromise my proud masculine autonomy. My identity of masculine pride is supported by a community of likewise autonomous, anti-elite, self-justifying refuseniks, comfortable and even snug and happy in our community of crude habits." That's a lot of lofty language and reasoning, but it comes down to a preference for peer behaviors cultivated during the youthful developmental stage and an indifference to adult elite notions of etiquette, perhaps perceived as effeminate or effete or fey. 

While the privileged educated elite respect their world, a world from which they derive so much including and especially respect for their education, their sophistication and career accomplishments, the non elite have little reason to respect their environment. "This degraded and degrading world around me isn't worthy of me and my pride. I spit on it freely and trash it as it deserves. I don't think twice about it. Why should I?"

The knock-on effects of respect also include propaganda beliefs and conspiracy-theory beliefs (many blog posts here on this topic). It's evident in the elite cherishing of "proper" English and the respect it gets from high and low. There's been plenty of public attention lately given to privilege -- white privilege, mostly -- and not enough to respect vs lack of respect across the social pyramid. Respect infects our understanding of the world in our theories about it, infects our attitudes about our surroundings and even our perception of language. It also infects our self-esteem and our judgments of others. And it is an interactive game -- we get it from others and even from our surroundings and others' perception of our surroundings. 

Monday, March 17, 2025

the strange beauty of logical positivism and popular and academic misconceptions about it

There are two common misconceptions about logical positivism: 1. that the positivists, like the New Atheists, set out to prove that non-scientific theories, like religion and metaphysics, are false and only science can be true, and 2. that logical positivism fails at its own criterion of meaningfulness. 

(1) gets LP backwards. LP allows religions and metaphysical systems to be not just true but necessarily true, while it's the scientific theories that are possibly false, not necessarily true at all. 

That's the strange beauty of LP. The difference LP draws between theories is not between the true and the false, but between the "meaningful" and the "not meaningful", using a peculiar definition of "meaning". LP doesn't touch on any other aspect or virtue of non-verifiable theories -- their aesthetic value, their mystery or charm or inspirational insight, their moral or social value. Just their meaning, where "meaning" in LP is theoretical jargon for "phenomenal informational impact -- how the world of phenomena and events is and is not." The challenge of unpacking their use of "meaning" such that it isn't circular is the reason for (2).

(2) is flatly false. Apply LP to LP and it verifies. (2) also assumes that LP is a theory and not either a definition or description or an axiomatic system or merely a kind of practical advice like Popper's demarcation. 

The popular misconception has it that if LP is a theory, it should apply to itself, but LP can't itself be verified. I think people who say this must not have tried to apply LP to LP, maybe because "it doesn't apply to itself" is self-reflexive and so clever-sounding that they don't bother to experiment and verify whether the clever is also true. Whatever their reason for not applying it to itself, we can apply it here and now:

LP says that theories are meaningful (in the sense of "tells us how the world is or is not", "what's in the phenomenal world and what isn't") if their statements and predictions about the world are verifiable. Is this assertion verifiable? Sure. God is not a directly verifiable object. "God" is not meaningful in the LP sense of telling us how the phenomenal world is. Religion is meaningless in that sense. Are the bones of dinosaurs verifiable? Yes. Paleontology is meaningful in the LP sense of telling us about the phenomenal world, in this case where to find dinosaur bones. LP is verified by both these cases. LP is meaningful in the LP sense of meaningful, telling us how the world is and is not. 

This is all crude and simplistic, but it shows how to apply LP to LP. Let's try again with something more substantial.

Creationism cannot predict the fossil record. There's no book of trilobites and dinosaurs in scripture, and scripture doesn't need one. It doesn't tell us how the world is, phenomenally. That's an unverifiable theory, and notice: it's a necessarily true theory -- no empirical evidence can prove it false. (The New Atheist will complain about its internal contradictions, but those are logical disproofs, not evidential disconfirmations, and LP is concerned only with evidence. That's a huge difference.) Paleontology does predict the fossil record. Treatises on trilobites and dinosaurs belong to science. Biology and paleontology tell us what we will and will not find when we dig into the earth and find bones. Are they true? Well, not necessarily. They are the most likely theories of the topic given the evidence currently available. True? Who knows what we'll discover tomorrow? And that's one difference between the religious or metaphysical theories and the scientific theories -- according to LP. What we discover tomorrow could trash our current science. It will never trash the religious or metaphysical theories. Are these differences between creationism and paleontology verifiable? Yes, the difference seems to be verified. That difference is the LP criterion, the LP "theory". 

You may have already noticed that creationism is simply silent under verificationism, since it doesn't predict, so you could say verificationism can't apply to creationism. IOW, the problem is not that verificationism doesn't apply to verificationism -- it does -- it's that verificationism doesn't apply to the necessarily true and meaningless (in the LP sense) theories. 

It's important to know that Karl Popper identified an essential flaw in LP. Verifiability runs into the inductive fallacy. Verifying a theory supports the theory but can't prove an explanatory theory -- that is, it can't prove a theory that predicts the possible (as compared with post hoc descriptions of a closed set of observations). Popper replaced verificationism with falsificationism: a "meaningful" explanation must identify the conditions under which it would be false. The consequence is that scientific theories are never provably true; they are just the ones that haven't yet been proven false. There are weaknesses in falsifiability too, but it was an important advance over LP's primitive verificationism. LP was using its confirmation bias to confirm its theory of confirmationism. It's a typically human failure to use clear Bayesian reasoning. 
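The Bayesian point can be made concrete in a hedged sketch (all the probabilities below are invented): a theory whose predictions a rival explains almost as well gains little from each confirmation, while a single failed near-impossible prediction is devastating.

```python
# Bayes' rule: confirmation is weak when the evidence was likely anyway;
# disconfirmation is strong when the theory called the evidence impossible.
def update(prior, p_e_if_true, p_e_if_false):
    num = p_e_if_true * prior
    return num / (num + p_e_if_false * (1 - prior))

posterior = 0.5
for _ in range(10):                          # ten confirmations...
    posterior = update(posterior, 0.9, 0.8)  # ...that a rival mostly predicts too
print(round(posterior, 3))                   # only modestly above 0.5

crushed = update(posterior, 0.001, 0.5)      # one "impossible" observation
print(round(crushed, 4))                     # collapses toward zero
```

This is Popper's asymmetry in miniature: verifications pile up slowly, and one clean falsification outweighs them all.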

The weakness of LP is not its confirmationism alone. It also defined "meaningfulness" in terms of verification and vice versa. Its criterion of science was circular. That's because LP's use of "meaning" is not a theory at all. It's a definition, or an axiom, or maybe a kind of practical advice. Definitions generally don't apply to themselves. "Blue" is not blue, and it would be irrelevant even if it were blue. "Blue" can be used as a kind of practical advice: you can view these objects as having this common property of being blue. "Look at these -- they all have a kind of similar hue. For convenience let's call them 'blue', so they're 'blue-ish'." That's all there is to a definition. And if you can provide for all or many possible additional individuals ("that thing there should be included in the set"), all the better. Most definitions don't apply to themselves, though some do: the set of definitions that define themselves, for example. Not very practical. 

Another weakness of LP, most obvious in Wittgenstein's Tractatus, was the belief that there could be atomic facts, indivisible facts independent of any other facts or ideas or theories. But facts are partly theoretical -- they are conjectural, dependent on the likelihood of the theory -- and like theory, their value is their falsifiability. There's a frequency theory of hues that classes navy blue with sky blue although Russians distinguish them as distinct hues. When I was a child I refused to wear anything navy blue so repulsive to me was this color. My favorite color was sky blue. Which fact is relevant -- that navy blue and sky blue are opposite ends of a single color or that they are two colors? Depends on the theory and its purpose. (See "true but wrong" on this blog.) "Whales are giant fish" belongs to a biological taxonomy that sufficed for the deity in the Book of Job, and that book makes effective, memorable use of it. I have no problem with the "whales are giant fish" theory. It's just not useful for science, a predictive theory of what's out there in the phenomenal world and how it got there. People can wear different hats, you know. 

The lack of conjectural theory in LP led to Wittgenstein's private language argument, a kind of reductio ad absurdum of his verificationism, applying verificationism to the mind. Can experiential states be verified, he asks. Well, on a verificationist model of truth, no. Rather than seeing this as a disproof of the verificationist premise, and rather than seeking a better conjecture, he oddly, and perversely, embraced the absurd result that the experiential is meaningless, and advocated for a kind of behaviorism that prevailed in philosophy and the sciences until Chomsky, in the late 1950s, demonstrated that such a behavioral program couldn't account for the productivity and inventiveness -- the creativeness -- of speech, that the mind played a necessary role in behavior. Chomsky's program was a better conjecture that led to a better understanding of the mind. 

The common view holds that Wittgenstein's later views are a rejection of his earlier logical positivism. I think that's another misconception. His later views are, I think, best understood as pushing his earlier views to their extreme and often counterintuitive and even absurd logical consequences, an insistence on biting the philosophical bullets one after another. It's a wonder he had any teeth left. 

So much for the strange beauty and the misconceptions. 

In sum, "It doesn't apply to itself" sounds like a clever dismissal of logical positivism from those who don't know much about it or don't want to know about it. Unfortunately, they miss everything interesting in it. There were a lot of flaws in logical positivism in its early efforts, but failure of reflexive application is not one. Those flaws are evident in Wittgenstein's early work and in his later work as well, where they produced logical absurdities when applied to the mind, leaving the philosophy of science in an impoverished behavioral model until Chomsky's 1957 Syntactic Structures. That that restrictive impoverishment led to extraordinary behavioral insights -- Ryle's criticism of the Cartesian ghost in the machine, Austin's speech act theory, among many others -- might be a topic for another post. As Jerry Fodor thought and said, behaviorism was provably wrong, but brilliant. 

Sunday, March 16, 2025

the illogic of utopia, the danger of utopianism, and Popper's alternative

If the goal of life were to be happy, no one would have children. 

That's if life had a goal.

When I first saw As You Like It, I was rapt by the first scene in the Forest of Arden. A random gathering of refugees of uncertain future, with diverse talents, backgrounds, personalities and dispositions, including at least one of no apparent talent, all making the best of their lot in the now, together. One, of course, has a lute -- someone always has a guitar -- and sings resonant songs; one is a philosopher enthusiastically espousing deep thoughts; another is a young, impulsive romantic, and a bit stupid; and one, talentless and apparently opinionless, sits on a big chair at the center presiding over their little makeshift society: the refugee Duke, of whom nothing is asked and who asks nothing of his pretend-subjects. 

It reminded me of my very first job. It was in a retail store in Times Square, 1971, all the night-shift employees randomly thrown together with various talents and dispositions: the lively yet gentle, delicate and beautiful Cecilia and her handsome and dashing boyfriend; his comical and transgressively vulgar brother; a young out-of-towner trying to make it in the Big City, impressed with all the craziness of New Yorkers, including the local streetwalker who'd regularly traipse by to entertain us and, just before leaving, pull off her t-shirt to shock the patrons with her bare b**bs then run out the door; and the night manager, a wry, gay composer who wrote musicals for Andy Warhol's transvestite Superstars -- all facing an uncertain future, but facing it in the now together. As the youngest -- sixteen, when they were mostly in their mid-twenties -- I was treated as the mascot. Despite our limited paychecks, we spent Saturday evenings after work very late at dinner together in a local bar & grill. 

If I had to spend eternity somehow, I'd choose that job. The day-to-day concerns and not knowing what life would hold were enough to engage all of us, and we all cared about each other, not in any programmatic or moral way, but the way kids who hang out together know and care about each other. 

I had the same reaction to the old black-and-white 1937 movie Stage Door and the 1960s The L-Shaped Room, both about boarding-house life, in which the characters are thrown together randomly and live out their uncertain lives together. 

That's my heaven, but it's not happy. The option of perpetual happiness sounds to me no different from drug addiction. Is that what people want? Once hooked, maybe, and AI threatens to hook us all into indolence incrementally, like frogs in a heating pot, but who would choose that hook beforehand?

Besides the utopian inclination to sacrifice the now for a distant future that cannot be predicted, utopianism is itself a dilemma: either it is an impoverishment of human well-being (material goods or accomplishments or statuses) or, if it recognizes the richness of human needs, it's not utopian, because some of those needs cannot logically be met -- what makes those needs fulfilling is that they are motivating wants, not accomplishments or gains.

I'm sure you all can think of examples of such motivating needs and wants that lead to engagement, absorption and fulfillment. A big part of what makes hunter-gatherer culture -- the culture that Homo sapiens was naturally selected for through at least two million years of human evolution -- so idyllic (egalitarian, gender-equal, cooperative, no disciplining of children) is the engagement required to obtain basic survival needs. And what is most dysfunctional in our society is the oversupplying of needs (too much sugar, for example, or convenient transport instead of a healthy walk or climb up the stairs) and the property that follows from the surplus. "To each according to his [sic] needs" has failed us. We now have to struggle against our desires instead of struggling to fulfill them.

Karl Popper proposed an alternative to utopianism: a procedural model rather than the usual policy or top-down planned model. The ideal society, for Popper, is one that is a) highly sensitive to informational feedback on its policies, whatever policies are chosen; b) flexible enough to learn from its mistakes and agile enough to fix them quickly; and c) responsive to the needs and interests of its constituents. He thought liberal democracy was such a procedural ideal. 

Since the technologies of the future cannot be predicted, and since technology is a socio-economic game-changer, staunch ideologies and top-down autocracies should have no place in government. Holding onto ideologies obstructs the responsiveness required of policy-makers. 

In this he may have been wrong -- the CCP seems to be more flexible, informationally sensitive and responsive than the US, an ostensibly liberal democracy. And the periodic, and apparently global, fashion for fascism, along with the polarization of the political realm, tells us that the members of society are neither informed nor rational in their understanding of the world. In an autocracy there's little point in holding strong political views, so there's less social polarization; besides, everyone more or less agrees on whether the autocrat is succeeding. Egyptians all recognize that their state is ruled by a military dictatorship. This does not produce polarization or rebellion. It produces general agreement, not with the gov't, but among the people, who have to get by with their uncertain future, together. 

So utopia is a stupid idea. Even an anti-ideological, bottom-up program like Popper's won't work. It's time to accept that the future depends on technology we cannot anticipate, and that the technology of the future will be a game-changer for morality, personality and values. And above all, let's not sacrifice the now for a future we can't understand, much less predict. Take things as they come. The craziness of now suffices for the day. 

It's hard to accept that the future is a foreign country. Doctors seem to have trouble understanding that their knowledge of today will be all wrong tomorrow. Can you blame them? What a sorry profession, dedicated to helping and never doing harm, yet doomed to doing harm. It's no wonder that deal has to be sweetened with so much money. 

student debt conspiracy theory and the "elites" fallacy

The belief that student debt was created by "the elites" to ensure that graduates become yoked to the workplace as obedient, hard-working labor is not only widespread but espoused even by prominent public intellectuals like Noam Chomsky. It exhibits two flaws of bad reasoning: ignoring the obvious and embracing positive evidence uncritically. 

[Disclaimer: I don't know whether their theory is true. Truth is almost always a bit of a mystery. What follows is an explanation of why anyone who believes this theory hasn't bothered to think it through and likely embraced it for the sake of some political or ideological bias, since the belief is contradictory and inconsistent in itself. IOW, it's a stupid belief, true or not, so you shouldn't buy it, and if you do buy it, it's time to reflect on why you're not thinking.]

First, the debtors, by assumption, must use their wages to pay off their debt rather than spend their wages elsewhere. So every dollar siphoned into paying debt is a dollar not being spent in the productive economy -- on purchases of consumables or assets like a house. This debt servicing certainly benefits the financial interests, but not the interests of the productive economy or vendors of consumables or assets. The theory implies that corporations that make stuff and sell stuff are not among the elites. Musk, Bezos, Cook, the Kochs and the Waltons would then not count among the elites. It also implies that "the elites" want to suppress the productive economy. No one who holds this theory has ever complained that it implies "the elites" intend to undermine the corporations that make and sell. But that's exactly what the theory entails.

Second, again by the theory's premise, if the debt did not exist, the graduate would simply work less. But is there any evidence that this is true? Would you work less, or would you spend more? Would you find an apartment without roommates, or a larger living space, or a nicer place in a more interesting neighborhood? Or would you stay where you are, deal with your four roommates, refrain from getting married and having kids, just to work less or take a pay cut for a more relaxed work environment? When choosing a job, which is more important, better pay or less work? Human desires are unbounded. Did the GI Bill result in lower work hours? On the contrary, veterans can't find jobs that match their level of education. They are not choosing to work for less; they're forced to.

Inflation, btw, belies the "more money, less work" prediction. It tells us, "more money, more spending". 

The underlying flaw in this conspiracy theory is the assumption that "the elites" are a monolith with a coherent program. That just isn't so. Debt benefits banks, but hurts the productive economy. Rents benefit landowners, but not the rest of the consuming economy -- that's an old Henry George observation. A burgeoning productive economy would benefit the banks, since there'd be more interest in investment and debt, and benefit real estate as well if wages rise. Everyone benefits from the productive economy -- the owners of capital, labor, finance and real estate. But student debt benefits only finance, not real estate, not the productive economy and certainly not wages or labor. So student debt is not a coherent program for anyone except banks, and banks would benefit without it anyway.

The divisions among "elites" extend further. Purchasing from Amazon is a loss to brick-and-mortar stores like Wal-Mart. Does that mean Wal-Mart is not among the elite? Or was it elite but no longer? GE used to be the giant among corporations, not anymore. And where are the railroad magnates today? They are not the owners of auto factories. Schumpeter described this ongoing shift among corporations: creative destruction. That's a good theory. And it's not a conspiracy of "elites". It's an observation of an emergent, distributed property, like the economy itself. Consumers like innovations that benefit them. No "elite" is forcing it on them. The market closely follows consumer desire, rarely the other way around. 

When someone blames whatever on "the elites", ask, "Which elites?" There are many. And the conspiracy theorist's favorite advice, "follow the money", all too often looks only upward, ignoring the vast distributed funds in consumers' aggregate pockets.