Saturday, May 11, 2024

About

first, why "deperplex"?


Like "Less Wrong" (my favorite blog title, along with "Wait. What?"), the implication is that even if truth can't be obtained with certainty, falsehoods can be. Also like "Less Wrong", my interests run more toward how we understand theories of the world and its systems -- explanations and thinking -- than toward the objects of thinking, the things in the world itself. I'm an academic linguist (dissertation on applied modal logic of uncertainty, "anepistemic modality" as I called it), so I'm interested not only in what's so about reality, but in how it is represented symbolically and interactively, where truth actually does play a procedural, game-theoretic role.


So the blog is less about what's so about the world and more about how we understand it, misunderstand it, explain it, or get it all wrong, and why.


one Big question


A grossly simplified summary of the immediate motivation for this blog might be expressed in the question,


"Are AI's LLMs just the latest zombie revival of behavioral empiricism?"


To put it differently: must we accept the many deflationary theories of mind, meaning, and understanding that AI implies? What's missing from AI's scaled-up, emergence-driven empiricism?


Well, a lot. And it relates to all the items mentioned in the blog title, "explanation and understanding, reductive or emergent information, interactive intelligence, social discord, certainty and uncertainty, language, thought and mind".


The blog will bring together understandings from linguistics, both computational and distributed connectionist neural networks (AI), emergence vs reductionism and programmaticism (including utopianism), philosophy of science, language, and mind, behavioral and evolutionary psychology, modal logic and reasoning, deflationary theories of mind and understanding, interactive intelligence...


the Big Theme: "explanation"


What is an explanation? Is reductivism explanatory or merely descriptive? Why do we seek explanations, and does our evolutionary drive to explain lead us systematically to misunderstandings? How do the symbols by which we represent explanations distort our understanding? What about game-theoretic explanations? Must emergent properties obey the laws of reductionist properties (the laws of physics)? Can we learn to understand better? Should the goal of explanation be truth or probability? What's the role of truth in interactive, game-theoretic understanding -- holding theories and expressing them? Why does the social hierarchy divide between the virtue-signaling, utopianist class prone to believing propaganda from above through established media and the fatalist, distrustful conspiracists whose fabrications are circulated through distributed, emergent networks from below?


[The social discord theme may seem disjoint from the other themes. It's here because I created a course at the university where I teach on polarization, social media, conspiracy theories and propaganda. Explanation and conspiracy theory are obviously related, and distrustful conspiracy theorizing vs belief in government propaganda can be mapped onto the social hierarchy by level of investment in social stability: who gains the most from the society and who the least seems to have an effect on explanatory beliefs.]


linguistics?


AI and the computer both began in linguistics. Turing developed the modern computer for cryptology, which is machine translation from one linguistic code to another. AI also began with machine translation, and it still learns through symbols and aggregates of symbols -- that is, through language, the symbolic representation of meanings.


The questions of linguistics are questions of emergence & interactivity vs top-down programming, and those questions spread to all aspects of the human life-world, its evolution and well beyond.


I've got probable answers to some of those questions, speculations about others, and in some cases, truths and proofs.


more on the blog themes:


I like to contrast AI's big data with Turing's algorithmic computation, Wittgensteinian behaviorism and positivism with Kantian-Chomskyan rationalism, reductionism with emergence on either side, and game-theoretic systems with structured systems, along with the structured and game-theoretic semiotic systems through which we understand all these, and the emotions and awareness behind them -- that's a sample of the blog topics that arise from the one question at the top.


AI has powerfully revived behaviorist empiricism and deflationary concepts of mind, along with their reductive and emergent explanations. Explanations raise all sorts of questions, many of which are either not well treated or not treated in one place. I'm hoping this blog will bring some of those together. Among them: the role of interactive intelligence -- game-theoretic meaning -- the limitations of the mind and its symbol systems, emergent laws beyond reductive laws... There's a longer list below.


The success of LLMs has obscured the long debate between behaviorism (connectionist neural networks) and rationalism (computational theory), a debate that began at least as far back as Kant vs Hume but was revived by Chomsky, who used Turing computation against the tradition begun by the logical positivists, especially Wittgenstein (both early and late) and Quine. The failures of LLMs raise even more interesting questions about interactivity and natural-selection goal orientations, including emotions.


This blog reflects what I've learnt over the years from teaching linguistics, computer theory, philosophy of science, cognitive science, behavioral psychology, and anthropology; what I've learnt from blogging on logic and from my Ph.D. thesis on applied modal logic; and what I've learnt from thinking about the controversy over emergent systems vs algorithmic ones -- economies vs ideologies, policies vs unintended consequences -- plus about a decade of reading up on economics following the 2008 financial collapse.


So here are some of the themes you'll find in this blog, in no particular order:

  • That reductionism is not an explanation
  • All explanations are emergent
  • Many pursuits, including AI, fail to account for interactivity, and interactivity is always emergent. That includes morality, language, mind, identity-personality, gender, reasoning and understanding; they are all as emergent and interactive as economies, ecologies and societies
  • Emergent properties need not follow reductionist laws (the laws of physics)
  • Emergent properties have laws of their own -- they are sciences
  • American notions of mind are misguided in all sorts of ways
  • Positive evidence is not worth considering
  • People generally don't know how to think: they are satisfied by positive evidence
  • People generally mistake positive evidence for science, which it is not
  • Symbols, including language, are peculiarly susceptible to misunderstanding and deception
  • The mind is experience -- the mind is not in your head, it's everything you experience of the world; it's the world as you experience it (rehashing Kant)
  • Beliefs generate information distinct from the environment (world or mind)
  • Symbol systems generate information distinct from their reference and meaning
  • Prestige can be a great aspirational motivator, and it can also motivate stupid ideas, but once possessed or achieved, prestige is a corrupting disease
  • Utopianism is a convenience of virtue-signaling, but misleading and dangerous
  • Conspiracy theories are respect-signals

And the list is not exhaustive.


