
Thursday, May 16, 2024

NYU imperatives workshop

Originally published on Language and Philosophy, March 21, 2016

Do scientists from differing disciplines have the same goals in addressing the same facts? Linguists attempt to accommodate all the natural language intuitions in their theoretical frameworks. That may lead them to extralogical means. Logicians have often taken on one or another natural language intuition and attempted to augment the logic to accommodate it. In both cases there’s a question of purview: why not accommodate all the intuitions through the logical system, or how much of the logic should accommodate the intuitions?

This became the battle at a workshop on imperatives at NYU today. Craige Roberts incorporated pragmatics into her analysis of imperatives to include a wide variety of natural language intuitions, while Kit Fine and Peter Vranas developed new logics to deal with some, but not all, of the intuitions. Both seemed to ignore that traditional logics are not just inadequate to linguistic intuitions, but also inadequate to basic facts about reality. If we assume that the development of logics is still in its infancy, the attempt to accommodate each outstanding challenge is a step towards a more inclusive, flexible and useful logic.

Think of Kratzer’s lumps of thought. She observed that a single event can be represented in multiple descriptions which in sentential logic would imply multiple events, or at least would fail to imply that the descriptions were of the same event: if Sally made a painting that was a portrait depicting her sister, sentential logic would imply three events (or at least not imply one event) in “Sally made a painting and painted a portrait and painted a picture of her sister.” That these three descriptions are of one event is not a linguistic intuition; it’s a fact of what Sally did. To formulate a logic that can identify these conjuncts as three descriptions of one event would be progress for logic, not for linguistics.

So it seems to me logic is justified in picking its challenges independently of the needs of linguistics. The real test would be between AI and neurolinguistics — how are imperatives represented in the brain, and how can they best be represented in a robotic program? I didn’t see anything from the linguists giving a brain-representation argument the way, say, Chomsky did with syntax. There doesn’t seem to be an experimental program to follow, as there was with generative syntax. The logicians, on the other hand, were always mindful of the algorithmic value of their logic, but that’s why they are logicians.

There was also an interesting exchange on whether the background conditions of an imperative are factual or relative to the speaker or addressee. So “if it’s raining, take an umbrella” can be evaluated on whether it’s actually raining or on whether the speaker thinks it’s raining. Does it matter whether it’s actually raining for the force of the imperative to hold? Roberts, the linguist, wants it to be contextual information of the speaker; Vranas wants to take it as factual so that the entailments can be validated within his three-valued logic. At first these seem to be different views — why should it matter whether it’s actually raining, since the imperative is the speaker’s insistence? But if there are only beliefs, and no facts, both views are the same. The force of the imperative, shared by the speaker’s intention and the addressee’s understanding of it, will shift if the addressee comes to believe that it’s not actually raining.

The two talked past each other for about an hour. The problem is a really tough one. The entailments of speakers’ assertions are trivial: “Sally said ‘I’m lying’” is true just in case she said it. So as an assertion, it’s true. But the content, indexed to the speaker, is a paradox. It’s worth remembering that three-valued logic began with an attempt to incorporate the epistemic into the logic. The result is a loss of the distinction between the factual and the epistemic. But there’s an underlying problem: no one knows what is factual; all we know are our beliefs. Deductions from our beliefs will always be trivial; deductions from facts will require extralogical overlays for the epistemic. I worked out the problem a few years back here. I’ve complained that trivalence flattens modality here.


Bossy jerk

Originally published on Language and Philosophy, February 9, 2016

Sheryl Sandberg, Chief Operating Officer of Facebook, has created the Ban Bossy campaign to encourage girls to be leaders. Many celebrities have expressed support for the campaign, and even advertisers have taken up the cause as a means to market to women.


Sandberg makes several distinct claims about the use and meaning of “bossy.” Some have merit, others are misleading. All of them are fruitful for understanding cultural roles, inequalities, and how they play into perception, attitude and emotional response. I want to take them separately and look at some data.


  • “bossy” is used more to describe females than males

  • this disparity shows an inequality in our cultural stereotypes

  • cultural stereotypes influence our perception of behavior and our emotional response to it

  • the cultural role of boss is masculine, so males can’t effectively be disparaged by “bossy”

  • cultural feminine roles include nurturing roles, not boss roles, so females playing the boss role are perceived as inappropriate

  • cultural masculine roles include boss, so when men abuse their authority or are pushy or bossy, their behavior is accepted as a norm

Evidence supports some of these claims but not others. A linguistic analysis leads to a more complex relationship between cultural roles/stereotypes/expectations and human attitudes/perceptions/emotional responses that may be independent of the culture. I’m using a beautiful data mine developed by Ben Schmidt. It mines Rate My Professor, an online website that allows students to review their professors. Since the professor’s name is identified, the reviews can be sorted by professor’s sex, give or take a few ambiguous names. Professors are quintessential authorities, so the reviews are perfectly suited to an understanding of the use and frequency of words like “bossy.”
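To make the method concrete, here’s a minimal sketch of the kind of per-sex frequency comparison Schmidt’s tool supports. The corpus, the tagging, and the function name are all hypothetical; it assumes reviews have already been labeled with the professor’s inferred sex:

```python
import re
from collections import Counter

# Hypothetical miniature corpus of (sex_tag, review_text) pairs. The real
# corpus behind Schmidt's tool holds millions of Rate My Professor reviews.
reviews = [
    ("f", "She is so bossy and mean in lectures."),
    ("m", "Total jerk, but a brilliant lecturer."),
    # ... many more tagged reviews
]

def per_million(word: str, sex: str) -> float:
    """Occurrences of `word` per million words in reviews of one sex."""
    tokens = [w for s, text in reviews if s == sex
              for w in re.findall(r"[a-z']+", text.lower())]
    return 1e6 * Counter(tokens)[word] / len(tokens) if tokens else 0.0

for word in ("bossy", "jerk", "mean"):
    print(word, per_million(word, "f"), per_million(word, "m"))
```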


First, the data clearly show that “bossy” is used more often for female profs than for male ones, although it is used substantially for male professors too. Does this imply that female professors are perceived as bossier than males? That is the Ban Bossy claim — women are rejected in positions of authority. A quick look at “jerk” seems to refute that claim.

“Jerk” is used exclusively for males and it appears in the corpus far more frequently than “bossy” — something like 35 times more frequently. That’s not a little. It’s a huge difference. Are there other negative epithets that might be used for women that are more frequent than “bossy”?

“Mean” is also used more frequently for females than for males. Does this support the Ban Bossy view?

The distribution of “jerk” implies that our language has gendered epithets: “jerk” is for males, “bossy” for females. If that’s so, then the fact that “bossy” is used more for females than for males implies nothing about the emotional response to female roles. It’s used more often because “jerk” is the preferred epithet for males.

The data actually show the opposite of the Ban Bossy view of emotional response to female/male role or expectation. Students object to male authority frequently, possibly more frequently than to female authority. The greater frequency of “mean” for females shows the same: why describe a male as “mean” when there are so many more, and more expressive, epithets for men, including not just “jerk” but “dick,” “douche,” “dickhead,” “prick,” “douchebag,” “son-of-a-bitch,” “bastard” and the declining “schmuck”? Rate My Professor no longer allows the most common epithet for males, “asshole,” but the data mine provides partial data — I assume that Rate My Professor blocked below-the-belt epithets shortly after they appeared.

Couple of points here. The wealth of epithets for men implies that in our culture we freely object to male abuse of authority. It’s enshrined in the language. The frequency of their use demonstrates that we object to male abuse of authority. So the differential use of “bossy” is a purely linguistic fact, not a fact about our perceptions influencing emotional response. We dislike abuse of authority whether the authority is male or female.

The data also show that our language is gendered. There seem to be many more epithets for male abuse of authority than for female, which does very much correlate with the social fact that bosses are mostly men — or that, through the development of our language, bosses were mostly men.

Notice that both “bossy” and “mean” are not particularly gendered in themselves; they are literally descriptive, neither metaphorical nor metonymic. All the vulgar male epithets are metaphorical or metonymic or both: they refer to taboo body parts, some of which metaphorically relate to acts of sexual violence, or they relate metaphorically to the social stigma of illegitimacy. In the context of Rate My Professor, “bossy” and “mean” may indicate a second choice after “bitch,” which RMP will not accept in a review. Not exactly a euphemism, but a kind of nonce euphemism.

More important, there are many negative words for females, but they do not cover the abuse of authority. Several examples: “ditz,” “airhead,” “twit” (used for both females and males), “bimbo.” I compare these with cultural female/male attire: pockets are characteristic of male attire; not only are pants and jackets full of pockets while dresses, skirts and blouses are largely devoid of them, but taking a minimal pair — men’s jeans and women’s jeans — you’ll find that women’s jeans’ pockets are often shallow and useless, whereas men’s are deep and many. Pockets are utilitarian in the sense of managing the outside world through tools. Pockets hold those tools. Women’s wear is designed for attractiveness (whether for the male gaze or otherwise), not any other utility besides covering and warmth, and is often inadequate for both of those.

Putting the attire next to the epithets, a pattern emerges. The cultural roles for men are ones of control and manipulation of the world including other people. The response to their aggressive control is a wealth of epithets that object to male power. The cultural roles for women include aesthetic appeal. The negative epithets might be described as “pretty but useless.”

It seems to me important that the responses to authority in RMP show that our attitudes towards these cultural roles do not numb our emotions. Any expectation that the boss will be male does not incline us to accept the abuse of authority or prevent us from objecting to it in the strongest terms. So we can distinguish between the cultural roles and the perceptions of them. The data imply to me that culture does not determine thought; it just gives us different ways to express our thoughts depending on cultural categories.

The Ban Bossy campaign has given us an important avenue of research to discover

a. the cultural roles embedded in our language

b. the independence of our responses to those roles

The feminist agenda is a fruitful lens through which to investigate not just the facts of our society — inequities of pay and power — but also the culture and attitudes in our language and our perceptions.

Part II — Questions for further research

A more disturbing fact in the data is the disparity in use of “brilliant” and “genius.” These are not gendered words, yet they are used to describe males more frequently than females, and “genius,” the more hyperbolic word, is even more biased towards men than “brilliant.” Assuming that females are at least as bright as males, if not brighter, how do we account for this disparity in perception?

In this case, I speculate that this is not a linguistic fact but a behavioral and perceptual reflex — exactly the opposite of the “bossy” analysis, which is merely about the lack of available gendered words for female abuse of authority. If males are brought up in our culture to be special, competitive and superior, while females are brought up to be servants — the nurturer, the mother who serves her children, the caretaker — it would be no surprise if the male instructor in class would present himself as special, competitive with his ideas and superior, while the female instructor would be focused on the students.


use

Originally published on Language and Philosophy, May 30, 2012

In the latest New Yorker, Steven Pinker quotes his defense of the dictionary: “it is not just a matter of opinion that there is no such word misunderestimate, that the citizens of modern Greece are Greeks and not Grecians, and that divisive policies Balkanize rather than vulcanize societies.” Given that language is always in flux, on what principled ground can these be judged? Misunderestimate is redundant, but what of it? Language is full of redundancy, and if some underestimations are benign then maybe misunderestimating is not exactly redundant. If Grecians becomes current, then Greeks will be an anachronism; same with vulcanize. Stranger things have happened to English.

So what’s the purpose of a dictionary? Shouldn’t it be a source of scholarly information — about who uses Balkanize, vulcanize, Grecians and misunderestimating, and why, and how their use came to be? When did scholarly information come to include prescriptions on use? Shouldn’t that be left to the newly informed reader?

Whether you choose to avoid the intensified “same exact” for fear of someone (as another letter-writer in the same exact issue of the New Yorker) thinking that you are unthinking, that’s a choice you make between using language as if it were logical and systematic (which it can and maybe should be at points) rather than expressive (which it can and should be too). Most such complaints against illogical use are little more than gotchas to show one’s linguistic or logical acuity, which though admirable for its acuity is at least as deplorable for its smug, nit-picking derisiveness.

After all, ain’t has its place for effective use and so has “Have you finished your homework, yet?” (also from that same letter-writer J.A.F. Hopkins — note the many names). I monitor my own use, but I have garnered not a few enemies for it. Not everyone loves a pedant, and some resent them deeply.

I do not embrace loss of linguistic distinctions. I regularly hear “it begs the question” meaning the uninteresting “it leads to another question,” and it’s been years since I’ve heard anyone use “beg the question” in its old sense of “that’s not an answer but a circuitous restatement of the problem,” which was always such a clever rebuttal. But before I’d conclude that English is dying, I’d want to understand better exactly why the changes occur. In this case, it is not that speakers are losing the ability to recognize empty circular reasoning. Begging the question was a rare form belonging to philosophical discourse. The change has not been a loss of the expression, but a popularization. People outside of philosophy are using it, and they use it for their purpose. Within the philosophical community, the expression still thrives exactly as it was and no doubt with the same frequency.

Language is an accommodation to communication for the interchange of information and socialization. That’s what’s interesting in language — not that the language is abused, but why: what conditions of the language system allow for those changes, and what pressures on expression drive them. Same exact is the familiar case of hyperbole that gave us terribly good and awesome and the British brilliant for “very useful.” And brilliant is itself a dead metaphor.

A twenty-something friend regularly writes “could of and would of” although he had an expensive education and fancies himself a writer, no less. His excuse is, “language is always changing.” But that’s clearly not relevant: he would never write, “I could certainly of, but ofn’t, and you would definitely not of, and in fact you ofn’t.” So his language hasn’t changed, he’s just chosen to spell the word in one grammatical position as a different word. One response is: what an idiot — can’t he see that his own usage is inconsistent? But the interesting response is: what is it that hides his inconsistency from him? It’s not that he’s incapable of thinking about the use of “have.” Anyone can do that once it’s pointed out. It’s that this “have” is not really a verb at all. That’s where it gets interesting — asking, not judging.

Now what do people think they mean when they say “I could care less”? — especially since “I couldn’t care less” is so incisive and expressive.


Hypercorrection, schemata and UG

Originally published on Language and Philosophy, June 28, 2007 

A student asks why “she and I” sounds so much better than “I and she.” A simple, but resonant question — the bias for the former has the strength of a grammatical intuition, the stuff syntactic theories are made of. So it’s not a trivial question. Evidently the schemata that we learn, especially, I imagine, those we learn at an early age, embed themselves deeply — and inflexibly — alongside our original and much more flexible, productive grammar.

Corrections, including hypercorrections like just for you and I, fall into the category of memorized forms along with formulaic and schematic utterances. Their relation to the grammar of the language is incidental, but they are strongly imprinted on memory, in some ways more inflexibly than grammatically generated forms. They show up in places where the standard forms no longer carry any grammatical function. Along with formulae and schemata, they are a part of language deeply embedded, but agrammatical. They show some instructive contrasts with forms grammatically generated.

Maybe a word first about the “original” grammar. Such structures as “Me and my mom went to Disneyland” are frequent in many “non standard” dialects of English. Yet the same speakers who naturally utter them would never say “Me went to Disneyland.” That’s Tarzan-talk to them. Concluding that these speakers are speaking ungrammatically or failing to be consistent in their speech would miss the point utterly and entirely.

The point not to be missed is this: oblique case — or whatever you want to call the me form — is not really the accusative or dative grammarians claim it to be. The me form seems to be a reflex of distance. Conjunction (and), though it seems pretty simple, actually introduces significant distance between subject and verb, assuming the basic structure of the language to be a context-free grammar with some modifications (see below, “Syntax for the uncertain”). If the “me” form is induced by such distance, we have an explanation for the “me and her went” dialect, which seems to be the default mode for English, since it turns up untaught in so many dialectal varieties, whereas the “she and I” variety seems to turn up only in the taught versions of English.

In other words, “me and her went to Disneyland” reflects the natural grammar of English; “she and I went” reflects a crude human intervention, entirely ignorant of the underlying complexities — and power and beauty — of the grammatical machine structure.
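To make the “distance” idea concrete, here’s a toy sketch using NLTK. The grammar is my own crude illustration, not a serious grammar of English (it deliberately over-generates); the only point is that conjunction adds a layer of embedding, putting each pronoun structurally farther from the verb:

```python
import nltk

# Toy grammar: a conjoined subject embeds each pronoun under an extra NP.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> PRO | NP CONJ NP
VP -> V PP
PP -> P N
PRO -> 'me' | 'her'
CONJ -> 'and'
V -> 'went'
P -> 'to'
N -> 'Disneyland'
""")

parser = nltk.ChartParser(grammar)
for sentence in (["her", "went", "to", "Disneyland"],
                 ["me", "and", "her", "went", "to", "Disneyland"]):
    for tree in parser.parse(sentence):
        # Depth of each pronoun leaf: a crude proxy for its structural
        # distance from the verb.
        depths = [len(pos) for pos in tree.treepositions("leaves")
                  if tree[pos] in ("me", "her")]
        print(" ".join(sentence), "-> pronoun depths:", depths)
```

On this toy grammar the bare pronoun subject sits at depth 3, while each conjunct of “me and her” sits at depth 4, one layer farther from the verb.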

Note an important contrast: in the untaught variety, “Her and me went to Disneyland” is also possible, though less likely; in standard, “I and she went to Disneyland” just doesn’t sound right. Sounds awful. Yet “I and she” is easy to understand. That’s one mark of memorized form as distinct from grammatical form: violations of grammar are usually uninterpretable gibberish, while violations of memorized forms may sound odd but still be comprehensible.

The difference between a grammatical reflex and a memorized scheme

There’s no question that we use formulae and schemata all the time in our speech. We repeat the same structures over and again with different words, sometimes with the same words. A lot of speech shows, disappointingly, little productivity. My friend Diana Sidtis is compiling a list of English schemata, and the list is getting long.

The prevalence of formulae and schemata has been used to diminish the importance of the generative program — quite wrongly, since the generative program is as much justified by the sentences that cannot be processed in a language as by the unbounded number it predicts can be processed (once again, see below, “Syntax for the uncertain”).

Hypercorrections fall into the category of memorized forms. They show up in places where the uncorrected forms no longer carry any grammatical function. The difference between “I” and “me” was strongly grammatical in Old English, but today it mostly marks a difference in style, not comprehension. There’s a wonderful sentence in the Anglo-Saxon Chronicle telling the story of the fate of St. Columba’s island after he died [here in modified transliteration]:

There stowe habbeth yiet his ierfenumman.

The place still have his followers.

If I ask students what’s grammatically wrong with this sentence, they reply, 99% of the time, “have” is wrong; it should be “has”:

The place still has his followers.

Only once has someone suggested the subject and object need to be reversed:

His followers still have the place.

That’s, of course, the meaning of the chronicler. In Old English, “habbeth” indicates a plural subject (“his followers,” not “the place”). Word order indicates nothing.

Today, word order (really, order of syntactic category) provides all the grammatical relations. If “the place” comes first followed by the verb, “the place” must be the subject, regardless of what form the verb takes. The difference between “have” and “has” indicates nothing grammatical at all. It typically indicates personal facts about the speaker like level of education or dialectal variety or style: “I has one/I have one,” “She have it in her room/she has it in her room.” These are not functions of grammar. Grammar is the brain’s means of processing and communicating content, not social status. With the exception of plural, progressive, past, comparative and superlative markers, inflections have lost grammatical function in English. Even the possessive has been replaced with word order in ICE (Inner City English):

They covered with they blood.

Pronoun + noun = possessive + noun

Into this space where the standard insists on retaining non-functional forms, creep the hypercorrections: between you and I, which has spread recently among reasonably well-educated folks to for you and I. I hear both of these in film and TV, always scripted for the educated characters. Only working-class characters use the standard form from twenty years ago, between you and me, for you and me.

Notice again that it is the conjunction and that allows the form for you and I among educated English speakers who would never dream of saying It’s just for I.

Hypercorrections (for you and I), like standard corrections (she and I left), are memorized forms.

Corrections and grammar

What I find suggestive here is that hypercorrections appear to be schemata: they are most likely memorized forms and they do not have much flexibility in contrast with generative grammar (“me and her went”) which is flexible and therefore not likely to be a memorized form. The suggestive conclusion — to spell it out: formulae and schemata needn’t be part of generative grammar at all. Memorization is as deeply rooted as grammar, but it is not grammatical. And vice versa, grammar is not memorized.

This cuts against both the Chomsky program and the anti-Chomskians. It means that much of the data of speech will contain deeply rooted non grammatical structures unrelated to universal grammar (UG, the innate grammar capacity which makes it possible for us to learn language as children just by hearing it — without having it taught to us), making the project of discovering UG all the more difficult. It also means that schemata don’t tell us anything interesting about grammar, though they do say something important about how the mind processes language: it has to be done with more than just the grammar processor. It’s got to use a simple template archive.

It’s not all bad news for the Chomskians. It leads to a diagnostic: if it’s inflexible, then maybe it’s not grammatical. Pare away all the inflexible structures of speech and you should be left with the original grammar. So the work that Sidtis is doing, collecting the schemata of English, though it is being gathered from the perspective of those who want to diminish the significance of generative grammar in speech, should be taken as an invaluable resource for discovering UG — specifically, for determining what part of English speech must be removed so that only the pure grammar remains. It’s a tricky task, because no doubt some, probably most, of the schemata follow the grammar of the language. So there’s no guarantee anything will be left. But if flexibility or productivity is the test, pieces can be returned, one by one.

Well, the mind is a big and powerful place. I don’t see why anyone should be surprised that it uses many modes — a grammar fully flexible and productive within its machine limits; memory only minimally flexible: open only to lexical or phrasal substitutions.

In other words, generative grammar doesn’t need to worry about the order “she and I” vs. “I and she.” It can be left out without prejudice to the theory of UG.

Wherever syntacticians gather, they quibble over grammatical intuitions. Maybe we should start looking more carefully at our intuitions and separate the memorized schemes from the generative rules.


Saving Grice’s theory of ‘and’ (with Kratzer lumps!)

 Originally published on Language and Philosophy, June 11, 2007

I’ve always considered Grice’s theory of conversational implicature to be one of the most beautiful theories around. But nowhere is beauty so tightly yoked to truth as in the sciences, where beauty, in the form of simplicity, will decide between two otherwise equally powerful theories. (It’s kind of remarkable when you think about it — truth and simplicity seem not only distinct, but unrelated, unlike, say, truth and accuracy or consistency. A complex theory will cause more complexity in its relation to other theories, but if it’s still true, why should complexity ever matter? Is preference for simplicity just a bias?) Truth seems to be a necessary condition for the beauty of a theory in science, so if Grice’s theory isn’t true, all its beauty is lost. The application of conversational co-operation gets messy at and, impugning its truth. I’ve got an idea on how to clean up the mess and restore the symmetry of the structure.

Grice’s analysis of “and” goes like this:

Sometimes “and” is interpreted as simple logical conjunction

1. I brought cheese and bread and wine.

The order of conjuncts doesn’t change the meaning: I brought bread and cheese and wine; wine and cheese and bread; bread and wine and cheese; wine and bread and cheese; it’s all the same. This use of and is symmetric, exactly like the logical conjunction &: A & B ⇔ B & A

But sometimes and carries the sense of temporal order, “and then”

2. I took off my boots and climbed into bed.

(I think I got this example from Ed Bendix some years ago)

This conjunction is not symmetric: taking off your boots and then climbing into bed is not the same as climbing into bed and then taking off your boots, and the proof of the difference, you might say, comes out in the wash.

The difference in meaning, according to Grice, arises from the assumption that the speaker would not withhold relevant information or present it in a confusing form. If the order of events matters, the order of presentation will follow the order of events, unless otherwise specifically indicated. So if I said

I climbed into bed and took off my boots

you’d be justified in surmising that I’d come home very late and very drunk.

The theory of conversational implicature avoids the undesirable circumstance that there might actually be two homonymic “and”s in English, one meaning “&” and the other meaning “and then.”

A problem for Grice was observed long ago by Bar-Lev and Palacas (1980, “Semantic command over pragmatic priority,” Lingua 51). They noted this wonderful minimal pair:

3. I stayed home. I got sick.
4. I stayed home and got sick.

If Grice is right, (3) should mean

3′. I stayed home and then got sick.

But it doesn’t. It means

3″. I got sick and therefore stayed home.

Now unless we are willing to say that the sentential boundary is a morpheme with meaning, we are compelled to drop Grice. Worse still, even though (3) means (3″), the sense of “and then” returns as soon as we add “and” between the sentences. (4) means

4″. I stayed home and then I got sick.

even though that’s semantically unexpected. So it’s not about semantic bias, this violation of Grice’s principle. It’s a very real problem that Bar-Lev and Palacas pointed out.

So what’s with “and”?

Here’s my suggestion.

a. In order to use “and” you’ve got to be introducing something new. Think of Angelika Kratzer’s lumps of thought: you’d never say “I painted a portrait and my sister” if you’d only painted one portrait and it was of your sister. Information is structured in clumps of truths that the logical connectives don’t respect. Yes, a portrait was painted and a sister was painted, but if these two things were accomplished in the same act of painting a portrait of one’s sister, then they are in some sense the same fact, though two truths. Now notice the difference between:

“I painted a portrait. I painted my sister.”

Could be the same event. Not so easy to get the same-event interpretation from

“I painted a portrait and I painted my sister.”

The and implies a distinct, newly introduced fact not lumpable with the antecedent event.

b. Causal relations are internal to an event.

Put (a) and (b) together and you have an explanation for (3) and (4). I have a good deal more to say about this, but it’s really nice out, and I’ve been in all day.

More about and: a contextual, situational connective?


A few examples:

1. Pat washed her sweater and ruined it.

2. Pat ruined her sweater and washed it.

3. Pat ruined her sweater. She washed it.

(1) means, I think, that by washing it Pat ruined it. The sentence allows and because washing doesn’t entail ruining; ruining is a consequence, not a cause.

(2) means that Pat ruined the sweater and then washed it presumably in an attempt to fix it, the outcome of which attempt the sentence doesn’t reveal. It can’t be read, as (3) can, to mean: Pat ruined her sweater by washing it.

Now, (3) can be read also as: she ruined her sweater then washed it. That’s not surprising. What’s surprising is that (3) has the grammatical-consequent-as-semantic-antecedent reading as well, while (2) doesn’t. So the explanation above has to be modified a bit:

(a′) consequences are external to an event — they are new facts justifying and

(b′) causes are internal to an event — they lump with their consequence and don’t justify and

Bar-Lev and Palacas use another example that goes something like this:

Napoleon took thousands of prisoners and defeated the army. (=and then)

Napoleon defeated the army and took thousands of prisoners. (=and then)

Napoleon took thousands of prisoners. He defeated the army. (=backwards cause)

So even when real-world knowledge biases the reading in favor of backwards cause, and prevents it.

Here’s another strong example against real-world experiential bias. In answer to the question, “What did you do today?”:

I went to the store and I went out. (two unrelated round-trip forays outside, the latter possibly to a bar or club)

I went out and I went to the store. (two related events: one followed by a consequence: and=and then)

I went out. I went to the store. (One round-trip foray, the consequent explaining the antecedent)

I went to the store. I went out. (Two events: the consequent can’t explain the antecedent, so they are interpreted as two distinct events)

Given a context in which going out explains going to the store, this last sentence pair should reduce to one event, if this analysis of and is right. I think it does: if the question is, “Did you or did you not go out today?” the answer: “I went to the store. I went out,” indicates one event, the antecedent indicating the specific event and the consequent clause explaining how the antecedent is an answer to the question.

This last example also shows that it’s not just cause that is internal to an event, but anything that explains the antecedently described event. Explanation seems to be the informationally relevant function from utterance to acceptability. Explanations are internal to a fact. The next step in this investigation would be to figure out what kinds of information qualify as explanations / internal to the fact, and what kinds as additional, new information external to the fact.

And or &: ideas for a contextual logic

On one view of this analysis, it looks like Grice was partly right about and. There’s just one and. But he was wrong to equate English and with logical conjunction &. The one and in English carries a conventional implicature just as but does, but where the conventional implicature of but requires the denial of some association of the antecedent clause, the conventional implicature of and requires that the consequent add some information external to the antecedent clause. and always means something like and also, carrying the conventional implicature that what follows and is additional information external to what preceded and.

There’s an alternative to explore for fun. Suppose Grice was completely right that and means the logical connective &. It’s just that the logical connective & is not the familiar one. Its truth values are dependent on the relationship of consequent with antecedent. I mean, why couldn’t we have a causal-relations-based logic? It would be very different from familiar freshman logic, but it might be a lot of fun and useful too. This connective (I’ll use “+” to avoid confusion with traditional conjunction “&”) would not be symmetric:

a+b ≠ b+a

and there could be two ways of dealing with the truth tables:

if a and b are true and a causes b, then a+b = t

if a and b are true and b causes a, then a+b = f

if a and b are true, and a and b denote distinct facts that are causally unrelated, then a+b = t,

otherwise a+b = f.

The last two clauses cover “I painted a painting and painted a portrait” — two conjuncts denoting the same fact. That sentence will be false if the conjuncts denote one fact/event, true if they denote two causally unrelated distinct facts/events (assuming that the sentence asserts no causal relation in either direction).
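Here’s a minimal sketch of how such a connective might be evaluated against a toy model. The Model class and its fields (true_atoms, causes, same_event) are my own illustration of the truth clauses above, not an established formalism:

```python
# A toy model for the non-symmetric connective "+", sketching the truth
# clauses above. All names here are illustrative inventions.
class Model:
    def __init__(self, true_atoms, causes=(), same_event=()):
        self.true_atoms = set(true_atoms)
        self.causes = set(causes)                        # (a, b): a causes b
        self.same_event = {frozenset(p) for p in same_event}

    def plus(self, a, b):
        """Evaluate a + b against the model."""
        if not (a in self.true_atoms and b in self.true_atoms):
            return False               # otherwise, a+b = f
        if (a, b) in self.causes:
            return True                # a causes b: a+b = t
        if (b, a) in self.causes:
            return False               # b causes a (backwards cause): a+b = f
        if frozenset((a, b)) in self.same_event:
            return False               # one lumped fact, not two: a+b = f
        return True                    # distinct, causally unrelated facts: a+b = t

# "Pat washed her sweater and ruined it": the washing caused the ruining.
m = Model(true_atoms={"wash", "ruin"}, causes={("wash", "ruin")})
print(m.plus("wash", "ruin"))        # True
print(m.plus("ruin", "wash"))        # False: backwards cause

# "I painted a portrait and painted my sister" as one act of painting:
k = Model(true_atoms={"portrait", "sister"},
          same_event={("portrait", "sister")})
print(k.plus("portrait", "sister"))  # False: the conjuncts denote one event
```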

(Now, I’ve forgot the second way I was going to do this. Well, it’ll come to me.)

Ah, yes. [Two years later.] How about defining what is included in an event or using the connective to do that work?

a+b entails that b is not included in a

where “included” means either ‘denoting the same event’ or ‘causing’.
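Continuing the toy model above, here is a sketch of this inclusion-based variant, again my own illustration; inclusion folds the same-event and backwards-cause clauses into a single test:

```python
def included(m: Model, b: str, a: str) -> bool:
    # b is included in a: b denotes the same event as a, or b causes a.
    return frozenset((a, b)) in m.same_event or (b, a) in m.causes

def plus_v2(m: Model, a: str, b: str) -> bool:
    # a + b holds only when both are true and b is not included in a.
    return a in m.true_atoms and b in m.true_atoms and not included(m, b, a)
```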

It may seem odd to contextualize truth values so that they depend on denotations and situational relations, but truth values are themselves semantic and denotational. We’re just shoving the contextualization deeper in the muddy murk. Why not have logical connectives that reflect the language or reflect thought?

One application would be to lumps of thought. The whole notion of lumps is model-dependent / context-dependent. Here’s a context-dependent (model-dependent) connective that reflects the lumping of reality.

I can think of some obvious objections to a context-dependent logic. It’s not really truth-functional in its syntax. The falsehood, for example, of a+b does not entail either the falsehood of a or the falsehood of b: a+b could be false simply because b is included in a. But something like this is true of other familiar logical connectives. For example, the truth of a v b does not entail the truth of a or the truth of b: it might be that b is true and a false, or a true and b false. The difference between + and v is that the truth value of v depends only on the truth values of the statements it joins, while the value of + depends also on event/fact inclusion.

How are the connectives syntactically interdefined? How can deductions be proved syntactically? What would the laws of deduction look like?

Cliff-hanger.