Here are some papers about...
Links to book introductions, and essays on logical form, can be found elsewhere.
Good escape routes to work by others: PhilPapers and LingBuzz 
Or...back to my homepage

Semantic Internalism

One Word, Many Concepts (To appear in the Oxford Handbook of Contemporary Philosophy of Language, E. Lepore and U. Stojnić, eds.).

Polysemy runs deep. But if polysemy runs deep, meanings do not map contexts to extensions--and sentences do not express propositions, or mappings from contexts to propositions--not even if each sense of a polysemous expression maps contexts (of using the expression with that sense) to extensions or proposition-constituents. A word like 'window', which has more than one sense, can be used neutrally, without being used to express any particular sense.

Fostering Liars (Topoi 40:5-25, 2021)
This paper--like I-Languages and T-sentences and a corresponding chapter in Conjoining Meanings--explores connections between two challenges for truth-theoretic semantics: Foster's Problem and contingent liar sentences. I think these challenges are best seen as two aspects of a deep objection, with potential responses to either aspect making it clear how hard it is to respond to both aspects in any plausible way.
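To give a rough sense of Foster's Problem (my gloss, not an example from the paper): a truth theory whose theorems are merely true biconditionals can pair a sentence with the wrong condition, as in
   'Snow is white' is true if and only if snow is white and 2 + 2 = 4.
The biconditional is true, but it hardly specifies what 'Snow is white' means; and liar sentences make it hard to guarantee even true biconditionals.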

Meanings via Syntactic Structures (Syntactic Structures after 60 Years, edited by N. Hornstein et al., De Gruyter Mouton, 2017)
This short essay was prompted by teaching Syntactic Structures, in an undergraduate course, and paying attention to the (often ignored) remarks about meaning.

Semantic Internalism (The Cambridge Companion to Chomsky, edited by Jim McGilvray, CUP 2017)
This essay discusses some of Chomsky's views about meaning, contrasting them with some of Putnam's claims in "The Meaning of 'Meaning' ".

I-Languages and T-sentences (The Relevance of the Liar, edited by B. Armour-Garb, OUP 2017)
This paper, about the relevance of Liar Paradoxes for truth conditional semantics, and the paper below are companions. Bottom line for this one: sentences of a human language don't have truth conditions. No sentence of a human language is true. The previous sentence isn't true, and neither is this one. The sentence 'Snow is white' isn't true, and neither is the corresponding T-sentence: 'Snow is white' is true if and only if snow is white.

Framing Event Variables (Erkenntnis 80:31-60, 2015)
Slides for the talk, at a conference in Erfurt, can be found here. This paper is about the relevance of puzzles concerning event individuation for semantics. Bottom line: event analyses of 'Alvin chased Theodore' are good; truth-theoretic construals of such analyses are bad. Together with the paper above, and Meaning Before Truth listed below, the larger conclusion is that Davidsonian conceptions of meaning are in big trouble. Even bracketing concerns about specific constructions, and focusing on cases that are supposed to motivate truth conditional semantics, foundational problems quickly emerge if you focus on truth, predication, or reference.
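For readers who want the shape of an event analysis (a schematic rendering, not necessarily the notation used in the paper):
   ∃e[chase(e) ∧ Agent(e, Alvin) ∧ Patient(e, Theodore)]
The complaint is not with analyses of this shape, but with construing them as specifications of truth conditions, given puzzles about which events there are and how finely they are individuated.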

Concepts, Meanings, and Truth: First Nature, Second Nature, and Hard Work (Mind and Language 25: 247-78, 2010)
The idea is that lexical expressions of a human I-language let children use available concepts to introduce formally distinct "I-concepts," which can then be combined via operations invoked by phrasal syntax. So while "prelexical" concepts may not exhibit the kind of systematicity required for truth, I-concepts do. But various empirical considerations suggest that I-concepts are massively monadic, and that the relevant "I-operations" are fundamentally conjunctive. This, I claim, makes it implausible that I-concepts are true of language-independent things. Meanings can be viewed as instructions to assemble concepts that make it possible for humans to have truth-evaluable thoughts. But forming such concepts requires independent cognitive work, not just a language with a compositional semantics. This paper, which abstracts from the technical details, forms a pair with Minimal Semantic Instructions (listed under Compositional Semantics).

Minimalist Meaning, Internalist Interpretation (Biolinguistics 4: 317-41, 2008)
An attempt to link up some main themes in the work listed below, while also defending the idea of meanings as instructions to build concepts.

The Character of Natural Language Semantics (Epistemology of Language, edited by Alex Barber, OUP 2003).
This was my first attempt at arguing against the following idea: a compositional semantic theory for a natural language will have theorems that specify the truth-conditions of sentences (relative to contexts). Typically, sentences don't have truth-conditions, not even relative to (theoretically tractable) contexts. And typically, the truth-conditions of utterances are not compositionally determined. Truth conditions often depend on theoretically intractable aspects of conversational situations. So the meaning of a sentence isn't a function from contexts to truth-conditions. Linguistic meanings are, as Chomsky has long urged, more "internal" than many philosophers think.

Meaning Before Truth (Contextualism in Philosophy, edited by G. Preyer and G. Peter, OUP 2005).
This paper extends the line of thought in "The Character of Natural Language Semantics." A running theme is that Chomsky offers a conception of semantics that lets us preserve what is right about truth-conditional semantics--and this has less to do with truth than the usual rhetoric suggests--while also preserving late-Wittgensteinian/Austinian insights about the relation between truth, meaning, and context. There are three main sections: one about the relevance of negative facts (and nativism) for semantics, and why this tells against both "deflationary" conceptions of meaning and Quine-Davidson "interpretability" conceptions; one that reviews some familiar reasons for rejecting the hypothesis that names denote things in the environment; and one that concedes externalism about truth, while noting that externalism about linguistic meaning does not follow. The paper ends with a brief tour of some alternatives, and some familiar reasons for rejecting the hypothesis that predicates are satisfied by things in the environment. A handout elaborates this line of thought (in a handouty way).

Character Before Content (in Content and Modality: Themes from the Philosophy of Robert Stalnaker, edited by J. Thomson and A. Byrne, OUP, 2006).
This is another paper in the same family. The idea is that a Chomsky-style internalism about linguistic meaning is compatible with Stalnaker's view that the propositional contents of assertions are sets of possible worlds. Indeed, Stalnaker may offer all we need (and all we are likely to get) in terms of a substantive language-independent notion of truth-conditions. And partly for this reason, we should reject the idea that semantics is conventionalized pragmatics. The middle portion of the paper starts to develop a conception of linguistic meanings as "concept construction instructions." I am currently trying to develop this conception in a monograph, Semantics Without Truth Values (to appear, eventually, with OUP).

Intuitings, Intuiteds, and I-Languages
These are the PowerPoint slides for a talk I gave in Oslo, at the Center for the Study of Mind in Nature, for a conference on linguistic intuitions. This was, to put it mildly, work in progress. I hope to return to it at some point.

Compositional Semantics

Semantic Types: Two is Better than Too Many.
(New Frontiers in Artificial Intelligence, edited by M. Sakamoto et al., Springer LNCS/LNAI 12331, 2020. LENLS-16 conference proceedings.)
In this paper and the one below, I discuss the motivations and prospects for the very spare semantic typology employed in Conjoining Meanings.

Semantic Typology and Composition (The Science of Meaning, edited by B. Rabern and D. Ball).
It is often said that expressions of a human language include (i) truth-evaluable sentences of a basic semantic type <t>, (ii) entity designators of a basic semantic type <e>, and (iii) unsaturated expressions whose semantic types are characterized by the recursive principle "if <α> and <β> are types, so is <α, β>." I think this hypothesis is wrong in three respects. 
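For illustration, on familiar textbook implementations of this typology (not the sparer typology defended in Conjoining Meanings):
   <e,t>                 common nouns and intransitive verbs
   <e,<e,t>>             transitive verbs
   <<e,t>,<<e,t>,t>>     quantificational determiners like 'every'
The recursive principle generates endlessly many further types.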

Minimal Semantic Instructions (in the Oxford Handbook of Linguistic Minimalism, edited by Cedric Boeckx, 2011).
This is an attempt to work out, for a range of basic constructions, the idea of meanings as "instructions to assemble conjunctive concepts." This paper, mainly devoted to technical details and minimalist reasoning, forms a pair with Concepts, Meanings, and Truth: First Nature, Second Nature, and Hard Work (listed under Semantic Internalism). And with regard to the syntactic details, I draw on the paper below.

Basic Operations, coauthored with Norbert Hornstein (Catalan Journal of Linguistics 8: 113-39, 2009).
This programmatic paper offers a minimalist conception of syntax and semantics, with the broader aim of identifying the fundamental composition operations employed by the human faculty of language. With regard to syntax, the leading idea is that the operation MERGE can and should be decomposed into simpler operations of concatenation and labeling; where the latter may reflect what is distinctive about human language. With regard to semantics, the leading idea is that concatenation is an instruction to conjoin monadic concepts, while labels are vehicles for introducing thematic concepts in a constrained way.

Interrogatives, Instructions, and I-languages: an I-Semantics for Questions, coauthored with Terje Lohndal (Linguistic Analysis 37:459-510, 2011).
The basic idea is simple: an "instructionist" conception of meaning, along lines developed in the paper above, can easily accommodate an attractive internalist version of the old force/content distinction; and there are interesting implications for the syntax/semantics of relative clauses and "sentential" expressions. I never intended to have views about--much less write a paper about--interrogatives. But my co-author was persuasive.

Describing I-junction (In Language and Value, edited by J. Yi and E. Lepore, ProtoSociology 31: 121-137, 2014).
The meaning of a noun phrase like 'brown cow', or 'cow that ate grass', is somehow conjunctive. But conjunctive in what sense? Are the meanings of other phrases--e.g., 'ate quickly', 'ate grass', and 'at noon'--similarly conjunctive? I suggest a possible answer, in the context of a broader conception of natural language semantics.
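A rough way of depicting the conjunctive idea (schematic, and not the paper's final proposal):
   brown cow      →   λx . brown(x) ∧ cow(x)
   ate quickly    →   λe . ate(e) ∧ quick(e)
The question pressed in the paper is whether a single notion of conjunction covers such cases, including adjuncts like 'at noon'.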

Small Verbs, Complex Events: Analyticity without Synonymy
(in Chomsky and His Critics, edited [heroically] by Louise Antony and Norbert Hornstein, Blackwell 2003)
You may need to "Rotate View, Clockwise" to get the .pdf file to appear properly.
This paper was written in 1998, and so may be past its use-by date. Updated versions of various bits of the paper appear elsewhere; see note 1.
More Truth in Advertising: I'm not criticizing Chomsky; though I am being critical, and Chomsky does figure prominently.
The idea, as the subtitle suggests, is that there are analytic truths--even if the notion of synonymy is suspect. The trick involves (can you guess?) combining, in the right way, a neo-Davidsonian event semantics with a Minimalist syntax. Blatant Advertising: get hold of the entire book if only for Chomsky's replies; for anyone interested in Chomsky's conception of meaning (and his semantic internalism), see especially his replies to Egan, Rey, Ludlow, Horwich, and Pietroski.

On Explaining That (Journal of Philosophy 97: 655-662, 2000)
How can a speaker explain that P without explaining the fact that P, or explain the fact that P without explaining that P, even when it is true (and so a fact) that P? Or in formal mode: what is the semantic contribution of 'explain' such that 'She explained that P' can be true, while 'She explained the fact that P' is false (or vice versa), even when 'P' is true? The proposed answer is that 'explained' is a semantically monadic predicate, satisfied by events of explaining. But 'the fact that P' (a determiner phrase) and 'that P' (a complementizer phrase) get associated with different thematic roles, corresponding to the distinction between a thing explained and the content of a speech act.
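Schematically (my rendering of the proposal, not the paper's exact formalism):
   She explained that P            →   ∃e[explaining(e) ∧ Agent(e, her) ∧ Content(e, that-P)]
   She explained the fact that P   →   ∃e[explaining(e) ∧ Agent(e, her) ∧ Theme(e, the-fact-that-P)]
Since the complementizer phrase and the determiner phrase are linked to different thematic roles, the two reports can differ in truth value even when 'P' is true.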

The Undeflated Domain of Semantics (Sats: The Nordic Journal of Philosophy 1: 161-76, 2000).
A reply to Horwich-style "deflationary" conceptions of meaning, focussing on the importance of "negative" semantic facts.
An edited version of this paper appears in Reading Philosophy of Language, edited by Jennifer Hornsby and Guy Longworth.

Does Every Sentence Like This Exhibit A Scope Ambiguity? coauthored with Norbert Hornstein
(In Belief and Meaning, edited by W. Hinzen and H. Rott, Hansel-Hohenhausen 2002)
The answer is 'no'. Instances of 'every F likes some G' may not, after all, be examples of scope ambiguity.
Figuring out whether a given expression with multiple quantifiers is semantically ambiguous is hard.
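For orientation, the two readings usually said to be available for 'every F likes some G' are:
   surface scope:   ∀x[Fx → ∃y(Gy ∧ Likes(x,y))]
   inverse scope:   ∃y[Gy ∧ ∀x(Fx → Likes(x,y))]
The question is whether such sentences are genuinely ambiguous between these readings, or whether the appearance of a second reading arises in some other way.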

Quantification and Second-Order Monadicity (Philosophical Perspectives 17: 259-298, 2003).
The first part of this paper reviews some developments regarding the apparent mismatch between the logical and grammatical forms of quantificational constructions like 'Pat kicked every bottle'. I suggest that (even given quantifier-raising) many current theories still posit an undesirable mismatch. But all is well if we can treat determiners (words like 'every', 'no', and 'most') as second-order monadic predicates without treating them as predicates satisfied by ordered pairs of sets. Drawing on George Boolos's construal of second-order quantification as plural quantification, I argue that we can and should view determiners as predicates satisfied (plurally) by ordered pairs each of which associates an entity with a truth-value (t or f). The idea is that 'every' is satisfied by some pairs iff every one of them associates its entity with t. It turns out that this provides a kind of explanation for the "conservativity" of determiners. And it lets us say that concatenation signifies predicate-conjunction even in phrases like 'every bottle' and 'no brown dog'.
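A bit more explicitly (a gloss on the idea, not the paper's official clauses): for 'Pat kicked every bottle', the relevant pairs associate each bottle with t if Pat kicked it and with f otherwise; and
   'every' is satisfied by some pairs ⟨x, v⟩ iff each of those pairs is such that v = t.
Since the pairs are determined by the restrictor (the bottles), only the restricted domain matters, which is one way of seeing why determiners come out conservative.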

To Be a Value of a Plural Variable, You Don't Have to Be Plural (You Just Have to Be)
This is something between a handout and a paper. It focusses on an idea, acquired from George Boolos, discussed in the papers immediately above and below. For purposes of giving a compositional semantic theory for a natural language, we can and should allow for genuinely plural variables; where a genuinely plural variable is one that has more than one value relative to each assignment of values to variables.
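A quick illustration (mine, not the handout's): relative to a single assignment, a plural variable X can have the rocks--all of them--as its values, so that
   ∃X ∀y(y is one of the Xs ↔ Rock(y))
can be true without there being any one set-like entity that serves as the value of X.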

Induction and Comparison (Maryland Working Papers in Linguistics, 15: 157-90, 2006)
This speculative paper is an attempt to say why Frege's Theorem might bear, in interesting ways, on several issues in linguistics.

Function and Concatenation (in Logical Form, edited by G. Preyer and G. Peter, OUP 2002).
Explores the idea that concatenating natural language expressions corresponds to predicate-conjunction, as opposed to function-application. The proposal is developed in more detail in Events and Semantic Architecture (OUP 2005). But the paper gives the main idea, in the context of questions about how natural language syntax is related to Logical Form.

Interpreting Concatenation and Concatenates (Philosophical Issues 16:221-45, 2006).
This paper presents a slightly modified version of the compositional semantics proposed in Events and Semantic Architecture.
Some readers may find this shorter version, which ignores issues about vagueness and causal constructions, easier to digest. The emphasis is on the treatments of plurality and quantification, and I assume at least some familiarity with more standard approaches. Space constraints caused the final document to be considerably shorter than drafts with homophonous titles. Systematicity via Monadicity (listed below, under Lexicalization and Human Concepts) is a kind of companion piece, showing how to locate the proposed conception of semantic composition in the context of more general attempts to simplify (or "minimize") theories of linguistic competence, with the aim of isolating the distinctively human aspects of the human language faculty. There are points of contact with recent suggestions by Elizabeth Spelke and her colleagues; see also the BBS paper by Peter Carruthers, my colleague in philosophy at Maryland.

Comments on King
These are comments from a 2009 APA Pacific Session (in Vancouver), on Jeff King's book, The Nature and Structure of Content.
The number of requests I've had for the comments is a sure testament to the interest of Jeff's book.

Lexicalization and Human Concepts

Systematicity via Monadicity (Croatian Journal of Philosophy 7:343-374, 2007)
This is the written version of a conference presentation in Dubrovnik (Fall 2006). I argue that a "Conjunctivist" conception of semantic composition, of the sort articulated in some of the papers above, helps explain many otherwise puzzling features of natural language. More speculatively, a Conjunctivist language faculty might also help explain why human thought is as systematic as it is.

Semantic Monadicity with Conceptual Polyadicity (In the Oxford Handbook of Compositionality, M. Werning, W. Hinzen, and E. Machery, eds., 2012).
Another paper in the same vein. 

Language and Conceptual Reanalysis (In Towards a Biolinguistic Understanding of Grammar: Essays on Interfaces, edited by A. DiSciullo, John Benjamins 2012).
Like the paper above, but more detailed, and drawing some connections to Frege's notion of fruitful definitions.

Lexicalizing and Combining (In Verb Concepts: Cognitive Science Perspectives on Verb Representation and Processing, edited by R. de Almeida and C. Manouilidou, Springer 2014).
This is the written version of a 2008 conference presentation in Montreal, focusing on verb meanings. The 2012 papers, immediately above, were written in between.

Meaning and Numerosity

Here is a video of a 2014 talk in the Defining Cognitive Science series at Simon Fraser University. My thanks to my hosts, especially Endre Begby. In the talk, I discuss some of the findings reported in the papers below. There are also pictures of my collaborators.


Observers efficiently extract the min and max in perceptual magnitudes sets: evidence for a bipartite format.
Authors: Darko Odic, Tyler Knowlton, Alexis Wellwood, Paul Pietroski, Jeff Lidz, and Justin Halberda
To appear in Psychological Science.


Psycholinguistic evidence for restricted quantification.
Authors: Tyler Knowlton, Paul Pietroski, Alexander Williams, Justin Halberda, and Jeff Lidz.

Natural Language Semantics 31:219–251 (2023).


Individuals and Ensembles and each versus every: linguistic framing affects performance in a change detection task.
Authors: Tyler Knowlton, Justin Halberda, Paul Pietroski, and Jeff Lidz.

Glossa Psycholinguistics 2, http://dx.doi.org/10.5070/G6011181 (2023).

Linguistic Meanings As Cognitive Instructions.
Authors: Tyler Knowlton, Tim Hunter, Darko Odic, Alexis Wellwood, Jeff Lidz, Paul Pietroski, and Justin Halberda

Annals of the New York Academy of Sciences 1500:134-144 (2021).
The sentences Most of the dots are blue and More of the dots are blue are equivalent as descriptions of scenes in which all the non-blue dots are the same color (e.g., red). But across a range of experimental settings--involving picture-sentence matching, scene creation, memory for visual features, and accuracy on judgments of truth/falsity under time pressure--the more/most contrast has significant effects, in ways which suggest that the meanings of more and most are mental representations that provide (different) detailed instructions to conceptual systems.

The Mental Representation Of Universal Quantifiers.

Authors: Tyler Knowlton, Paul Pietroski, Justin Halberda, and Jeff Lidz
Linguistics and Philosophy 45: 911–941 (2022). https://doi.org/10.1007/s10988-021-09337-8.

The sentences Each of the dots is blue and Every one of the dots is blue and All of the dots are blue illustrate distinct ways of expressing universal generalizations. But do the meanings of the words for universal quantification differ? And if so, is the difference between first-order and second-order quantification relevant? Answers: yes and yes.
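To give the flavor of the contrast (a rough schema, not the specific proposal defended in the paper): a first-order representation quantifies over individual dots, while a second-order representation adverts to the dots via a property or group:
   first-order:    ∀x[Dot(x) → Blue(x)]
   second-order:   ∃X[∀y(Xy ↔ Dot(y)) ∧ ∀y(Xy → Blue(y))]
The experiments ask which kind of format better describes how speakers mentally represent 'each', 'every', and 'all'.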

Individuals and Non-Individuals in Cognition and Semantics: the Mass/Count Distinction and Quantity Representation.

Authors: Darko Odic, Paul Pietroski, Tim Hunter, Jeff Lidz, and Justin Halberda
Glossa 3:1-20 (2018).
The sentences Most of the dots are blue and Most of the paint is blue differ significantly: dots is a plural count noun, while paint is a "mass" noun. It can be tempting to think that count nouns are somehow more basic, semantically, and that mass nouns involve some extra layer of complexity. But we find evidence that pushes in the other direction.

Young Children's Understanding of 'more' and Discrimination of Number and Surface Area.
Authors: Darko Odic, Paul Pietroski, Tim Hunter, Jeff Lidz, and Justin Halberda
Journal of Experimental Psychology: Learning, Memory, and Cognition 39: 451-461 (2013).


Interface Transparency and the Psychosemantics of most: Jeff Lidz, Paul Pietroski, Tim Hunter, and Justin Halberda
Natural Language Semantics, 19: 227-56 (2011).
This paper extends the initial results obtained in the paper listed below. Here, we offer experimental evidence that adult speakers of English understand sentences like 'Most of the dots are blue' in a quite specific way that involves representing the cardinality of the blue dots, the cardinality of the dots, and subtracting the former from the latter--as opposed to, say, representing the cardinalities of the blue dots and the nonblue dots (as such). We also argue that this finding, together with independent studies of the visual system, provides some empirical support for a more general view about how meaningful expressions generated by the language faculty interface with other cognitive systems.
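In arithmetic terms, the finding is that speakers verify 'Most of the dots are blue' via a computation like
   #{dots that are blue} > #{dots} − #{dots that are blue}
rather than by directly estimating and comparing #{dots that are blue} and #{dots that are not blue}.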

The Meaning of 'Most': semantics, numerosity, and psychology: Paul Pietroski, Jeff Lidz, Justin Halberda, and Tim Hunter
(Mind and Language, 24:554-85, 2009). The title is descriptive. We offer experimental evidence in support of a certain view about how the meaning of the English determiner 'most' is related to various psychological capacities potentially relevant to human capacities for counting and quantifying. In this first installment of an ongoing project, we offer experimental evidence that adult speakers of English do indeed understand sentences like 'Most of the dots are blue' in terms of cardinality comparison (as opposed to, say, one-to-one correspondence). We also make some tentative suggestions about how the meaning of 'most' is related to potential verification procedures and the "analog magnitude system" that humans share with other animals.
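Roughly put (my gloss, not the paper's official statements), the contrast is between understanding 'Most of the dots are blue' as
   #{blue dots} > #{non-blue dots}
and understanding it via one-to-one correspondence--e.g., as the claim that the non-blue dots can be paired one-to-one with a proper subset of the blue dots. The evidence favors the cardinality-comparison format.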

Seeing What you Mean, Mostly.
Authors: Paul Pietroski, Jeff Lidz, Justin Halberda, Tim Hunter, and Darko Odic
(Syntax and Semantics: Experiments at the Interfaces, edited by J. Runner, 37:187-224, 2011).
Another paper in the same vein, stressing that while our proposal is not a form of verificationism, meanings are related to verification strategies in empirically testable ways--at least with regard to "logical" vocabulary.

Poverty of Stimulus Arguments

Vocabulary Matters. This short essay appeared in a 2015 MIT Working Papers in Linguistics volume (edited by Angel Gallego and Dennis Ott), celebrating the 50th anniversary of Aspects of the Theory of Syntax.

Poverty of the Stimulus Revisited, co-authored with Robert Berwick, Beracah Yankama, and Noam Chomsky. Cognitive Science 35: 1207-42 (2011).

The Language Faculty coauthored with Stephen Crain, in The Handbook for Philosophy of Cognitive Science (edited by E. Margolis, S. Laurence, and S. Stich, OUP 2011). An essay on the language faculty, in keeping with the papers below, but also discussing some new material.

Think of the Children (Australasian Journal of Philosophy 86:657-669, 2009). This was a critical notice of Michael Devitt's book, Ignorance of Language. Michael's reply, which you might want to look at, appeared in the same issue.

Brass Tacks in Linguistic Theory coauthored with Stephen Crain and Andrea Gualmini
(In The Innate Mind: structure and contents, edited by S. Laurence, P. Carruthers, and S. Stich, 175-197, Oxford University Press, 2005).
Yes, still arguing for innate constraints on linguistic meanings. Here, we discuss in more detail some of the individual phenomena addressed in other papers. And we're not replying to anyone in particular.

Innate Ideas coauthored with Stephen Crain, in The Cambridge Companion to Chomsky (edited by James McGilvray, 164-180, Cambridge Univ. Press 2005). You may need to "Rotate View, Clockwise" to get the .pdf file to appear properly.
A more general discussion of innateness and universal grammar, in the context of Chomsky's version of rationalism.
Some of the examples mentioned here are discussed in more detail in the other papers.

Why Language Acquisition is a Snap coauthored with Stephen Crain (Linguistic Review, 19: 163-83, 2002).
Presents additional empirical arguments for universal grammar in reply to a target article by Pullum and Scholz. The main arguments concern a cluster of semantic phenomena involving downward entailment, negative polarity, and the "pragmatic" implicature associated with disjunctive claims.
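To illustrate the kind of phenomena at issue (standard examples, not drawn from the paper): downward-entailing environments support inferences from sets to subsets, license negative polarity items like 'any', and give disjunction a conjunctive feel:
   Nobody ate kale or spinach   entails   Nobody ate kale (and nobody ate spinach)
   Nobody ate any kale   is fine, whereas   Somebody ate any kale   is not.
The paper argues that command of this cluster of facts provides evidence for universal grammar.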

Nature, Nurture, and Universal Grammar coauthored with Stephen Crain (Linguistics and Philosophy 24: 139-86, 2001).
Discusses the logic of "poverty of stimulus" arguments and some specific empirical premises, concerning both adults and children, in reply to recent empiricist conceptions of language acquisition--with particular focus on Cowie's book What's Within.

Twentieth Century Papers

Actions, Adjuncts, and Agency (Mind 107: 73-111, 1998)

Experiencing the Facts: critical notice of John McDowell's Mind and World (Canadian Journal of Philosophy 26: 613-36, 1996)

A Defense of Derangement (Canadian Journal of Philosophy 24: 95-118, 1994)

Prima Facie Obligations, Ceteris Paribus Laws in Moral Theory (Ethics 103: 489-515, 1993)

Intentionality and Teleological Error (Pacific Philosophical Quarterly 73: 267-82, 1992)