Bootcamps/Short course weekend

Saturday

This course is an accelerated introduction to applications of higher-order logic in linguistics and philosophy. We will focus in particular on problems connected to propositional attitudes such as belief. Roughly the first third of the course will introduce some standard higher-order logics and their model theory, presupposing nothing beyond a basic familiarity with first-order logic. We will then consider ways in which puzzles about propositional attitudes, such as Frege’s puzzle, Mates’s puzzle, and the problem of logical omniscience, might put pressure on standard logical principles governing identity, quantification, and the application of complex predicates.
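For instance, two such standard principles, stated schematically in notation of my choosing (not the course's), are substitution of identicals and beta-conversion for complex predicates:

```latex
% Two standard principles the attitude puzzles put pressure on
% (schematic statements, notation mine):
a = b \rightarrow (\varphi(a) \rightarrow \varphi(b))
  % substitution of identicals (Leibniz's law)
(\lambda x.\,\varphi(x))(a) \leftrightarrow \varphi(a)
  % application of complex predicates (beta-equivalence)
```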
Cian Dorr is Professor of Philosophy at New York University and Harvey Lederman is Assistant Professor of Philosophy at Princeton University.
This mini-course focuses on conversational topic shift, both gradual and sudden, introducing novel computational notions of coherence and relevance and examining topic shift in relation to global discourse structure. In everyday conversation, talk flows from topic to topic as speakers discuss one thing and then, somehow, something entirely different. Most commonly, topic shift occurs gradually, almost imperceptibly. However, shift may also be sudden, as when the questions under discussion in a next utterance share nothing with those of preceding talk (Sacks, 1972, in Jefferson, 1984). For a "next" utterance to be interpreted as "appropriate" in both cooperative and adversarial conversations, that utterance must be seen as "relevant" to the conversational context and, it is often argued, "coherent" with preceding talk as well. In this mini-course we will discuss how, while coherence as a condition on the choice of an appropriate utterance is not always respected, relevance always is.

To understand how the complementary phenomena of relevance and coherence contribute to orderly conversational interaction, a method will be introduced to compute "coherence" between utterances whether or not they follow one another sequentially in the surface structure of the unfolding conversation (Polanyi et al., 2004). Taken together, these computational methods will provide a basis for understanding gradual topic shift.

Cases in which coherence is disrupted but a "next" utterance that fails to cohere is still seen as appropriate are more complex, and require the introduction of an entirely new approach to relevance that departs from traditional theories (Sperber & Wilson, 1995). Conversational Relevance Evaluation (CRE), based on the notion "Closer to Me", explains why some next utterances, even those which may seem to have come "out of the blue", appear entirely appropriate, though surprising or unexpected, in a given context as defined by the time, place, and individuals involved, while other apparently similar newly introduced topics may lead to confused incomprehension. Early efforts to integrate coherence calculation with Conversational Relevance Evaluation will be sketched.
Livia Polanyi is Consulting Professor of Linguistics at Stanford University.
Tense morphemes are ubiquitous among the world's languages. Yet there are also many languages, from distinct language families, that do not have to mark tense overtly: they either do not have tense morphemes or the presumed tense morphemes are optional. The question arises: is tense universal? The answer, within formal semantics, has so far been "yes". We will challenge this view. This course will present an introduction to the semantics of tense, with a focus on universals and constrained variation among languages with overt tense morphemes. We will then discuss how semantic theories of tense have been extended to languages without overt tense morphemes. Approaches differ along two dimensions: how they accomplish reference to time intervals (e.g., via a syntactically represented covert pronoun or a purely semantic rule), and how they restrict the location of those time intervals (e.g., via covert lexical features or pragmatic constraints). Finally, we will discuss a different type of account altogether that does not rely on tense to derive temporal reference. Instead, evaluation time shift, a mechanism independently attested in the narrative present in languages with tense, is more widely used for encoding temporal meaning in the absence of tense. We will illustrate this account for Paraguayan Guarani and Cantonese (based on joint work with Maria Luisa Zubizarreta and Tommy Tsz-Ming Lee), identifying empirical advantages over accounts that employ tense. The broader consequence is an enriched typology of temporal systems. Most notably, tense is revealed not to be a linguistic universal.
Roumyana Pancheva is Professor of Linguistics & Slavic Languages & Literatures at the University of Southern California.

Sunday

This course will focus on propositional quantifiers in the context of modal logics, where they are especially useful. For example, in the context of a doxastic interpretation of modal logic, they allow us to make generalizations about what is and is not believed by an agent. With this, we can state that everything the agent believes is the case, that the agent believes that they believe something false, or that everything believed by one agent is believed by a second agent. Standard possible world models for modal logics can be extended straightforwardly to propositional quantifiers, by letting these quantifiers range over arbitrary sets of worlds. However, in many cases, this straightforward model theory leads to logics which are not recursively axiomatizable. In addition to these simple models, we will therefore consider a range of alternative models, including models based on complete Boolean algebras, and possible worlds models in which propositional quantifiers range over a restricted domain of sets of worlds. The aim of the course is to show the usefulness of propositional quantifiers in modal logics using examples, to provide a systematic overview of the work that has been done in this field, and to highlight some of the many interesting questions which remain open.
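For illustration, the three generalizations just mentioned might be formalized in a propositionally quantified doxastic language as follows (notation mine, not the course's):

```latex
% Illustrative formalizations: B = the agent believes; B_1, B_2 = belief
% operators for two agents; p ranges over propositions.
\forall p\,(B p \rightarrow p)
  % everything the agent believes is the case
B\,\exists p\,(B p \wedge \neg p)
  % the agent believes that they believe something false
\forall p\,(B_1 p \rightarrow B_2 p)
  % everything believed by agent 1 is believed by agent 2
```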
For course-related materials, click here.
Peter Fritz is Professor of Philosophy at the Dianoia Institute of Philosophy at the Australian Catholic University.
In the past 5-10 years a geometric form of semantic representation, word vectors, has taken computational linguistics by storm. Mainstream linguistic semantics, Montague Grammar and its lineal descendants, has remained largely unreceptive to representing word and sentence meaning in finite-dimensional Euclidean space—the five-volume Wiley Blackwell Companion to Semantics (2021) does not even mention the idea. At the same time, major database collection efforts such as the Google and Microsoft knowledge graphs have amassed hundreds of billions of facts about the world. These efforts, relying on simple algebraic meaning representation methods using labeled graphs or relational triples, have also remained largely under the radar of logic-based formal semantics even though semantic search (information retrieval), information extraction, and the increasingly effective Semantic Web are all powered by a combination of the geometric and algebraic methods. This one-day short course will investigate the similarities and differences between the formula-based mainstream, the geometric, and the algebraic approaches. The focus will be on explaining the vector-based and graph-based approaches to people already familiar with logical semantics. We will describe some of the novel insights these approaches bring to such traditional concerns of linguistic semantics as meaning postulates, generics, temporal and spatial models, indexicals, lexical categorization, the meaning of bound morphemes, deep cases, negation, and implicature.
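To give readers from logical semantics a feel for the contrast, here is a toy sketch of the two styles; the vectors, words, and triples are all invented for illustration:

```python
# Toy illustration of the geometric and algebraic styles (data invented).
import math

# Geometric: word meanings as points in Euclidean space; similarity as cosine.
vec = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

print(cosine(vec["king"], vec["queen"]))  # high: similar meanings
print(cosine(vec["king"], vec["apple"]))  # low: dissimilar meanings

# Algebraic: facts as labeled (subject, relation, object) triples, as in a
# knowledge graph; interpretation here is graph lookup and traversal.
triples = {("Budapest", "capital_of", "Hungary"),
           ("Paris", "capital_of", "France")}

def subjects(rel, obj):
    return {s for (s, r, o) in triples if r == rel and o == obj}

print(subjects("capital_of", "Hungary"))  # {'Budapest'}
```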
For course-related materials, click here.
Andras Kornai is Professor at the Budapest Institute of Technology, and Senior Scientific Advisor at the Computer and Automation Research Institute of the Hungarian Academy of Sciences.
Finite-state machines are widely used in text and speech processing, particularly as probabilistic models of string-to-string transductions. One major advantage of these finite-state models is that, unlike neural sequence models, finite-state machines can be combined using set-theoretic operations such as union and intersection, optimized using determinization and minimization, cascaded via composition, and searched using shortest-path algorithms, all in polynomial time. In this tutorial talk, I provide an introduction to finite-state text processing methods and software. I first provide a formal introduction to finite acceptors, which model sets of strings; finite transducers, representing relations between sets of strings; and weighted finite transducers, which represent weighted (e.g., probabilistic) relations between sets of strings. I then describe finite-state algorithms for constructing, optimizing, and searching transducers. I then introduce Pynini, a Python-based finite-state grammar library built on the OpenFst toolkit, and compare and contrast its features with several other existing finite-state tools. Finally, I walk through several worked Pynini examples for spelling correction, pronunciation modeling (i.e., "g2p"), morphological analysis, and fuzzy string matching.
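To give a flavor of the Pynini style of the tutorial, here is a minimal sketch; the toy "ph"-to-"f" respelling rule is my invention, not one of the talk's worked examples:

```python
# A minimal Pynini sketch (pip install pynini); toy rule invented for
# illustration.
import pynini

# Identity map over lowercase letters, each carrying cost 1 on the default
# tropical semiring.
letters = "abcdefghijklmnopqrstuvwxyz"
sigma = pynini.union(*(pynini.accep(c, weight=1) for c in letters))

# A weighted relation: rewrite "ph" as "f" for free, or copy a letter at
# cost 1; closure() lets the choice repeat across the whole string.
rule = pynini.union(pynini.cross("ph", "f"), sigma).closure()

# Composition (@) cascades the input acceptor with the transducer;
# shortestpath searches the resulting lattice for the cheapest output.
lattice = pynini.accep("photograph") @ rule
print(pynini.shortestpath(lattice).string())  # -> "fotograf"
```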
Course materials can be found here.
Kyle Gorman is Assistant Professor of Linguistics at the Graduate Center, City University of New York.

5-day courses/workshops

Group E

The course will introduce two main topics covered in the instructor’s 2018 MIT Press textbook Phonology: A formal introduction.

1. Segments, strings and rules: We first develop a simple syntax for phonological computation in which rules are functions mapping strings of segments to strings of segments. We illustrate the logic of phonological neutralization in terms of modus tollendo ponens and reductio ad absurdum. We provide a semantics for the rules. Then we show how the syntax can be enhanced to express rules deleting and inserting segments, and how that complicates the semantics. We then show how our basic reasoning gets obscured when we treat a phonological system as a composed function of several rules.

2. Segments as sets of features and unification logic: We next explore the implications of viewing segments as sets of features and defining natural classes as sets of sets of features. Using this basic idea, we illustrate the empirical motivation for standard phonological notions like underspecification, feature-filling and feature-changing rules. We show how a simple version of a unification operator can solve problems that arise in other approaches. (A toy sketch of both topics appears after this list.)

We will use self-grading HW problems built on toy languages to help students learn the material.
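Here is a toy Python sketch of both topics, my illustration on an invented mini-language rather than material from the textbook: rules as string-to-string functions, composition obscuring each rule's contribution, and feature-filling unification over segments as feature sets.

```python
# Toy sketch on an invented mini-language (illustration only).

# 1. Rules as functions from strings of segments to strings of segments.
def intervocalic_voicing(s):
    vowels, voiced = set("aeiou"), {"p": "b", "t": "d", "k": "g"}
    out = list(s)
    for i in range(1, len(s) - 1):
        if s[i - 1] in vowels and s[i + 1] in vowels:
            out[i] = voiced.get(s[i], s[i])
    return "".join(out)

def final_devoicing(s):
    devoiced = {"b": "p", "d": "t", "g": "k"}
    return s[:-1] + devoiced.get(s[-1], s[-1]) if s else s

# Composing the rules into one function obscures each rule's contribution:
print(final_devoicing(intervocalic_voicing("patad")))  # -> "padat"

# 2. Segments as sets of features; rule application via unification.
def unify(segment, features):
    """Feature-filling unification: returns None on a value clash."""
    out = dict(segment)
    for f, v in features.items():
        if f in out and out[f] != v:
            return None  # a feature-changing rule would overwrite instead
        out[f] = v
    return out

t = {"cons": True, "voice": False}
print(unify(t, {"voice": True}))   # None: clash, so filling fails
print(unify(t, {"nasal": False}))  # fills in the unspecified feature
```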
For course-related materials, click here.
Charles Reiss is Professor of Linguistics and Founding Member of the Concordia Centre for Cognitive Science.
Because of its tight coupling of syntax and semantics, Combinatory Categorial Grammar (CCG) has become widely adopted in computational linguistics and natural language processing (NLP), particularly for applications in which semantics plays an important role, such as question answering, textual entailment, inference, and induction of semantic parsers from data consisting of paired sentences and logical forms. It has also been applied to modeling language acquisition from child-directed utterances. The course seeks to reexamine the significance of CCG as a linguistic theory of grammar. Despite adhering to strictly orthodox linguistic principles, CCG is not widely understood within mainstream linguistics. The reason may be that CCG is a revolutionary theory, seeming to require modification of long-held beliefs concerning the reality of surface syntactic structure as a representational level, and even the nature of grammatical constituency itself, in which all that was linguistically solid melts into air, and logical form is the only non-phonological representational level (Steedman, 2000, 2019). It is widely acknowledged that there is currently something of a crisis in theoretical linguistics. The ancillary disciplines of computational linguistics and psycholinguistics, and indeed some influential currents within mainstream linguistics itself, seem to have entirely given up on the idea that formal linguistic theory has anything to tell us about the use of language. It therefore seems timely to look more closely at CCG across a number of languages in comparison with other approaches that have recently been developed, including those within the Minimalist Program, and to propose a synthesis that preserves the linguistic insights of all in a form that can reconnect with a broad range of disciplines concerned with actual performance.
For course-related materials, click here.
Mark Steedman is Professor of Cognitive Science in the School of Informatics at the University of Edinburgh.
Natural languages are riddled with context-sensitivity. One and the same string of words can express many different meanings on different occasions of use, and yet we understand one another effortlessly, on the fly. How do we do so? What fixes the meaning of context-sensitive expressions, and how are we able to recover that meaning so effortlessly? Everyone agrees that what we can communicate is to some extent constrained by grammar, but most authors believe that the role of grammar is limited, and that resolution of context-sensitivity largely relies on extra-linguistic cues: speakers' intentions and/or other non-linguistic features of the utterance situation. Interpretation thus depends on general reasoning about speaker intentions and other non-linguistic cues. The idea that context-sensitivity resolution depends to a significant extent on extra-linguistic information is widely assumed in theorizing about the nature of content, context, and context-content interaction, both in linguistics and in philosophy of language. It is also relied upon in the literature on contextualism about various philosophically laden terms (e.g., "know", "good"). In this course, we shall critically examine these assumptions and their significance for formal models of context and the dynamics of context change. We shall explore their role in various arguments for some radical and surprising conclusions about the nature of content (e.g., arguments for the non-propositionality of content expressed by, e.g., modal discourse), the dynamics of context, and the logical properties of natural language discourse (e.g., the apparent violations of various classical patterns of inference in the presence of modal vocabulary).
Una Stojnic is Assistant Professor in the Department of Philosophy at Princeton University.
The question that runs through all of modern epistemology is how to demarcate what we can know from what we cannot. This question has received intense scrutiny not only in philosophy, but also in mathematical statistics, where a number of results have substantially advanced our understanding of the inherent limits of knowledge. This course will introduce students to some of the most astounding theorems of 20th-century statistics, many of which are not widely known among philosophers. In each session of the course, we will focus on a single inference problem and prove both positive results about the questions we can settle given enough data, and negative results about the questions we have to leave unanswered. Each of these results provides important and often surprising insights into the conditions of possibility of acquiring knowledge. Students will receive intuitive introductions followed by rigorous proofs, with an emphasis on depth over breadth.
Mathias Winther Madsen is a research engineer at Micropsi Industries GmbH.
We regret to inform you that this class has been cancelled due to COVID. For those interested, these are the materials for the ESSLLI edition of the course that Mathias taught last year (and here are some background readings).

Group P

This course focuses on quantification and measurement of events, from both a theoretical and a cross-linguistic perspective. Quantification over individuals is a familiar phenomenon, but can there also be quantification over events? Distributivity markers like English each, Korean -ssik, German je, reduplicated numerals in Telugu and Tlingit, and French Sign Language /alt/ and /rep/ motivate a 'yes' answer: With quantification over events, we can capture some of their empirical characteristics. The same kinds of tools are useful in accounting for pluractionality markers like the one in Seri, and, arguably, the English construction piece by piece, day by day, etc. Alongside quantification comes measuring. Individuals can be measured along various dimensions; can events be, too? Sure: just as there can be 2kg of gold, there can be 2 hours of running. In some cases, it is not easy to tell whether a given phenomenon involves quantification or measuring. Expressions like English per, as in 250 students register for this class per semester, have been analyzed as distributivity markers (hence, as quantificational), but they have also been analyzed as measuring events in terms of a ratio. Which analysis is right? This is the type of question that students will be in a position to address after taking this course.
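As a rough, purely illustrative rendering of the ratio ("measuring") analysis just mentioned (the notation is mine, not the course's):

```latex
% One way the ratio analysis of "per" might be rendered (illustrative only):
% the sentence is true at a reference interval t iff the count of students
% who register within t, divided by the length of t in semesters, is 250.
\frac{\big|\{x : \exists e\,[\mathit{register}(e) \wedge \mathit{agent}(e)=x
      \wedge \tau(e) \subseteq t]\}\big|}{\mu_{\mathit{semester}}(t)} = 250
```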
For course-related materials, click here.
Elizabeth Coppock is Assistant Professor in the Department of Philosophy at Boston University.
Our goal in this class is to explore the connections between modality, evidentiality, and future reference by bringing together often disconnected strands of research from philosophy and linguistics, focusing especially on recent work in philosophy of language, formal semantics and formal pragmatics. Modal displacement—the ability to talk about how things could or must be—is a fundamental property of human language, and there is a host of approaches to the semantics, pragmatics and epistemology of modal claims. However, what constitutes modality is still an open question, both empirically and conceptually. We will address it by taking a close look at two phenomena that have been argued to be of modal nature: (1) evidentiality, a category that deals with an information source for an utterance, and (2) future reference and associated categories that deal with events that are yet to happen. We will discuss the distinction between direct and indirect evidence and how such distinctions are reflected in language, in particular, evidential restrictions on modal claims and evidential constraints on future-directed discourse. The class is structured as follows. Day 1 is a primer on mainstream theories of modality. Day 2 covers a variety of puzzles about the nature of evidence, modality and assertion. Day 3 is about evidence in language and assertions with evidentials. Day 4 is entirely devoted to the future. Day 5 talks about the Acquaintance Inference, a phenomenon whereby we call something "tasty" only if we have tried it, and the conditions under which this inference goes away.
Fabrizio Cariani is Associate Professor in the Department of Philosophy at the University of Maryland, College Park and Natasha Korotkova is Postdoctoral Fellow at the Linguistics Department at the University of Konstanz.
This introductory course covers major theoretical frameworks and state-of-the-art experimental investigations into scalar implicatures. After an introductory session on structural theories of implicature (e.g., Geurts, 2010; Chierchia et al., 2012), we discuss theoretical and experimental work on the role of alternatives, focusing on adjectival Horn scales (e.g., Gotzner et al., 2018; Alexandropoulou & Gotzner, 2021). Two further sessions will cover implicatures triggered by sentences with multiple quantifiers. Starting from grammatical approaches, we move towards game-theoretic models (Franke, 2009; Benz, 2011) and recent suggestions of integrating exhaustivity operators within the Rational Speech Acts model (Bergen & Franke, 2020). In our final session, we introduce the interactive best response paradigm, a new method for testing implicatures in controlled dialogue experiments (Gotzner & Benz, 2018; Benz & Gotzner, 2020).
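For orientation, the sketch below implements the vanilla Rational Speech Acts recursion for the some/all scale; this is the textbook RSA model rather than the specific Bergen & Franke (2020) proposal, and the rationality parameter is invented:

```python
# Vanilla RSA deriving the scalar implicature "some" -> "not all".
import numpy as np

worlds = ["some-but-not-all", "all"]
utterances = ["some", "all"]

# Literal semantics: rows = utterances, columns = worlds.
truth = np.array([[1.0, 1.0],   # "some" is true in both worlds
                  [0.0, 1.0]])  # "all" is true only in the all-world

alpha = 4.0  # speaker rationality (higher = closer to utility maximization)

# Literal listener: P(w | u), truth-conditional with a uniform prior.
L0 = truth / truth.sum(axis=1, keepdims=True)

# Pragmatic speaker: P(u | w), soft-max of log L0 (utterance costs omitted).
with np.errstate(divide="ignore"):
    utility = alpha * np.log(L0)
S1 = np.exp(utility)
S1 = S1 / S1.sum(axis=0, keepdims=True)

# Pragmatic listener: P(w | u), proportional to S1 with a uniform prior.
L1 = S1 / S1.sum(axis=1, keepdims=True)

print(dict(zip(worlds, L1[0])))  # "some" now favors some-but-not-all (~0.94)
```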
Anton Benz is Senior Researcher in the Research Area 4 'Semantics & Pragmatics' at the Leibniz-Centre General Linguistics (ZAS) and Nicole Gotzner is Director of the SPA Lab at the University of Potsdam.
The workshop on Subjectivity and Semantic Interpretation will comprise a series of talks presenting perspectives on whether and how subjectivity—sensitivity to the perspectives, opinions, or tastes of particular individuals or groups—is involved in semantic interpretation, addressing core questions such as: Where does subjectivity sit relative to the division between semantics and pragmatics? What sorts of linguistic forms are sensitive to subjectivity, and what is it about them that makes them subjective? The presentations will create dialogue between formal theoretical, psycholinguistic, and computationally oriented perspectives on the topic, incorporating traditional, experimental, and corpus data sources. The workshop is organized by Elsi Kaiser and Deniz Rudin; the featured speakers are:

  • Pranav Anand
  • Jesse Harris
  • Chris Kennedy
  • Natasha Korotkova
  • Kyle Mahowald
  • Greg Scontras
  • Isidora Stojanovic
  • Malte Willer
For course-related materials, click here.
Elsi Kaiser is Associate Professor in the Department of Linguistics at the University of Southern California and Deniz Rudin is Assistant Professor in the Department of Linguistics at the University of Southern California.

Group S

This course will provide an introduction to Python, with applications to logic and semantics, especially from game-theoretic, lambda-calculus, and neural perspectives. No previous programming experience will be assumed. The course will start with an introduction to the basic constructs of procedural programming: data structures, loops, and control flow. We will apply these concepts to backward induction in game semantics, introducing dynamic programming algorithms along the way. Functional programming will then be introduced, and applications to lambda-based interpretation will be discussed. If time permits, there will be a short introduction to object-oriented programming and to the close relationship between logic and neural networks.
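As a flavor of the backward-induction unit, here is a minimal sketch of the sort of program students might write; the toy game tree is invented:

```python
# Backward induction on a toy perfect-information game tree (tree invented).
# In game-theoretic semantics, "max" can be read as the verifier choosing a
# disjunct/witness and "min" as the falsifier choosing a conjunct.

tree = ("max", [
    ("min", [3, 12]),
    ("min", [8, 2]),
])

def backward_induction(node):
    """Recursively compute the value of a finite zero-sum game."""
    if isinstance(node, (int, float)):  # leaf: a payoff
        return node
    player, children = node
    values = [backward_induction(child) for child in children]
    return max(values) if player == "max" else min(values)

print(backward_induction(tree))  # -> 3: max prefers the left subgame
```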
Khalil Iskarous is Associate Professor of Linguistics at the University of Southern California.
After giving an introduction to generalized quantifier theory, this course surveys a number of approaches to learning such quantifiers. We will look at attempts from formal language theory, from developmental psychology, and from contemporary machine learning. Each approach will be assessed through the lens of explaining semantic universals: why do natural languages only express certain types of generalized quantifiers? Students will be exposed to the application of mathematical and computational methods to natural language semantics in order to explain the fundamental properties of meanings cross-linguistically.
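For concreteness, generalized quantifiers can be modeled as relations between sets, and candidate universals such as monotonicity can be checked mechanically; the definitions below are standard, while the test case is invented:

```python
# Generalized quantifiers as relations between sets (standard definitions);
# the monotonicity check is my own illustration.

def every(A, B):
    return A <= B                   # all As are Bs

def some(A, B):
    return bool(A & B)              # at least one A is a B

def most(A, B):
    return len(A & B) > len(A - B)  # more As are Bs than are not

# A candidate semantic universal: these quantifiers are upward monotone in
# their second argument, so truth is preserved when B grows.
A, B = {1, 2, 3}, {1, 2, 3, 4}
B_bigger = B | {5, 6}
for q in (every, some, most):
    assert not q(A, B) or q(A, B_bigger)
    print(q.__name__, q(A, B))
```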
For course-related materials, click here.
Jakub Szymanik is Associate Professor in the Institute for Logic, Language and Computation at the University of Amsterdam and Shane Steinert-Threlkeld is Assistant Professor in Linguistics at the University of Washington.
This course provides an introduction to what is involved in actually implementing, in a computational sense, a system of compositional semantics of the sort commonly assumed in theoretical linguistics and philosophy (see e.g. Szabó 2017). The target audience is students who have introductory-level programming experience and basic exposure to linguistic, logical, or computational semantics in some form; it is an introductory course that does not assume deep background knowledge in either area.
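To give a flavor of what such an implementation involves, here is a minimal sketch of interpretation by function application over a toy fragment (my own construction, not the system developed in the course):

```python
# Compositional interpretation by function application over a toy fragment.

domain = {"alice", "bob", "carol"}
sleepers = {"alice", "carol"}

# Lexical meanings: individuals are type e; predicates type <e,t>;
# the determiner "every" is type <<e,t>,<<e,t>,t>>.
lex = {
    "alice": "alice",
    "sleeps": lambda x: x in sleepers,
    "student": lambda x: x in {"alice", "bob"},
    "every": lambda p: lambda q: all(q(x) for x in domain if p(x)),
}

def apply(f, a):
    """Composition rule: forward application, falling back to backward."""
    try:
        return f(a)
    except TypeError:
        return a(f)

np_ = apply(lex["every"], lex["student"])   # [[every student]]
print(apply(np_, lex["sleeps"]))            # every student sleeps -> False (bob)
print(apply(lex["sleeps"], lex["alice"]))   # alice sleeps -> True
```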
For course-related materials, click here.
Kyle Rawlins is Associate Professor and Director of Graduate Studies in the Cognitive Science Department at Johns Hopkins University.
This course will develop the idea that semantics is the study of one component of a modular language system. More specifically: semanticists are in the business of reverse engineering the proprietary database of a system that bridges the gap between syntax and sentence meaning in language perception and production. This system has severely limited access to our belief system and other cognitive systems, and its inner workings and proprietary database are mostly off limits to the rest of cognition.

Day 1: Semantics and modularity
What is a modular input-output system? Why should we think that semantics is the study of a component of one? We will consider the evidence. A theme will be that the modularity of semantics best explains the ways in which semantics has been successful as a research program.

Day 2: Context sensitivity
If semantics is modular, then the part of your mind that computes meanings doesn’t have access to information about extralinguistic context. We will look at some strategies for adjusting formal-semantic theories to account for this.

Day 3: Polysemy, word meanings, and concepts
Most open-class vocabulary is polysemous. If semantics is modular, then it lacks access to the contextual information needed to choose senses for these expressions on particular occasions. How, then, should we formally model the meanings of polysemous expressions? We will consider some options. A theme will be that the relationship between word meanings and concepts is messier than normally assumed.

Day 4: Verbal working memory
If the language system is an input–output system, does that mean we only use it for communication? No! Here we consider its use for short-term information storage, tying in to the substantial cognitive-scientific literature on verbal working memory. We will also consider how this model can explain some of the ways in which language influences thought.

Day 5: Designing speech and thinking in language
We consider two puzzles for the modular view. First, we seem able to micromanage language production in ways that conflict with the modular theory. Second, we sometimes seem to use language to think. Drawing on the fruits of Day 4, I will develop an explanation of how we do both by using our language systems for sub-vocal rehearsal.
For course-related materials, click here.
Daniel Harris is Assistant Professor of Philosophy at Hunter College, CUNY.

Group N

This practical Advanced Course aims to introduce students with computational linguistics backgrounds to incremental language processing for Spoken Dialogue Systems (SDS). Students will be shown the benefits of incrementality for improving the speed, naturalness, and fluidity of conversing with machines. Concretely, we will be looking at SDSs where processing information from user speech on a word-by-word basis is crucial. The course will cover how to deal with various natural, incremental phenomena in dialogue—such as spoken disfluencies, utterance continuations and interruptions—which standard dialogue systems cannot deal with, using incremental, semantically driven natural language understanding and generation models. Each session is divided into a lecture and a practical. During the practicals, students work gradually towards building their own fully incremental SDS in a small domain, using the technical tools and API that we will provide. Our aim is that by the end of the course, students will appreciate the multi-faceted complexity of real-time language processing in dialogue.
For course-related materials, click here.
Julian Hough is Lecturer at the School of Electronic Engineering and Computer Science at Queen Mary University of London and Arash Eshghi is Assistant Professor and member of the Interaction Lab at the Department of Computer Science, Heriot-Watt University.
Dialogue structure models how individual contributions in a dialogue (such as autonomous communications and actions of dialogue participants) relate to each other and compose larger units such as conversations or segments, or serve to manage common resources such as the conversation floor, initiative, and common ground. Most computational and empirical corpus studies of dialogue have focused on the two-party (dyadic) situation.

In this course we will examine aspects of dialogue structure that either emerge only in conversational contexts with more participants, or for which the nature of structuring is qualitatively or quantitatively different with more participants. "Multiparty dialogue" involves more than two participants, so there is more than a single speaker and addressee swapping roles with every turn, and not every non-speaking listener is an addressee. Sometimes one can model aspects of multiparty interactions as a set of dyadic conversations among each pair of relevant participants, but one must still explain how these individual conversations relate to each other. On the other hand, some dialogue phenomena are not easily modelled in this way, and some emergent structural phenomena exist that are not regularly seen in dyadic conversation. Multi-floor dialogue has some elements of multiple conversations (different sets of participants, distinct floor resources), but also some elements of multiparty conversation (more than two participants, at least some topics and goals in common, and some information flowing to all, across floors). We will examine taxonomic and computational approaches to modelling several kinds of multi-floor dialogues, including small group conversations, multi-floor teams, chatrooms, and message boards, and how they relate to and differ from dyadic conversation, attending to issues such as turn-taking, initiative, grounding, dialogue relations, intentional structure, and conversational thread disentanglement.
For course-related materials, click here.
David Traum is Research Professor in the Computer Science Department at USC and Director for Natural Language Research at USC’s Institute for Creative Technologies.
What are the desiderata for a theory of dialogue? The course will present two desiderata, one rather classical in its emphases, the other relating to more recent developments. The first desideratum can be related to the classical version of the Turing test—model/simulate the ability of an adult agent to participate in a conversation. Even a restricted version of the test (the ability to simulate the range of possible responses to a question) is a significant challenge for all current theories of dialogue. In the first part of the course we will present a framework that makes one of the most detailed attempts at meeting Turing's challenge: the KoS framework, which is formally underpinned by a Type Theory with Records. KoS synthesizes speech act theory, Wittgensteinian language games, formal semantics, and conversational analysis to yield a detailed theory of dialogical relevance from the micro-level (self-repairs, interjections) to the macro-level (the structure of complete conversations). This provides a theory of context that can underpin the analysis of a variety of dialogical phenomena such as non-sentential utterances, repair, and quotation. However, it also provides a new perspective on well-trodden phenomena such as quantification and compositionality. We then introduce a more challenging desideratum: dialogue across the lifespan—from interaction with infants at different developmental stages to the effects aging has on interaction. To address this desideratum, we will show how non-verbal social signals such as crying, laughter, and smiling, which are present among both non-human primates and infants and develop into highly complex behaviours among adults, can be integrated into KoS. This requires us to integrate multimodality into the framework and to consider manual and head gestures. We will consider how to model the earliest grammars among children, where semantic complexity is achieved by exploiting both visual and interactive context. We will conclude with a discussion of aging and how it requires us to confront forgetting, an aspect missing from contemporary work on dialogue.
For course-related materials, click here.
Jonathan Ginzburg is Professor of Linguistics at the Laboratoire de Linguistique Formelle-CNRS at the Université Paris-Diderot (Paris 7) and Andy Lücking is Postdoctoral Research Fellow at the Laboratoire de Linguistique Formelle (LLF) at the Université Paris-Diderot (Paris 7).
This course provides an introduction to the construction of annotated linguistic corpora to serve the dual purposes of theoretical linguistic analysis and machine learning for NLP. This is done via a detailed exploration of the design and early construction of the Brandeis-Simmons Corpus of English VP (Verb Phrase) Ellipsis: the first syntactically annotated ellipsis corpus primarily containing transcriptions of naturally occurring spoken dialogue, as opposed to constructed text from newswire, journalistic essays, or fiction.
For course-related materials, click here.
Lotus Goldberg is Professor of Linguistics at Brandeis University and Amber Stubbs is Associate Professor of Computer Science at Simmons University.

Group K

This course surveys recent work at the intersection of traditional epistemology, Bayesian epistemology, epistemic logic, belief revision theory, and non-monotonic reasoning. The common thread is the idea that some possibilities are more normal, or more plausible, than others, and these differences in normality/plausibility determine what we can know and rationally believe. We will begin by surveying a number of influential cases at the intersection of traditional epistemology and epistemic logic, and showing how different formal models in the literature can be subsumed within a normality-based approach. We will then turn to topics including: connections between normality/plausibility and probability; the context-dependence of knowledge and belief; general principles in epistemic and doxastic logic; and normality-based approaches to belief revision, dynamic epistemic logic, and non-monotonic reasoning. Throughout we will focus on applications of the framework to concrete test cases. No previous familiarity with epistemic logic is presupposed.
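To fix ideas, here is one schematic way a normality-based clause for knowledge might look (an illustrative rendering of the general approach, not necessarily the course's official definition):

```latex
% Schematic normality-based clause for knowledge (illustrative rendering).
% E_w = worlds compatible with the agent's evidence at w;
% \succeq = "at least as normal as"; n_w = a contextually fixed threshold.
w \vDash K\varphi
  \quad\text{iff}\quad
  \forall v \in E_w\,\big(v \succeq n_w \ \rightarrow\ v \vDash \varphi\big)
```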
For course-related materials, click here.
Jeremy Goodman is Associate Professor in the Department of Philosophy at USC and Bernhard Salow is Associate Professor at the Oxford Philosophy Faculty, and Tutorial Fellow at Magdalen College.
This course is an introduction to topology and an exploration of some of its applications in epistemic logic. A passing familiarity with modal logic will be helpful, but is not essential; no background in topology is assumed. We'll begin by motivating and defining standard relational structure semantics for epistemic logic, and highlighting some classic correspondences between formulas in the language and properties of the structures. Next we'll introduce the notion of a topological space using a variety of metaphors and intuitions, and define topological semantics for the basic modal language. We'll examine the relationship between topological and relational semantics, establish the foundational result that S4 is “the logic of space” (i.e., sound and complete with respect to the class of all topological spaces), and discuss richer epistemic systems in which topology can be used to capture the distinction between the known and the knowable. Roughly speaking, the spatial notion of “nearness” can be co-opted as a means of representing uncertainty. This lays the groundwork to explore some more recent innovations in this area, such as topological models for evidence and justification, information update, and applications to the dynamics of program execution.
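For orientation, here is the standard topological clause for the modal box that the course will motivate in detail:

```latex
% Standard topological semantics for the basic modal language: a model is
% M = (X, \tau, V), with \tau a topology on X and V a valuation.
x \vDash \Box\varphi
  \quad\text{iff}\quad
  \exists U \in \tau\,\big(x \in U \ \wedge\ \forall y \in U\; y \vDash \varphi\big)
% Equivalently, \llbracket\Box\varphi\rrbracket =
% \mathrm{Int}\,\llbracket\varphi\rrbracket, and dually \Diamond is closure;
% the McKinsey-Tarski theorem then gives soundness and completeness of S4
% with respect to the class of all topological spaces.
```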
Adam Bjorndahl is Associate Professor in the Department of Philosophy at Carnegie Mellon University.
Possibility Semantics is a generalization of Possible World Semantics, based on partial possibilities instead of complete possible worlds. In recent years, this approach has been applied to the semantics of modal and non-classical logics, natural language semantics, and semi-constructive mathematics. In this course, we will provide: (Day 1) a more accessible introduction to Possibility Semantics than is available in the technical literature; in-depth sample applications of Possibility Semantics to (Day 2) the modeling of knowledge and awareness, (Day 3) the formal semantics of epistemic modals in natural language, and (Day 4) temporal logic and the openness of the future; and (Day 5) an introduction to propositional and first-order quantification in possibility semantics. No previous familiarity with Possibility Semantics will be assumed. Over the course of the week, we will suggest a number of open problems and avenues for future research. Please note: Days 1-3 will be in person as well as on Zoom, while Days 4-5 will be on Zoom only.
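For orientation, here are two hallmark clauses of the framework, stated as they standardly appear in the possibility-semantics literature:

```latex
% Possibilities form a poset ordered by refinement \sqsubseteq.
\text{Persistence:}\quad
  X \Vdash \varphi \ \text{and}\ X' \sqsubseteq X \ \Rightarrow\ X' \Vdash \varphi
\qquad
\text{Negation:}\quad
  X \Vdash \neg\varphi \ \text{iff}\ \forall X' \sqsubseteq X\; X' \nVdash \varphi
```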
For course-related materials, click here.
Wes Holliday is Professor of Philosophy and Faculty Member of the Group in Logic and the Methodology of Science at the University of California, Berkeley.