Linguistics is full of boundary disputes. Empirical phenomena do not come neatly labeled
as “syntax”, “semantics”, etc. The components of linguistic theory should interact to
provide a complete account of the phenomena at hand, but often a syntactician will claim
“this phenomenon is really semantics” (or vice versa), without ensuring that a semantic
analysis of the phenomenon is viable.
This course, based on our forthcoming OUP survey monograph, is an attempt to develop
a complete analysis of coordination, a topic that spans syntax, formal semantics, and
discourse semantics/pragmatics. We focus particularly on patterns of unbounded
dependency formation in coordinate structures, which syntacticians claim require a
partially semantic analysis, often without reference to current semantic theories.
Daniel Altshuler is Associate Professor of Semantics in the Faculty of Linguistics, Philology, and Phonetics at the University of Oxford and Robert Truswell is Senior Lecturer in Linguistics and English Language at the University of Edinburgh.
This introductory course covers major theoretical frameworks and state-of-the-art experimental investigations into scalar implicatures. After an introductory session on structural theories of implicature (e.g., Geurts, 2010; Chierchia et al., 2012), we discuss theoretical and experimental work on the role of alternatives, focusing on adjectival Horn scales (e.g., Gotzner et al., 2018; Alexandropoulou & Gotzner, 2021). Two further sessions will cover implicatures triggered by sentences with multiple quantifiers. Starting from grammatical approaches, we move towards game-theoretic models (Franke, 2009; Benz, 2011) and recent suggestions for integrating exhaustivity operators within the Rational Speech Acts model (Bergen & Franke, 2020). In our final session, we introduce the interactive best response paradigm, a new paradigm for testing implicatures in controlled dialogue experiments (Gotzner & Benz, 2018; Benz & Gotzner, 2020).
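To give a flavour of the Rational Speech Acts model mentioned above, here is a minimal sketch of an RSA computation for the ⟨some, all⟩ scale. The worlds, truth table, and uniform prior are illustrative assumptions, not the models discussed in the course:

```python
# Minimal Rational Speech Acts (RSA) computation for the <some, all> scale.
# Worlds, truth conditions, and the uniform prior are illustrative.

WORLDS = ["none", "some-not-all", "all"]
UTTERANCES = ["none", "some", "all"]

def true_in(utterance, world):
    """Literal truth conditions: 'some' is true unless nothing holds."""
    if utterance == "none":
        return world == "none"
    if utterance == "some":
        return world in ("some-not-all", "all")
    return world == "all"  # utterance "all"

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def literal_listener(utterance):
    # L0(w | u) is proportional to prior(w) * [[u]](w), with a uniform prior.
    return normalize({w: float(true_in(utterance, w)) for w in WORLDS})

def speaker(world, alpha=1.0):
    # S1(u | w) is proportional to L0(w | u) ** alpha, over true utterances.
    return normalize({u: literal_listener(u)[world] ** alpha
                         if true_in(u, world) else 0.0
                      for u in UTTERANCES})

def pragmatic_listener(utterance):
    # L1(w | u) is proportional to prior(w) * S1(u | w).
    return normalize({w: speaker(w)[utterance] for w in WORLDS})

L1 = pragmatic_listener("some")
```

Hearing "some", the pragmatic listener assigns probability 0.75 to the some-but-not-all world, deriving the scalar implicature.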
Anton Benz is Senior Researcher in the Research Area 4 'Semantics & Pragmatics' at the Leibniz-Centre General Linguistics (ZAS) and Nicole Gotzner is Director of the SPA Lab at the University of Potsdam.
This course is an introduction to topology and an exploration of some of its applications in epistemic logic. A passing familiarity with modal logic will be helpful, but is not essential; no background in topology is assumed. We'll begin by motivating and defining standard relational structure semantics for epistemic logic, and highlighting some classic correspondences between formulas in the language and properties of the structures. Next we'll introduce the notion of a topological space using a variety of metaphors and intuitions, and define topological semantics for the basic modal language. We'll examine the relationship between topological and relational semantics, establish the foundational result that S4 is “the logic of space” (i.e., sound and complete with respect to the class of all topological spaces), and discuss richer epistemic systems in which topology can be used to capture the distinction between the known and the knowable. Roughly speaking, the spatial notion of “nearness” can be co-opted as a means of representing uncertainty. This lays the groundwork to explore some more recent innovations in this area, such as topological models for evidence and justification, information update, and applications to the dynamics of program execution.
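The interior semantics at the heart of the course can be stated compactly. In a topological model M = (X, τ, V), the box is interpreted as topological interior and the diamond as closure:

```latex
\llbracket \Box\varphi \rrbracket_M = \mathrm{int}\,\llbracket \varphi \rrbracket_M,
\qquad
\llbracket \Diamond\varphi \rrbracket_M = \mathrm{cl}\,\llbracket \varphi \rrbracket_M .
```

The S4 axioms \(\Box\varphi \to \varphi\) and \(\Box\varphi \to \Box\Box\varphi\) then mirror the Kuratowski laws \(\mathrm{int}(A) \subseteq A\) and \(\mathrm{int}(A) = \mathrm{int}(\mathrm{int}(A))\).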
Adam Bjorndahl is Associate Professor in the Department of Philosophy at Carnegie Mellon University.
Our goal in this class is to explore the connections between modality, evidentiality and future reference by bringing together often disconnected strands of research from philosophy and linguistics, focusing especially on recent work in philosophy of language, formal semantics and formal pragmatics. Modal displacement --- the ability to talk about how things could or must be --- is a fundamental property of human language, and there is a host of approaches to the semantics, pragmatics and epistemology of modal claims. However, what constitutes modality is still an open question, both empirically and conceptually. We will address it by taking a close look at two phenomena that have been argued to be of a modal nature: (1) evidentiality, a category that deals with the information source for an utterance, and (2) future reference and associated categories that deal with events that are yet to happen. We will discuss the distinction between direct and indirect evidence and how such distinctions are reflected in language, in particular, evidential restrictions on modal claims and evidential constraints on future-directed discourse. The class is structured as follows. Day 1 is a primer on mainstream theories of modality. Day 2 covers a variety of puzzles about the nature of evidence, modality and assertion. Day 3 is about evidence in language and assertions with evidentials. Day 4 is entirely devoted to the future. Day 5 covers the Acquaintance Inference, a phenomenon whereby we call something "tasty" only if we have tried it, and the conditions under which this inference goes away.
Fabrizio Cariani is Associate Professor in the Department of Philosophy at the University of Maryland, College Park and Natasha Korotkova is Postdoctoral Fellow at the Linguistics Department at the University of Konstanz.
What are the desiderata for a theory of dialogue? The course will present two desiderata, one rather classical in its emphases, the other relating to more recent developments. The first desideratum can be related to the classical version of the Turing test—model/simulate the ability of an adult agent to participate in a conversation. Even a restricted version of the test—the ability to simulate the range of possible responses to a question—is a significant challenge for all current theories of dialogue. In the first part of the course we will present a framework that makes one of the most detailed attempts at meeting Turing’s challenge: the KoS framework, which is formally underpinned by a Type Theory with Records. KoS synthesizes speech act theory, Wittgensteinian language games, formal semantics, and conversation analysis to yield a detailed theory of dialogical relevance from the micro-level (self-repairs, interjections) to the macro-level (the structure of complete conversations). This provides a theory of context that can underpin the analysis of a variety of dialogical phenomena such as non-sentential utterances, repair, and quotation. However, it also provides a new perspective on well-trodden phenomena such as quantification and compositionality. We then introduce a more challenging desideratum: dialogue across the lifespan—from interaction with infants at different developmental stages to the effects aging has on interaction. To address this desideratum, we will show how non-verbal social signals such as crying, laughter, and smiling, which are present among both non-human primates and infants and which develop into highly complex behaviours among adults, can be integrated into KoS. This requires us to integrate multimodality in the framework and to consider manual and head gestures. We will consider how to model the earliest grammars among children, where semantic complexity is achieved by exploiting both visual and interactive context.
We will conclude by discussing aging and how it requires us to confront forgetting, an aspect missing from contemporary work on dialogue.
Jonathan Ginzburg is Professor of Linguistics at the Laboratoire de Linguistique Formelle-CNRS at the Université Paris-Diderot (Paris 7) and Andy Lücking is Postdoctoral Research Fellow at the Laboratoire de Linguistique Formelle (LLF) at the Université Paris-Diderot (Paris 7).
This course provides an introduction to the construction of annotated linguistic corpora to serve the dual purposes of theoretical linguistic analysis and machine learning for NLP. This is done via a detailed exploration of the design and early construction of the Brandeis-Simmons Corpus of English VP (Verb Phrase) Ellipsis: the first syntactically annotated ellipsis corpus primarily containing transcriptions of naturally occurring spoken dialogue, as opposed to constructed text from newswire, journalistic essays, or fiction.
Lotus Goldberg is Professor of Linguistics at Brandeis University and Amber Stubbs is Associate Professor of Computer Science at Simmons University.
This course surveys recent work at the intersection of traditional epistemology, Bayesian epistemology, epistemic logic, belief revision theory, and non-monotonic reasoning. The common thread is the idea that some possibilities are more normal, or more plausible, than others, and these differences in normality/plausibility determine what we can know and rationally believe. We will begin by surveying a number of influential cases at the intersection of traditional epistemology and epistemic logic, and showing how different formal models in the literature can be subsumed within a normality-based approach. We will then turn to topics including: connections between normality/plausibility and probability; the context-dependence of knowledge and belief; general principles in epistemic and doxastic logic; and normality-based approaches to belief revision, dynamic epistemic logic, and non-monotonic reasoning. Throughout we will focus on applications of the framework to concrete test cases. No previous familiarity with epistemic logic is presupposed.
Jeremy Goodman is Associate Professor in the Department of Philosophy at USC and Bernhard Salow is Associate Professor at the Oxford Philosophy Faculty, and Tutorial Fellow at Magdalen College.
This course will develop the idea that semantics is the study of one component of a modular language system. More specifically: semanticists are in the business of reverse engineering the proprietary database of a system that bridges the gap between syntax and sentence meaning in language perception and production. This system has severely limited access to our belief system and other cognitive systems, and its inner workings and proprietary database are mostly off limits to the rest of cognition.
Day 1: Semantics and modularity
What is a modular input-output system? Why should we think that semantics is the study of a component of one? We will consider the evidence. A theme will be that the modularity of semantics best explains the ways in which semantics has been successful as a research program.
Day 2: Context sensitivity
If semantics is modular, then the part of your mind that computes meanings doesn’t have access to information about extralinguistic context. We will look at some strategies for adjusting formal-semantic theories to account for this.
Day 3: Polysemy, word meanings, and concepts
Most open-class vocabulary is polysemous. If semantics is modular, then it lacks access to the contextual information needed to choose senses for these expressions on particular occasions. How, then, should we formally model the meanings of polysemous expressions? We will consider some options. A theme will be that the relationship between word meanings and concepts is messier than normally assumed.
Day 4: Verbal working memory
If the language system is an input–output system, does that mean we only use it for communication? No! Here we consider its use for short-term information storage, tying in to the substantial cognitive-scientific literature on verbal working memory. We will also consider how this model can explain some of the ways in which language influences thought.
Day 5: Designing speech and thinking in language
We consider two puzzles for the modular view. First, we seem able to micromanage language production in a way that apparently conflicts with the modular theory. Second, we sometimes seem to use language to think. Drawing on the fruits of day 4, I will develop an explanation of how we do these things by using our language systems for sub-vocal rehearsal.
Daniel Harris is Assistant Professor of Philosophy at Hunter College, CUNY.
This practical Advanced Course aims to introduce students with computational linguistics backgrounds to incremental language processing for Spoken Dialogue Systems (SDS). Students will be shown the benefits of incrementality for improving the speed, naturalness and fluidity of conversing with machines. Concretely, we will be looking at SDSs where processing information from user speech on a word-by-word basis is crucial. The course will cover how to deal with various natural, incremental phenomena in dialogue—such as spoken disfluencies, utterance continuations and interruptions—which standard dialogue systems cannot deal with, using incremental, semantically driven natural language understanding and generation models. Each session is divided into a lecture and a practical. During the practicals, students work gradually towards building their own fully incremental SDS in a small domain, using the technical tools and API that we will provide. Our aim is that by the end of the course, students will appreciate the multi-faceted complexity of real-time language processing in dialogue.
Julian Hough is Lecturer at the School of Electronic Engineering and Computer Science at Queen Mary University of London and Arash Eshghi is Assistant Professor and member of the Interaction Lab at the Department of Computer Science, Heriot-Watt University.
This course will provide an introduction to Python, using applications to logic and semantics, especially from game-theoretic, lambda calculus, and neural perspectives. No previous introduction to programming will be assumed. The course will start with an introduction to basic constructs of procedural programming: data structures, loops, and control flow. We will apply these concepts to backward induction in game semantics by first learning the concepts of dynamic programming algorithms. Functional programming will then be introduced and applications to lambda-based interpretation will be discussed. If time permits, there will be a short introduction to object-oriented programming and the close relationship between logic and neural networks.
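As a taste of the first application, backward induction over a game tree can be written as a short recursion. The tree encoding and payoffs below are illustrative, not the course's actual materials:

```python
# Backward induction on a finite two-player zero-sum game tree. A tree is
# either a payoff (leaf) or a list of subtrees; payoffs are for player 0.

def backward_induction(node, player=0):
    """Value of `node` when player 0 maximizes and player 1 minimizes."""
    if isinstance(node, (int, float)):  # leaf: a payoff
        return node
    values = [backward_induction(child, 1 - player) for child in node]
    return max(values) if player == 0 else min(values)

# Player 0 picks a subtree, then player 1 picks a (minimizing) leaf.
game = [[3, 1], [4, 2]]
```

Here player 1 would hold player 0 to 1 in the first subtree and 2 in the second, so optimal play yields value 2; memoizing shared subtrees turns the recursion into the dynamic-programming version discussed in the course.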
Khalil Iskarous is Associate Professor of Linguistics at the University of Southern California.
The question that runs through all of modern epistemology is how to demarcate what we can know from what we cannot. This question has received intense scrutiny not only in philosophy, but also in mathematical statistics, where a number of results have substantially advanced our understanding of the inherent limits of knowledge. This course will introduce students to some of the most astounding theorems of 20th-century statistics, many of which are not widely known among philosophers. In each session of the course, we will focus on a single inference problem and prove both positive results about the questions we can settle given enough data, and negative results about the questions we have to leave unanswered. Each of these results provides important and often surprising insights into the conditions under which knowledge can be acquired. Students will receive an intuitive introduction followed by rigorous proofs, with an emphasis on depth over breadth.
Mathias Winther Madsen is research engineer for Micropsi Industries GmbH.
Modal logic is motivated by the need to formalize necessity and possibility, knowledge and belief, provability and many other properties that can be considered as operators on logical propositions (temporal, ethical, etc.). Many logics used in applications in fields covered by numerous NASSLLI courses are, in a broad sense, modal logics. This course aims to give an introduction to modal logic without assuming any background in logic or mathematics, with an emphasis on developing an understanding of its basic semantics, that of relational structures. The aim is to prevent misunderstandings which occur when students without this background attend courses in related fields to which modal logic is applied. Most of the course will be spent on careful and mathematically rigorous definitions of basic concepts, backed by numerous application-driven examples. We will start with some basic mathematical concepts: sets, relations and functions. In particular, we will focus on relations which occur in applications in related fields, like computer science and linguistics. There are various semantics for modal logics, but the course will focus on arguably the most natural one, especially from the point of view of applications: Kripke relational semantics. Even so, we will consider two levels of this semantics: that of models and of frames. Basic model constructions will be presented, as well as the concept of modal definability, a way to reason about the expressive power of modal logic. Time permitting, some proofs of standard fundamental logical results such as completeness and decidability will be sketched, and some advanced topics in modal definability will be discussed.
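The relational truth clauses at the centre of the course are the standard ones: in a Kripke model M = (W, R, V),

```latex
M, w \models \Box\varphi \iff \text{for all } v \in W,\ wRv \text{ implies } M, v \models \varphi
```
```latex
M, w \models \Diamond\varphi \iff \text{for some } v \in W,\ wRv \text{ and } M, v \models \varphi
```

Properties of the relation R (reflexivity, transitivity, and so on) then correspond to modal axioms, which is the correspondence theory the course develops.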
Tin Perkov is Chair of Mathematics and Statistics at the Faculty of Teacher Education, University of Zagreb.
This course provides an introduction to what is involved in actually implementing, in a
computational sense, a system of compositional semantics of the sort commonly assumed in
theoretical linguistics and philosophy (see e.g. Szabó 2017). The target audience is students who
have had introductory-level programming experience, as well as basic exposure to linguistic or
logical semantics in some form, or basic experience with computational semantics; it is an
introductory course that does not assume deep background knowledge in either area.
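As a hint of what such an implementation involves, composition by function application can be sketched in a few lines of Python. The toy lexicon and tree encoding are illustrative assumptions, not the course's actual system:

```python
# Toy compositional interpretation: meanings are Python values and
# functions, and semantic composition is function application.

lexicon = {
    "Ada": "ada",                      # an individual (type e)
    "sleeps": lambda x: x in {"ada"},  # a property (type <e,t>)
    "not": lambda p: not p,            # sentential negation (type <t,t>)
}

def interpret(tree):
    """Interpret a word or a binary (left, right) tree."""
    if isinstance(tree, str):
        return lexicon[tree]
    left, right = map(interpret, tree)
    # Apply whichever daughter denotes a function to the other one.
    return left(right) if callable(left) else right(left)
```

For example, `interpret(("Ada", "sleeps"))` applies the predicate to the subject and returns `True`; a real implementation must additionally handle types, binding, and intensionality, which is what the course takes up.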
Kyle Rawlins is Associate Professor and Director of Graduate Studies in the Cognitive Science Department at Johns Hopkins University.
The course will introduce two main topics covered in the instructor’s 2018 MIT Press textbook
Phonology: A formal introduction.
1. Segments, strings and rules: We first develop a simple syntax for phonological computation in
which rules are functions mapping strings of segments to strings of segments. We illustrate the logic of
phonological neutralization in terms of modus tollendo ponens and reductio ad absurdum. We provide
a semantics for the rules. Then we show how the syntax can be enhanced to express rules deleting and
inserting segments, and how that complicates the semantics. We then show how our basic reasoning
gets obscured when we treat a phonological system as a composed function of several rules.
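The rules-as-functions idea can be sketched directly; the segment inventory and the final-devoicing rule below are illustrative, not taken from the textbook:

```python
# A phonological rule as a function from strings of segments to strings of
# segments: word-final obstruent devoicing, over an illustrative inventory.

DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def final_devoicing(word):
    """Devoice a word-final obstruent, if there is one."""
    if word and word[-1] in DEVOICE:
        return word[:-1] + DEVOICE[word[-1]]
    return word

# A phonological system is then a composed function of several such rules,
# e.g. lambda w: rule2(rule1(w)), which is where the basic reasoning about
# neutralization becomes harder to read off directly.
```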
2. Segments as sets of features and unification logic: We next explore the implications of viewing
segments as sets of features and defining natural classes as sets of sets of features. Using this basic
idea, we illustrate the empirical motivation for standard phonological notions like underspecification,
feature-filling and feature-changing rules. We show how a simple version of a unification operator
can solve problems that arise in other approaches.
We will use self-grading HW problems built on toy languages to help students learn the material.
Charles Reiss is Professor of Linguistics and Founding Member of the Concordia Centre for Cognitive Science.
Because of its tight coupling of syntax and semantics, Combinatory Categorial Grammar (CCG) has become widely adopted in computational linguistics and natural language processing (NLP), particularly for applications in which semantics plays an important role, such as question answering, textual entailment, inference, and induction of semantic parsers from data consisting of paired sentences and logical forms. It has also been applied to modeling language acquisition from child-directed utterances. The course seeks to reexamine the significance of CCG as a linguistic theory of grammar. Despite adhering to strictly orthodox linguistic principles, CCG is not widely understood within mainstream linguistics. The reason may be that CCG is a revolutionary theory, seeming to require modification of long-held beliefs concerning the reality of surface syntactic structure as a representational level, and even the nature of grammatical constituency itself, in which all that was linguistically solid melts into air, and logical form is the only non-phonological representational level (Steedman, 2000, 2019). It is widely acknowledged that there is currently something of a crisis in theoretical linguistics. The ancillary disciplines of computational linguistics and psycholinguistics, and indeed some influential currents within mainstream linguistics itself, seem to have entirely given up on the idea that formal linguistic theory has anything to tell us about the use of language. It therefore seems timely to look more closely at CCG across a number of languages in comparison with other approaches that have recently been developed, including those within the Minimalist Program, and to propose a synthesis that preserves the linguistic insights of all in a form that can reconnect with a broad range of disciplines concerned with actual performance.
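For readers new to CCG, the two core forward combinators can be sketched over flat category strings; this is a deliberate simplification, since real CCG categories need bracketing such as (S\NP)/NP:

```python
# CCG forward combinators over flat category strings (illustrative only).

def forward_apply(left, right):
    # Forward application:  X/Y  Y  =>  X
    if "/" in left and left.rsplit("/", 1)[1] == right:
        return left.rsplit("/", 1)[0]
    return None

def forward_compose(left, right):
    # Forward composition:  X/Y  Y/Z  =>  X/Z
    if "/" in left and "/" in right:
        x, y = left.rsplit("/", 1)
        y2, z = right.rsplit("/", 1)
        if y == y2:
            return x + "/" + z
    return None
```

Composition is what lets CCG build the non-standard constituents that underlie its account of unbounded dependencies and coordination.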
Mark Steedman is Professor of Cognitive Science in the School of Informatics at the University of Edinburgh.
Natural languages are riddled with context-sensitivity. One and the same string of words can express many different meanings on different occasions of use, and yet we understand one another effortlessly, on the fly. How do we do so? What fixes the meaning of context-sensitive expressions, and how are we able to recover the meaning so effortlessly? Everyone agrees that what we can communicate is to some extent constrained by grammar, but most authors believe that the role of grammar is limited, and that resolution of context-sensitivity largely relies on extra-linguistic cues: speakers’ intentions and/or other non-linguistic features of the utterance situation. Interpretation thus depends on general reasoning about speaker intentions and other non-linguistic cues. The idea that context-sensitivity resolution to a significant extent depends on extra-linguistic information is widely assumed in theorizing about the nature of content, context, and context-content interaction, both in linguistics and in philosophy of language. It is also relied upon in the literature on contextualism about various philosophically laden terms (e.g., "know", "good"). In this course, we shall critically examine these assumptions and their significance for formal models of context, and the dynamics of context-change. We shall explore their role in various arguments for some radical and surprising conclusions about the nature of content (e.g., the arguments for the non-propositionality of content expressed by, e.g., modal discourse), the dynamics of context, and the logical properties of natural language discourse (e.g., the apparent violations of various classical patterns of inference in the presence of modal vocabulary).
Una Stojnic is Assistant Professor of Philosophy in the Department of Philosophy at Princeton University.
After giving an introduction to generalized quantifier theory, this course surveys a number of
approaches to learning such quantifiers. We will look at attempts from formal language theory,
from developmental psychology, and from contemporary machine learning. Each approach will
be assessed through the lens of explaining semantic universals: why do natural languages only
express certain types of generalized quantifiers? Students will be exposed to the application of
mathematical and computational methods to natural language semantics in order to explain the
fundamental properties of meanings cross-linguistically.
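The basic objects of generalized quantifier theory are relations between sets, which can be stated directly; the determiner denotations below are the standard textbook ones:

```python
# Determiners as relations between a restrictor set A and a scope set B.

def every(A, B):
    return A <= B                   # all As are Bs

def some(A, B):
    return bool(A & B)              # some A is a B

def most(A, B):
    return len(A & B) > len(A - B)  # more As are Bs than are not

students = {"ann", "bo", "cy"}
sleepers = {"ann", "bo"}
```

Here `most(students, sleepers)` holds while `every(students, sleepers)` fails, and candidate universals such as monotonicity or conservativity can be checked by quantifying over such finite models.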
Jakub Szymanik is Associate professor in the Institute for Logic, Language and Computation at the University of Amsterdam and Shane Steinert-Threlkeld is Assistant Professor in Linguistics at the University of Washington.
This course is an accelerated introduction to applications of higher-order logic in linguistics and philosophy. We will focus in particular on problems connected to propositional attitudes such as belief. Roughly the first third of the course will introduce some standard higher-order logics and their model theory, presupposing nothing beyond a basic familiarity with first-order logic. We will then consider ways in which puzzles about propositional attitudes, such as Frege’s puzzle, Mates’s puzzle, and the problem of logical omniscience, might put pressure on standard logical principles governing identity, quantification, and the application of complex predicates.
Cian Dorr is Professor of Philosophy at New York University and Harvey Lederman is Assistant Professor of Philosophy at Princeton University.
Though commonly reduced to set membership in a straightforward way, predication can be analyzed in terms of labelled transitions between states, modelling cognitive processes constitutive of linguistic meaning. Requiring the set of transitions to be finite leads to a finite-state approach to predication, which we can refine by letting the set vary over larger and larger finite sets. Variations in these finite sets support open-endedness in individual-level, stage-level and kind-level predication alike. One form of open-endedness is variable adicity, the raison d'être of events in Davidson 1967. A second form of open-endedness arises from the choice of temporal propositions, changes in which determine a notion of time. We analyze open-endedness uniformly through model-theoretic notions of satisfaction formulated within institutions in the sense of Goguen and Burstall (1992). Models take the form of strings, as in the Büchi–Elgot–Trakhtenbrot theorem equating Monadic Second-Order Logic with regular languages, or of finite frames, understood as finite automata.
Tim Fernando is Lecturer at the Computer Science Department at Trinity College, Dublin.
This course will focus on propositional quantifiers in the context of modal logics, where they are especially useful. For example, in the context of a doxastic interpretation of modal logic, they allow us to make generalizations about what is and is not believed by an agent. With this, we can state that everything the agent believes is the case, that the agent believes that they believe something false, or that everything believed by one agent is believed by a second agent. Standard possible world models for modal logics can be extended straightforwardly to propositional quantifiers, by letting these quantifiers range over arbitrary sets of worlds. However, in many cases, this straightforward model theory leads to logics which are not recursively axiomatizable. In addition to these simple models, we will therefore consider a range of alternative models, including models based on complete Boolean algebras, and possible worlds models in which propositional quantifiers range over a restricted domain of sets of worlds. The aim of the course is to show the usefulness of propositional quantifiers in modal logics using examples, to provide a systematic overview of the work that has been done in this field, and to highlight some of the many interesting questions which remain open.
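The straightforward extension described above amounts to the following truth clause, where the quantifier ranges over a domain D ⊆ P(W) of admissible propositions (D = P(W) in the simplest models):

```latex
M, w \models \forall p\,\varphi
\iff
M[p \mapsto X], w \models \varphi \ \text{for every } X \in D,
```

where M[p ↦ X] is M with the valuation of p reset to X. Restricting D is one of the routes to recursively axiomatizable logics that the course surveys.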
Peter Fritz is Professor of Philosophy at the Dianoia Institute of Philosophy, at the Australian Catholic University.
Finite-state machines are widely used in text and speech processing, particularly as probabilistic models of string-to-string transductions. One major advantage of these finite-state models is that, unlike neural sequence models, finite-state machines can be combined using set-theoretic operations such as union and intersection, optimized using determinization and minimization, cascaded via composition, and searched using shortest-path algorithms, all in polynomial time. In this tutorial, I provide an introduction to finite-state text processing methods and software. I first provide a formal introduction to finite acceptors, which model sets of strings; finite transducers, representing relations between sets of strings; and weighted finite transducers, which represent weighted (e.g., probabilistic) relations between sets of strings. I then describe finite-state algorithms for constructing, optimizing, and searching transducers. I then introduce Pynini, a Python-based finite-state grammar library based on the OpenFst toolkit, and compare and contrast its features with several other existing finite-state tools. Finally, I walk through several Pynini worked examples for spelling correction, pronunciation modeling (i.e., "g2p"), morphological analysis, and fuzzy string matching.
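To illustrate why set-theoretic operations on finite-state machines stay polynomial, here is a minimal product construction intersecting two deterministic acceptors in plain Python (a sketch of the idea, not Pynini's API):

```python
# Deterministic finite acceptors as (start, finals, delta) triples, with
# delta[(state, symbol)] giving the next state. The machines are illustrative.

def accepts(dfa, string):
    start, finals, delta = dfa
    state = start
    for symbol in string:
        if (state, symbol) not in delta:
            return False
        state = delta[(state, symbol)]
    return state in finals

def intersect(d1, d2):
    # Product construction: run both machines in lockstep (polynomial time).
    (s1, f1, t1), (s2, f2, t2) = d1, d2
    delta = {}
    for (p, a), p_next in t1.items():
        for (q, b), q_next in t2.items():
            if a == b:
                delta[((p, q), a)] = (p_next, q_next)
    finals = {(p, q) for p in f1 for q in f2}
    return ((s1, s2), finals, delta)

# "Contains at least one a" and "has even length", over the alphabet {a, b}.
contains_a = (0, {1}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 1})
even_len = (0, {0}, {(0, "a"): 1, (0, "b"): 1, (1, "a"): 0, (1, "b"): 0})
both = intersect(contains_a, even_len)
```

`both` accepts exactly the even-length strings that contain an "a"; no analogous closed-form combination exists for neural sequence models, which is the contrast the course draws.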
Kyle Gorman is Assistant Professor of Linguistics at the Graduate Center, City University of New York.
In the past 5-10 years a geometric form of semantic representation, word vectors, has
taken computational linguistics by storm. Mainstream linguistic semantics, Montague
Grammar and its lineal descendants, has remained largely unreceptive to representing
word and sentence meaning in finite-dimensional Euclidean space -- the five-volume Wiley
Blackwell Companion to Semantics (2021) does not even mention the idea.
At the same time, major database collection efforts such as the Google and Microsoft
knowledge graphs have amassed hundreds of billions of facts about the world. These
efforts, relying on simple algebraic meaning representation methods using labeled graphs
or relational triples, have also remained largely under the radar of logic-based formal
semantics even though semantic search (information retrieval), information extraction,
and the increasingly effective Semantic Web are all powered by a combination of the
geometric and algebraic methods.
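A toy contrast between the two kinds of representation; the vectors and triples below are made up for illustration, not drawn from any real resource:

```python
# Toy versions of the two representations: word vectors (geometric) and
# labelled triples (algebraic). All entries are invented for illustration.
import math

vec = {
    "king": [0.9, 0.8, 0.1],
    "queen": [0.9, 0.9, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Geometric: similarity in meaning is proximity in the vector space.
royal = cosine(vec["king"], vec["queen"])
fruit = cosine(vec["king"], vec["apple"])

# Algebraic: facts as (subject, relation, object) triples in a labelled graph.
triples = {("queen", "instance_of", "monarch"),
           ("apple", "instance_of", "fruit")}
```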
This one-day short course will investigate the similarities and differences between the
formula-based mainstream, the geometric, and the algebraic approaches. The focus will be
on explaining the vector-based and graph-based approaches to people already familiar
with logical semantics. We will describe some of the novel insights these approaches
bring to such traditional concerns of linguistic semantics as meaning postulates,
generics, temporal and spatial models, indexicals, lexical categorization, the meaning
of bound morphemes, deep cases, negation, and implicature.
Andras Kornai is Professor at the Budapest Institute of Technology, and Senior Scientific Advisor at the Computer and Automation Research Institute of the Hungarian Academy of Sciences.
Metaphysics was in the past considered mainly a pursuit of philosophers, asking questions
about being in the most general terms. While some philosophers appealed to natural language,
others rejected such an appeal, arguing that the ontology reflected in language diverges
significantly from what there really is. What is certain is that with the development of natural
language semantics (and syntax), the metaphysics reflected in natural language has become an
important object of study in itself, as the subject matter of natural language ontology or, more
generally, natural language metaphysics. This course gives an overview of the ways natural
language reflects ontological notions and structures, of cases of discrepancies between the
ontology implicit in natural language and the reflective ontology of philosophers or
non-philosophers, and of how natural language metaphysics can be related to other projects in
metaphysics. It also addresses the Chomskyan
skepticism as regards reference (and ontology) and the importance of recent developments in
(generative) syntax for natural language metaphysics. The course will also discuss some
developments in linguistic semantics and syntax that provide further generalizations to be
taken into account in natural language metaphysics, in particular lexical theory of the sort
developed by Pustejovsky and Asher, and by syntacticians such as Hale and Keyser.
Friederike Moltmann is Research Director (DR1) at the Centre Nationale de la Recherche Scientifique (CNRS).
Tense morphemes are ubiquitous among the world's languages. Yet there are also many languages, from distinct language families, that do not have to mark tense overtly: they either do not have tense morphemes or the presumed tense morphemes are optional. The question arises: is tense universal? The answer, within formal semantics, has so far been "yes". We will challenge this view. This course will present an introduction to the semantics of tense, with focus on universals and constrained variation among languages with overt tense morphemes. We will then discuss how semantic theories of tense have been extended to languages without overt tense morphemes. Approaches differ along two dimensions: how they accomplish reference to time intervals (e.g., via a syntactically represented covert pronoun or a purely semantic rule), and how they restrict the location of those time intervals (e.g., via covert lexical features or pragmatic constraints). Finally, we will discuss a different type of account altogether that does not rely on tense to derive temporal reference. Instead, evaluation time shift, a mechanism independently attested in the narrative present in languages with tense, is more widely used for encoding temporal meaning in the absence of tense. We will illustrate this account for Paraguayan Guarani and Cantonese (based on joint work with Maria Luisa Zubizarreta and Tommy Tsz-Ming Lee), identifying empirical advantages over accounts that employ tense. The broader consequence is an enriched typology of temporal systems. Most notably, tense is revealed not to be a linguistic universal.
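For orientation, one standard implementation of the covert-pronoun approach treats tense as a referential index carrying a presupposition (the notation is illustrative and not necessarily that of the analyses discussed in the course):

```latex
\llbracket \mathrm{PAST}_i \rrbracket^{g,c} = g(i),
\quad \text{defined only if } g(i) < t_c ,
```

where g is an assignment and t_c the utterance time of context c: the tense picks out a contextually salient past interval rather than quantifying over times.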
Roumyana Pancheva is Professor of Linguistics & Slavic Languages & Literatures at the University of Southern California.
This mini-course will focus on conversational topic shift, both gradual and sudden/disruptive.
It will introduce novel computational notions of coherence and relevance, and will examine
topic shift in relation to global discourse structure.
In everyday conversation, talk flows from topic to topic as speakers discuss one thing and
then, somehow, something entirely different. Most commonly, topic shift occurs gradually,
almost imperceptibly. However, shift may also be sudden, as when the questions under discussion
in a next utterance share nothing with those of preceding talk (Sacks, 1972 in Jefferson, 1984). For
a “next” utterance to be interpreted as “appropriate” in both “cooperative” and adversarial
conversations, that utterance must be seen as “relevant” to the conversational context and, it
is often argued, as “coherent” with preceding talk as well. In this mini-course we will discuss
how, while “coherence” as a condition on the choice of an appropriate utterance is not always
respected, relevance always is.
To understand how the complementary phenomena of relevance and coherence contribute to
orderly conversational interaction, a method will be introduced to compute “coherence”
between utterances whether or not they follow sequentially in the surface structure of
the unfolding conversation (Polanyi et al., 2004). Taken together, these computational methods
will provide a basis for understanding gradual topic shift.
Cases in which “coherence” is disrupted but a “next” utterance which fails to cohere is still
seen as appropriate are more complex, and require the introduction of an entirely new
approach to relevance that departs from traditional theories (Sperber & Wilson, 1995).
Conversational Relevance Evaluation (CRE), based on the notion “Closer to Me”, explains why
some next utterances, even those which may seem to have come “out of the blue”, though
surprising or unexpected, appear entirely appropriate in a given context as defined by the
time, place, and individuals involved, while other apparently similar newly introduced topics
may lead to confused incomprehension. Early efforts to integrate coherence calculation with
Conversational Relevance Evaluation will be sketched.
Livia Polanyi is Consulting Professor of Linguistics at Stanford University.
The course will provide a systematic introduction to current research on the dynamics of belief. Specifically, it will comprehensively survey the philosophical and mathematical foundations of the most influential extant theories of belief revision and belief update from computer science and philosophy. After introducing these theories, it will then focus on the relationships between (i) the dynamics of belief and the norms of suppositional and conditional reasoning, and (ii) influential quantitative and qualitative models of uncertainty from the perspective of the ‘Lockean theory of belief’. Students can expect to gain a thorough overview of the formal and conceptual foundations of the theory of belief dynamics, and its place in current philosophy. The only prerequisite for participation is knowledge of basic propositional logic.
Benjamin Eva is Assistant Professor of Philosophy at Duke University, Branden Fitelson is Distinguished Professor of Philosophy at Northeastern University and Ted Shear is Lecturer in Philosophy at the University of Colorado-Boulder.
The workshop on Subjectivity and Semantic Interpretation will comprise a series of talks presenting perspectives on whether and how subjectivity—sensitivity to the perspectives, opinions, or tastes of particular individuals or groups—is involved in semantic interpretation, addressing core questions such as: Where does subjectivity sit with respect to the division between semantics and pragmatics? What sorts of linguistic forms are sensitive to subjectivity, and what is it about them that makes them subjective? The presentations will create dialogue between formal theoretical, psycholinguistic, and computationally-oriented perspectives on the topic, incorporating traditional, experimental, and corpus data sources. The workshop is organized by Elsi Kaiser and Deniz Rudin; the featured speakers are:
- Pranav Anand
- Daphna Heller
- Chris Kennedy
- Natasha Korotkova
- Kyle Mahowald
- Greg Scontras
- Isidora Stojanovic
- Malte Willer
Elsi Kaiser is Associate Professor in the Department of Linguistics at the University of Southern California and Deniz Rudin is Assistant Professor in the Department of Linguistics at the University of Southern California.