Category theory originated as a very abstract, high-level theory in mathematics (specifically in the field of algebraic topology). It studies the relations between entire mathematical universes. Its objects are not numbers, geometric figures or differential equations, but whole high-level structures. For this reason, it is sometimes called "the mathematics of mathematics".
It was later discovered that it can be applied to theoretical computer science as a model for data types and computable functions. It is "the algebra of functions" or, more concretely, of computer programs.
This course is a basic introduction to the fundamental concepts of Category Theory and its applications to computer science and mathematics. The focus is on concrete structures: we study different categories with a wide range of properties: sets, graphs, relations, domains, topological spaces, groups, ... Whenever we introduce a categorical notion, we illustrate it with these examples, examining how the general idea instantiates to definite objects. This helps develop intuition and connects abstract concepts to concrete applications.
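To give the flavour of "the algebra of functions", here is a minimal Haskell sketch (illustrative only, with assumed names; the course itself is not tied to any programming language): functions form a category, with identity and composition obeying the category laws.

class Category hom where
  idC  :: hom a a
  (.>) :: hom a b -> hom b c -> hom a c   -- composition, written left to right

instance Category (->) where
  idC    = \x -> x
  f .> g = \x -> g (f x)

-- Laws (not checked by the compiler):
--   idC .> f = f,   f .> idC = f,   (f .> g) .> h = f .> (g .> h)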
Topics of the lectures:
References: Most of the course is based on Steve Awodey, "Category Theory". Other important references are Saunders Mac Lane, "Categories for the Working Mathematician", and Michael Barr and Charles Wells, "Category Theory for Computer Science".
In this course we introduce Type Theory (sometimes called "dependent type theory") as an informal language for mathematical constructions in Computer Science and other disciplines. By Type Theory we mean the constructive foundation of Mathematics whose development was started by Per Martin-Löf in the 1970s, based on the Curry-Howard equivalence of propositions and types. Because of this proximity to Computer Science, Type Theory has been the foundation of interactive theorem provers and programming languages such as NuPRL, Coq, Agda and Idris. While the calculi on which these systems are based are an important research topic, in this course we want to emphasise the "naive" use of Type Theory using just pen and paper. Indeed, this is similar to the naive use of set theory, which is usually applied informally without explicitly relating each construction to the axioms of set theory (as in Halmos' book "Naive Set Theory").
In this course we focus on the intuitive foundations of Type Theory and on basic constructions such as universes, (dependent) function types, (dependent) products, inductive types and equality, and their use in informal reasoning. If time permits we will also cover basic constructions from Homotopy Type Theory, such as the univalence axiom (isomorphism is equality) and some Higher Inductive Types (a generalisation of quotient types). A good reference for our course is chapter 1 of the book on Homotopy Type Theory.
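To give a concrete flavour of these constructions (the course itself uses informal pen-and-paper Type Theory; the Lean snippet and names below are illustrative assumptions, not material from the lectures), an inductive family and a dependent function type might look like this:

inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- 'replicate' has a dependent function type: its result type mentions the argument n.
def replicate (A : Type) (a : A) : (n : Nat) → Vec A n
  | 0     => Vec.nil
  | n + 1 => Vec.cons a (replicate A a n)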
In this course we will focus on the denotational semantics of the simply typed lambda calculus. We aim to show the following results:
Along the way we will discuss the substitution lemma, term models, logical relations, and basic domain theory.
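For orientation, here is the standard set-theoretic interpretation of the simply typed lambda calculus (a textbook presentation, included only as a reminder): each type denotes a set, and each term denotes an element depending on an environment \(\rho\):
\[
  \llbracket \sigma \to \tau \rrbracket = \llbracket \tau \rrbracket^{\llbracket \sigma \rrbracket},
  \qquad
  \llbracket x \rrbracket\,\rho = \rho(x),
  \qquad
  \llbracket \lambda x{:}\sigma.\,M \rrbracket\,\rho = d \mapsto \llbracket M \rrbracket\,\rho[x \mapsto d],
  \qquad
  \llbracket M\,N \rrbracket\,\rho = (\llbracket M \rrbracket\,\rho)\,(\llbracket N \rrbracket\,\rho).
\]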
The first part is based on Chapter 2 of the book "Semantics of Programming Languages" by Carl A. Gunter, MIT Press 1992. The other two parts are based on "Domain-Theoretic Foundations of Functional Programming" by Thomas Streicher, World Scientific 2006. Both books contain a lot more material than I can present in this course. The same is true for "Foundations for Programming Languages" by John C. Mitchell, MIT Press 1996. What these three works and my course have in common is that the simply typed lambda calculus is taken as the starting point, whereas other texts on denotational semantics tend to begin with an (imperative) while-language.
A detailed outline of the course together with a rationale for the selection of topics has been published in the ACM SIGLOG News, vol. 1, nr. 2, October 2014, pages 25-37.
This course introduces a recently developed denotational framework for reasoning about the behavior of shared-memory concurrent programs assuming a weak memory model. The framework is designed to support compositional reasoning about partial and total correctness, safety and liveness. We provide some historical context, including a brief survey of traditional approaches and the problems that arise when dealing with weak memory. We also discuss alternative approaches, and potential avenues for further exploration and development. The relevant mathematical concepts and foundations will be introduced as we go.
Traditional semantics for shared-memory concurrency assumes sequential consistency, but modern implementations and machine architectures offer more relaxed or ``weak'' memory models. Traditional semantic models are usually trace-based and are incapable of representing weak memory behavior such as stale reads and the absence of a total store ordering. Instead our semantic design embodies ``true'' concurrency: a program denotes a set of pomsets (partially ordered multi-sets) of actions. Instead of global states, we take a ``local'' view of state and work with footprints, so we can detect potential race conditions and keep track of ownership and resources. We define a notion of pomset execution that embodies a weak memory model with the following characteristics: (i) Writes to the same variable appear to happen in the same order, to all threads; (ii) The reads of each thread always see the most recently written value; (iii) The atomic operations of each thread happen in program order; (iv) Non-atomic code can be re-ordered or optimized, without affecting execution results in race-free contexts. These properties are similar to those of the C11 release/acquire memory model underpinning recent logics such as GPS and RSL, so our semantics should be suitable for establishing soundness of, and relationships between, such logics.
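As an illustration of the kind of object a program denotes, a pomset of actions could be represented roughly as follows (a minimal Haskell sketch; the names, the Action type, and the representation are assumptions for illustration, not the course's actual definitions):

import qualified Data.Map as Map
import qualified Data.Set as Set

data Action = Read String Int | Write String Int   -- e.g. Write "x" 1, Read "y" 0
  deriving (Eq, Ord, Show)

data Pomset e = Pomset
  { events :: Set.Set e              -- event names
  , label  :: Map.Map e Action       -- each event is labelled with an action
  , order  :: Set.Set (e, e)         -- the partial order between events
  }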
Our semantic framework offers an alternative to ``execution graphs'', commonly used in operational or axiomatic formalizations of weak memory program behavior. An execution graph represents the behavior of an entire program, assumed to run without interference, an approach that is inherently non-compositional and requires the construction of numerous relations (happens-before, reads-from, etc.) constrained to satisfy a battery of axioms. In contrast, our framework is denotational, so it naturally supports compositional (syntax-directed) reasoning, and we can derive these auxiliary relations automatically from the internal structure of pomset executions.
Game semantics is a trace-like denotational semantics for programming languages that indicates how a program interacts with its context. In this course we will take a system-level perspective on game semantics, first considering how such models arise naturally out of the operational semantics of open terms, then giving a combinatorial characterisation of these interactions. We will see how the combinatorial characterisations can be made precise enough to give a sound and complete characterisation of contextual equivalence (“fully abstract models”). Finally, we will briefly consider applications of the game-semantic approach to program verification, compilation and security.
The purpose of this course is to introduce the theory of coalgebra and its application to the modelling of dynamic and reactive systems. Ideally, the student will have seen the basic concepts of category theory already, but I will try to teach the course in such a way that it can also be understood as an introduction to category theory by way of examples. The unifying theme of these lectures is coalgebras as generalised automata (or transition systems), and we will build on the assumption that students are familiar with the basic concepts of finite automata and regular and context-free languages. Accordingly, we start by reformulating these basic concepts in the language of category theory. Once the categorical generalisation stands, we can start to vary the category-theoretic 'modelling parameters', such as the 'base category' and the 'functor', and discuss, from a uniform perspective, a variety of models such as weighted automata, nominal automata, or process algebras (such as the pi-calculus).
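For instance, a deterministic automaton is a coalgebra. A minimal Haskell sketch of this reformulation (the names and the particular functor are illustrative assumptions, not the course's own development):

newtype Coalg f s = Coalg { step :: s -> f s }

-- The behaviour functor of deterministic automata over alphabet a:
-- each state says whether it is accepting and where each letter leads.
data AutF a x = AutF { accepting :: Bool, next :: a -> x }

type DFA a s = Coalg (AutF a) s

-- The language accepted from a state, obtained by repeatedly unfolding the coalgebra.
accepts :: DFA a s -> s -> [a] -> Bool
accepts (Coalg c) s []       = accepting (c s)
accepts (Coalg c) s (x : xs) = accepts (Coalg c) (next (c s) x) xs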
In this lecture course I will present the applied Pi-calculus, a process calculus specifically designed for the verification of security protocols. I will give examples of security protocols and security properties and show how both the protocols and the properties can be modelled in the applied Pi-calculus. I will also briefly present ProVerif, a tool for automatically verifying security properties specified in the applied Pi-calculus.
This course provides an introduction to linear logic with an emphasis on its applications to computer science. We will study the proof theory of intuitionistic linear logic, and see how it can serve as a type theoretic language for safely extending functional programming with state, as well as how it can be used as a type discipline for concurrent programming.
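As a small taste of such a type discipline (a minimal sketch using GHC's LinearTypes extension purely for illustration; the course's own calculus and notation may differ):

{-# LANGUAGE LinearTypes #-}

-- 'swap' consumes its argument exactly once, so it can be given a linear type.
swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)

-- By contrast,  dup :: a %1 -> (a, a)  with  dup x = (x, x)  would be
-- rejected: a linearly typed argument may not be used twice.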
The course will touch upon recent developments in the field of rigorous design and analysis of distributed applications. The focus will be on so-called communication-centric applications. Such applications are nowadays paramount; it is indeed hard to imagine stand-alone closed applications (or programming languages) that do not offer mechanisms for smooth integration. Such mechanisms can be suitably abstracted in terms of communications. The theoretical grounds of the course are choreographies, recently advocated as a suitable framework for the design and analysis of communication-centric applications. Quoting W3C:
"[A] Choreography specification [...] describes, from a global viewpoint [...], observable behaviour of all the parties involved. Each party can then use the global definition to build and test [...] solutions that conform to it. The global specification is in turn realised by combination of the resulting local systems."
The course will first lay out the properties of interest that communication-centric software should guarantee. We will then show how formalisms featuring choreographies can be used to model robust communication-centric applications. In particular, we will review the so-called session types, show how they formalise the W3C 'top-down' approach (from global to local) quoted above, and discuss some of their limitations.
We will then move to a more flexible model based on communicating finite state machines (CFSMs for short), a well-known automata model introduced as a specification language for distributed protocols featuring message passing. CFSMs have recently been shown to be at least as expressive as (in fact, a generalisation of) session types. We will show how CFSMs can reverse the direction of the W3C approach (from local to global) when they are combined with global graphs (GGs for short), a concrete visual design language that explicitly represents distributed interactions from the global viewpoint.
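To fix a picture of the objects involved, one could represent a CFSM roughly as follows (a minimal Haskell sketch; all names and the representation are assumptions for illustration, not the formal definitions used in the course):

data Dir = Send | Recv deriving (Eq, Show)

-- A CFSM: a set of control states with transitions that send a message to,
-- or receive a message from, a named partner machine.
data CFSM state part msg = CFSM
  { start :: state
  , trans :: [(state, (part, Dir, msg), state)]   -- e.g. (q0, ("Buyer", Recv, "order"), q1)
  }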
This course can serve as both an introduction to some classic compiler theory and to abstract machines (in the style of the SECD machine, but simpler). The two main parser construction techniques will be covered: LL and LR. These techniques underlie modern parser generator tools such as ANTLR, yacc/bison, or Menhir. For both LL and LR, we construct a non-deterministic stack machine from a given grammar and then refine it (where possible) into a deterministic machine that can be implemented efficiently.
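As a foretaste of the construction (a minimal Haskell sketch; the grammar and all names are assumptions for illustration), here is a nondeterministic stack machine recognising the grammar S -> 'a' S 'b' | empty, before any refinement into a deterministic LL or LR machine:

data Sym = T Char | S            -- stack symbols: a terminal, or the nonterminal S

recognise :: String -> Bool
recognise = go [S]
  where
    -- go stack input: try every applicable move; nondeterminism is expressed by (||)
    go []           []                = True        -- empty stack, empty input: accept
    go (T c : rest) (x : xs) | c == x = go rest xs  -- pop a terminal matching the next input symbol
    go (S : rest)   xs                = go rest xs                           -- expand S -> empty
                                     || go (T 'a' : S : T 'b' : rest) xs     -- expand S -> a S b
    go _            _                 = False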