Computer Science Seminars and Short Courses
Semester 2, 2012/13: Seminar programme
On the Correctness of the SIMT Execution Model of GPUs
Alexander Knapp
(University of Augsburg) GPUs are becoming a primary resource of
computing power. They use a single instruction, multiple
threads (SIMT) execution model that executes batches of
threads in lockstep. If the control flow of threads within the
same batch diverges, the different execution paths are
scheduled sequentially; once the control flows reconverge, all
threads are executed in lockstep again. Several thread
batching mechanisms have been proposed, albeit without
establishing their semantic validity or their scheduling
properties. To increase the level of confidence in the
correctness of GPU-accelerated programs, we formalize the SIMT
execution model for a stack-based reconvergence mechanism in
an operational semantics and prove its correctness by
constructing a simulation between the SIMT semantics and a
standard interleaved multi-thread semantics. We briefly
discuss an implementation of the semantics in the K framework
and a formalization of the correctness proof in the theorem
prover KIV. We also demonstrate that the SIMT execution model
produces unfair schedules in some cases.
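The divergence-and-reconvergence behaviour described above can be sketched in a few lines of Python (a toy illustration of the idea only; it is neither the paper's formal semantics nor a real GPU scheduler):

```python
# Toy model of SIMT execution: a "warp" of threads computes abs(x) in
# lockstep. On a branch, the two paths are serialised under an active
# mask; afterwards all threads reconverge and run in lockstep again.

def simt_abs(values):
    out = list(values)
    mask = [v < 0 for v in values]   # threads taking the 'then' branch
    # then-path: only threads with an active mask bit execute
    for i in range(len(out)):
        if mask[i]:
            out[i] = -out[i]
    # else-path would run under the complementary mask (a no-op here)
    # reconvergence point: every thread is active again
    return out

print(simt_abs([3, -1, -4, 2]))   # [3, 1, 4, 2]
```

Note that the two paths run one after the other, not concurrently; the correctness question of the talk is exactly whether this serialised execution simulates the standard interleaved semantics.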
Coinduction, Equilibrium and Rationality of Escalation
Pierre Lescanne
(École Normale Supérieure de Lyon) Escalation is the behavior of players who play
forever in the same game. Such a situation can happen in
automatic systems for auction and negotiation as developed in
electronic commerce and, tightly associated with infinite
games, it is a typical field for application of coinduction
which is the tool conceived for reasoning in infinite
mathematical structures. In particular, we use coinduction to
study formally the game called dollar auction, which is
considered the paradigm of escalation. Contrary to what has
been accepted since 1971, we show that, provided one assumes that
the other agent will always stop, bidding is rational, because
it results in a subgame perfect equilibrium. We show that
this is not the only rational strategy profile (the only
subgame perfect equilibrium). Indeed if an agent stops and
will stop at every step, whereas the other agent keeps
bidding, we claim that he is rational as well because this
corresponds to another subgame perfect equilibrium. In the
infinite dollar auction game, despite the dogma, the
behavior in which both agents stop at each step is not a Nash
equilibrium, hence is not a subgame perfect equilibrium, hence
is not rational. Happily, the notion of rationality based on
coinduction that one obtains fits with common sense and experience.
Finally the possibility of a rational escalation in an
arbitrary game can be expressed as a predicate on games and
the rationality of escalation in the dollar auction game can
be proved as a theorem. We even check the correctness of the
proof in the proof assistant Coq. In this talk we will recall
the principles of infinite extensive games, presented in the
framework of coinduction, then the concept of equilibrium
(Nash equilibrium, subgame perfect equilibrium). We will show
how one can prove that the two strategy profiles presented
above are equilibria and how this leads to a "rational"
escalation in the dollar auction.
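As a concrete, heavily simplified companion to the argument, one can truncate the game and compute payoffs directly (the prize of 100 and bid increment of 10 are illustrative numbers of my own, not from the talk):

```python
# Toy truncated dollar auction: players alternate; when one stops, the
# last bidder (if any) wins the prize, and every bid made is lost.
V, BID = 100, 10

def play(strats, rounds=20):
    spent = [0, 0]
    last_bidder = None
    for r in range(rounds):
        p = r % 2
        if strats[p](r) == 'stop':
            payoff = [-spent[0], -spent[1]]
            if last_bidder is not None:
                payoff[last_bidder] += V
            return payoff
        spent[p] += BID
        last_bidder = p
    return [-spent[0], -spent[1]]   # truncated escalation: both lose

always_bid = lambda r: 'bid'
always_stop = lambda r: 'stop'

print(play([always_bid, always_stop]))   # [90, 0]: bidding against a stopper pays
print(play([always_bid, always_bid]))    # escalation: both players lose
```

The first run illustrates the abstract's point: against an agent who always stops, continuing to bid wins the prize for the cost of a single bid, so bidding is a best response there.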
When Computer Chess meets Science
Soren Riis
(Queen Mary, University of London) In 1997 Deep Blue won a six-game match against
world champion Garry Kasparov in which, according to Kasparov,
Deep Blue showed signs of "human intelligence"; he suggested
foul play through human intervention. How was it possible
that a number cruncher like Deep Blue could produce "real
intelligence"?
How far are computers from playing chess at a perfect level?
In the talk new and surprising answers to this question will
be discussed. The talk gives examples of how chess computers
have revolutionised ordinary endgame theory. In the talk, I
will also discuss interesting limitations of game-playing
computers. Do these limitations highlight the uniqueness of
human intelligence?
The talk will also relate Computer Chess to topics like
innovation, reverse engineering, incentives to copy, market
leadership, and other currently important concepts.
The talk is aimed at a general audience interested in
understanding automated reasoning, as well as the limits and
strengths of human intelligence.
Is there always an algorithm? An introduction to computability theory
Rick Thomas
(University of Leicester) Computability theory is concerned with the
following type of question: given a problem, does there exist
an algorithm to solve that problem? This is of fundamental
importance in computing; there is no point trying to design an
algorithm to solve a problem if no such algorithm exists!
In this talk we will give a gentle introduction to
computability theory. We will give some idea of the history
of the subject, mentioning some of the key figures involved;
as we will see, this sort of question preceded the modern
computer. We will then give some examples of problems for
which no algorithm exists.
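The flavour of such non-existence results can be conveyed by the classic diagonal argument for the halting problem, sketched below (the function names are my own):

```python
# Sketch of the diagonal argument: if a total halting decider `halts`
# existed, we could build a program g that does the opposite of
# whatever `halts` predicts about g itself.

def diagonal(halts):
    """Build a program on which the claimed decider `halts` must be wrong."""
    def g():
        if halts(g):
            while True:      # predicted to halt -> loop forever
                pass
        # predicted to loop -> halt immediately
    return g

def refute(halts):
    g = diagonal(halts)
    if halts(g):
        return "decider says g halts, but g would loop forever"
    g()                       # halts immediately, contradicting the decider
    return "decider says g loops, but g just halted"

print(refute(lambda f: True))
print(refute(lambda f: False))
```

Whatever the candidate decider answers, it is wrong on at least one input, so no such decider can exist.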
Chris Watkins
(Royal Holloway, University of London) Can we find a model of evolution which is simple
enough to analyse, and yet which may capture a significant
part of evolution's remarkable computational power? Evolution
is naturally modelled as a Markov chain, in which states are
generations, and the state transitions are inherent in the
procedures for breeding, selection, and mutation. Both
natural evolution and most genetic algorithms implement
irreversible Markov chains that are hard to analyse.
The main contribution outlined in this talk is to define a
`sexual' evolutionary algorithm that is a reversible Markov
chain (with detailed balance) for which the mutation-selection
equilibrium can be defined explicitly as a Gibbs distribution
with a readily computable energy function.
These ideas are fairly simple: it is perhaps odd that genetic
algorithms were not originally defined in this way.
I will outline several unexpected research directions that
flow from this work. One is that modern MCMC methods devised
for Bayesian inference can be applied to define Markov chains
that provably converge to the same distribution, but which may
mix faster than Gibbs sampling. (One might call these
`super-evolutionary' algorithms.)
I will also review some familiar, and perhaps some
unfamiliar, evolutionary puzzles.
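As a hedged illustration of the Gibbs-equilibrium idea (a toy example of my own, not Watkins' actual algorithm), a Metropolis mutation step on bit-string genomes satisfies detailed balance, so its stationary distribution is the Gibbs distribution exp(-E(g)/T)/Z for an explicit energy function:

```python
import math
import random

# Toy reversible "evolution": point mutations accepted by the
# Metropolis rule. Detailed balance holds by construction, so the
# equilibrium is the Gibbs distribution for the energy below.

def energy(genome):
    return -sum(genome)      # toy fitness: lower energy = more 1-bits

def metropolis_step(genome, T=1.0, rng=random):
    i = rng.randrange(len(genome))       # propose a point mutation
    mutant = genome[:]
    mutant[i] ^= 1
    dE = energy(mutant) - energy(genome)
    if dE <= 0 or rng.random() < math.exp(-dE / T):
        return mutant                    # accept
    return genome                        # reject: stay put

rng = random.Random(0)
g = [0] * 20
for _ in range(2000):
    g = metropolis_step(g, T=0.5, rng=rng)
print(sum(g))   # at low temperature the chain concentrates near the all-ones genome
```

This is asexual mutation only; the talk's contribution concerns a `sexual' (recombining) algorithm with the same reversibility property.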
Tight Cell-Probe Bounds for Online Hamming Distance Computation
Benjamin Sach
(University of Warwick) In this talk we will discuss recent work on
time complexity lower bounds in the cell-probe model. The talk
will start with an overview of the cell-probe model and a
discussion of the main techniques for proving lower bounds in
this model. Lower bounds in the cell-probe model also hold in
many other models, including the widely accepted RAM model.
The main focus of the talk will be recent work where we showed
tight bounds for online Hamming distance computation in the
cell-probe model with word size w. The task is to output the
Hamming distance between a fixed string of length n and the
last n symbols of a stream. We give a lower bound of
Omega((d/w)*log n) time on average per output, where d is the
number of bits needed to represent an input symbol. We argue
that this bound is tight within the model. The lower bound
holds under randomisation and amortisation. This is joint work
with Markus Jalsenius and Raphael Clifford from the University
of Bristol, UK, which appeared in SODA 2013.
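For orientation, the naive baseline for this problem is straightforward (an assumed illustration, not the authors' algorithm): recompute the distance over a sliding window in O(n) time per output, against which the Omega((d/w) log n) lower bound should be read:

```python
from collections import deque

# Naive online Hamming distance: after each arriving symbol, compare
# the fixed pattern of length n against the last n stream symbols.
# Cost: O(n) work per output.

def online_hamming(pattern, stream):
    n = len(pattern)
    window = deque(maxlen=n)       # keeps only the last n symbols
    for symbol in stream:
        window.append(symbol)
        if len(window) == n:
            yield sum(p != s for p, s in zip(pattern, window))

print(list(online_hamming("abc", "aabcbc")))   # [2, 0, 3, 1]
```

The gap between this O(n)-per-output baseline and the lower bound is what makes the tightness result interesting.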
The Art of Decision Making in Software Engineering
Joost Noppen
(University of East Anglia) The development of software systems, like any
other complex development process, is driven by decisions on
all kinds of subjects, ranging from how to interpret
requirements and what budget is available to which
architecture to build and which languages and implementation
platforms to use. Naturally good decisions lead to good
software systems, but the question is how to make good
decisions. And what exactly is a good decision, or a good
software system for that matter? In this presentation I will
examine decision making and how it is done in software
engineering. In particular I will focus on the aspects that
make it difficult to come to good decisions, such as partial
knowledge, ambiguous information or a lack of experience and
insight. I will tie this to research efforts in software
engineering at the University of East Anglia to illustrate how
we believe computational analysis of various types of
information can support developers in their decision making.
An introduction to Homotopy Type Theory
Nicola Gambino
(University of Palermo and University of Leeds) Type theories are the core of computer systems,
such as Coq and Agda, that provide both a tool for
computer-assisted formalisation of mathematics and an
integrated environment for writing computer programs and
verifying their correctness. Research on type theories
has received new impetus in recent years thanks to the
discovery of fascinating connections with homotopy theory, a
branch of pure mathematics concerned with the classification
of spaces up to a suitable notion of similarity. I will
give an introduction to this area of research without assuming
any knowledge of type theory or homotopy theory. In
particular, I will explain how the connections between type
theory and homotopy theory led to the formulation of
Voevodsky's Univalent Foundations of Mathematics programme, a
new approach to the computer-assisted development of
mathematics in type theory.
Heterogeneous Asynchronous Networks of Timed Machines
Jose Fiadeiro
(Royal Holloway)
We present a component algebra for asynchronous networks of
timed machines. A machine is a discrete timed input/output
automaton that executes according to the clock granularity
of the network node in which it is placed. We define a
composition operator through which networks can be
interconnected and investigate properties of such
networks, namely conditions that guarantee consistency and
feasibility, i.e., that the interconnected machines can
collectively generate a nonempty behaviour, no matter in
what environment they are placed.
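A much-simplified toy reading of the clock-granularity issue (my own illustration, not the paper's component algebra): two interconnected machines can only exchange values at instants where both of their clocks tick:

```python
from math import lcm

# Toy model: a machine on a node with clock period p ticks at times
# 0, p, 2p, ... Two interconnected machines can synchronise only at
# common ticks, i.e. multiples of the least common multiple of their
# periods (requires Python 3.9+ for math.lcm).

def shared_ticks(period_a, period_b, horizon):
    step = lcm(period_a, period_b)
    return list(range(0, horizon + 1, step))

print(shared_ticks(2, 3, 24))   # [0, 6, 12, 18, 24]
```

The paper's consistency and feasibility conditions are, informally, guarantees that such synchronisation opportunities suffice for the machines to generate a joint non-empty behaviour.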
XML on the Web: Is it still relevant?
O'Neil Delpratt
(Saxonica) In this talk we discuss what is meant by the
term XML on the Web and how it relates to the browser. The
success of XSLT in the browser has so far been underwhelming,
and we examine the reasons for this and consider whether the
situation might change. We describe the capabilities of the
first XSLT 2.0 processor, namely Saxon Client Edition
(Saxon-CE), which is designed to run within web browsers,
bringing not just the extra capabilities of a new version of
XSLT, but also a new way of thinking about how XSLT can be
used to create interactive clientside applications. We
describe two use-case applications to illustrate the
capabilities of Saxon-CE. The first is a technical
documentation application, which permits browsing and
searching in an intuitive way. We then present a multiplayer
chess game application; using the same XSLT 2.0 processor as
the first application, it is in fact very different in purpose
and design in that we provide multi-user interaction on the
GUI and implement communication via a social media network:
namely Twitter.
Coverage Analysis for Model Transformations
Jochen Kuester
(IBM Research) Model transformations are one important aspect
of model-driven software engineering as they can be used for
model evolution, model generation, model comparison, model
migration and model refinement. As a consequence, the quality
of model transformations must be systematically ensured. When
developing a model transformation, a software engineer usually
designs a test suite consisting of test cases where each test
case consists of one or several models. In order to ensure a
high quality of such a test suite, coverage achieved by the
test cases with regard to the system under test must be
systematically measured. Using results from coverage analysis,
missing test cases and redundant test cases can be identified
and thereby the quality of the test suite and the model
transformation can be improved. In this talk, we present a
coverage analysis approach for measuring test suite quality
for model transformations and we also report on the Test Suite
Analyzer tool which implements our approach.
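A hedged sketch of the coverage idea (a toy model of my own, not the Test Suite Analyzer itself): record which transformation rules each test case fires, then report rules never exercised and test cases that are individually redundant (each could be dropped alone without losing rule coverage, though not necessarily all of them at once):

```python
# Toy coverage analysis for a rule-based model transformation.

def analyse(rule_traces, all_rules):
    """rule_traces: test-case name -> set of rule names it fired."""
    covered = set().union(*rule_traces.values())
    missing = set(all_rules) - covered
    redundant = []
    for name, fired in rule_traces.items():
        others = set().union(*(f for t, f in rule_traces.items() if t != name))
        if fired <= others:            # every rule also fired elsewhere
            redundant.append(name)
    return missing, redundant

traces = {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r1", "r3"}}
missing, redundant = analyse(traces, {"r1", "r2", "r3", "r4"})
print(missing, redundant)   # {'r4'} ['t1', 't2']
```

Here r4 is never exercised (a missing test case should be added), while t1 and t2 each add no coverage beyond the remaining tests.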
Semester 1, 2012/13: The seminar programme for Semester 1 can be viewed here.

Author: Alexander Kurz (kurz mcs le ac uk), T: 0116 252 5356. 