Computational Cognitive Science

A list of potential topics for PhD students in the area of Computational Cognitive Science.

Neural Network Models of Human Language and Visual Processing

Supervisor: Frank Keller

Recent neural models have used attention mechanisms as a way of focusing the processing of a neural network on certain parts of the input. This has proved successful for diverse applications such as image description, question answering, and machine translation. Attention is also a natural way of understanding human cognitive processing: during language processing, humans attend to words in a certain order; during visual processing, they view image regions in a certain sequence. Crucially, human attention can be captured precisely using an eye-tracker, a device that measures which parts of the input the eye fixates, and for how long. Projects within this area will leverage neural attention mechanisms to model aspects of human attention. One example is reading: when reading text, humans systematically skip words, spend more time on difficult words, and sometimes re-read passages. Another example is visual search: when looking for a target, humans make a sequence of fixations which depends on a diverse range of factors, such as visual salience, scene type, and object context. Neural attention models that capture such behaviours need to combine different types of knowledge, while also offering a cognitively plausible story of how such knowledge is acquired, often based on only small amounts of training data.
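As a concrete illustration, the following sketch (Python/NumPy, with toy data; the function name and dimensions are purely illustrative, not part of any specific project) computes scaled dot-product attention weights over the words of a sentence. In a cognitive model, such a distribution over words could be compared to human fixation probabilities recorded with an eye-tracker.

    # Minimal sketch: scaled dot-product attention over word embeddings.
    # The resulting weights form a distribution over input words that a
    # cognitive model could compare against eye-tracking fixation data.
    import numpy as np

    def attention_weights(query, keys):
        """One attention step: a distribution over the input words."""
        scores = keys @ query / np.sqrt(query.shape[0])  # similarity scores
        scores = scores - scores.max()                   # numerical stability
        weights = np.exp(scores)                         # unnormalized softmax
        return weights / weights.sum()

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(7, 16))  # 7 words, 16-dim embeddings (toy)
    state = rng.normal(size=16)            # current model/reader state (toy)
    print(attention_weights(state, embeddings))  # e.g. where to fixate next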

Topics in morphology (NLP or cognitive modelling)

Supervisor: Sharon Goldwater

Many NLP systems developed for English ignore the morphological structure of words and (mostly) get away with it. Yet morphology is far more important in many other languages. Handling morphology appropriately can reduce sparse data problems in NLP, and understanding human knowledge of morphology is a long-standing scientific question in cognitive science. New methods in both probabilistic modelling and neural networks have the potential to improve word representations for downstream NLP tasks, and perhaps to shed light on human morphological acquisition and processing. Projects in this area could involve combining distributional syntactic/semantic information with morphological information to improve word representations for low-resource languages or sparse datasets, evaluating new or existing models of morphology against human behavioural benchmarks, or related topics.
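As a rough illustration of one way morphological structure can enter word representations, the Python sketch below (toy vectors and segmentations, not a proposed method) composes a word vector from morpheme vectors, so that rare inflected forms share parameters with their stems:

    # Minimal sketch: a word vector composed from morpheme vectors, so
    # that "walked" and "walking" share the representation of their stem.
    import numpy as np

    morpheme_vecs = {
        "walk": np.array([1.0, 0.0, 0.0]),
        "ed":   np.array([0.0, 1.0, 0.0]),
        "ing":  np.array([0.0, 0.0, 1.0]),
    }

    def word_vector(morphemes):
        """Sum the vectors of a word's morphemes (toy composition)."""
        return sum(morpheme_vecs[m] for m in morphemes)

    print(word_vector(["walk", "ed"]))   # "walked"
    print(word_vector(["walk", "ing"]))  # "walking"

In a real model the composition function would be learned rather than a plain sum, but even this additive scheme shows how morphological decomposition lets sparse word forms borrow statistical strength from related forms.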

Cognitive models of speech perception and learning

Supervisor: Sharon Goldwater

Deep speech models are the first artificial systems to come anywhere close to human performance at recognizing the words in fluent speech. In some ways, they are clearly different from human learners: for example, in the quantity and type of training data they receive, and in the details of their learning mechanisms and architectures. Yet in other respects these models share similarities with humans: for example, in using prediction as a learning signal and in using high-dimensional distributed representations. Projects in this area will explore the extent to which deep speech models are useful tools for understanding human speech processing; the kinds of representations that support successful speech processing more generally; and what allows those representations to arise during learning. We will look at results from human behavioural and/or brain imaging studies and consider how models might be able to provide additional insights, for example by manipulating models in ways that may not be possible with human participants.
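To make "prediction as a learning signal" concrete, here is a minimal Python sketch (toy autoregressive data, not a real speech model) that trains a linear predictor of the next frame of speech-like features, loosely in the spirit of predictive objectives such as APC or CPC:

    # Minimal sketch: learn to predict the next feature frame, using the
    # prediction error as the learning signal. The "frames" are a toy
    # AR(1) sequence standing in for acoustic features (e.g. MFCCs).
    import numpy as np

    rng = np.random.default_rng(0)
    frames = np.zeros((100, 13))
    for t in range(1, 100):
        frames[t] = 0.9 * frames[t - 1] + 0.1 * rng.normal(size=13)

    W = np.zeros((13, 13))  # linear next-frame predictor
    lr = 0.5
    for _ in range(300):
        pred = frames[:-1] @ W                     # predict frame t+1 from frame t
        err = pred - frames[1:]                    # prediction error
        W -= lr * frames[:-1].T @ err / len(err)   # gradient step on MSE

    print("mean squared prediction error:", float((err ** 2).mean()))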