CoSTAR National Lab Doctoral Programme

Applications now closed for CoSTAR PhDs starting in September/October 2025

CoSTAR National Lab is pleased to offer funding for up to 9 PhD students. 

We welcome applications from talented individuals to be based at one of three higher education institution (HEI) project partners: Royal Holloway, University of London; the University of Surrey/Surrey Institute for People-Centred AI; and Abertay University. The world-class research and training environments of these institutions are enhanced by the creative industries partners of the CoSTAR National Lab, including Pinewood Studios, BT, Disguise, and the National Film & Television School. We are looking for PhD candidates who demonstrate academic excellence and a passion for research, and who can work effectively as part of a team.

Applicants should contact the Lead Supervisors and work with them to develop an Expression of Interest by Friday March 14th, 2025. More information on how to apply and on funding can be found below.

Applications will open in Autumn 2025 for a September/October 2026 start. 

 

The Six CoSTAR Futures

These PhD topic areas align with the wider CoSTAR National Lab research programmes, called ‘Futures’, which address challenges set by industry and help to build research connections across these areas:

Creative Futures takes the best of sector creativity to enable the application of emergent technologies to current and future opportunities in screen and performance, allowing storytelling to reach into the world and help shape our understanding of it. 

Business Futures focuses on developing our understanding of ‘life-centric’ experiences for customers, adapted to their ever-changing needs and priorities, and allowing customers to co-create value and personalised services. 

AI Futures will embed cutting-edge and foundational AI into creative industry pipelines, helping to transform the creation, production, delivery, and personalised experience of media content, providing more intuitive and creative control.  

Createch Futures seeks rich, distributed and connected interactive virtual environments, advancing real-time rendering and simulation in virtual production and the associated real-time workflows, optimisation and Generative AI. 

User Futures applies the understanding of human factors, human cognition, emotion and user preferences to the creation of inclusive, accessible, intuitive and engaging technologies and experiences of lasting value. 

Inclusive Futures explores principles of inclusive innovation and social justice in creative technology for marginalised users, seeking universal accessibility through distributed, democratised, and sustainable advanced production tools and methods. 

 

Funding

There are 18 PhD opportunities available; of these, up to 9 candidates will be offered places once interviews have taken place. 

These opportunities are for UK-based students only, with the exception of those listed for Abertay University.  

Awards cover UK tuition fees and provide a stipend at the UKRI rate for a period of 3 years. The UK fees and stipend for 2025/26 have not yet been announced. For the academic year 2024/25, the UK fees (paid directly to the institution) are £4,786, and the stipend (a tax-free maintenance payment based on the UK Research and Innovation minimum rate) is £20,780 from 1 October 2025 (£22,780 if London weighting applies).  

As a doctoral student, you may be able to access additional funding to cover the cost of other related training and development opportunities.  

 

Applying

Stage 1: Please contact the Lead Supervisor of the PhD opportunity via email. You will then need to work with the Lead Supervisor and supervisory team to develop and send your CV and a 500-word Expression of Interest by Friday March 14th, 2025.  

Please label your files clearly with your surname plus the PhD ID reference code, for example: SURNAME_RHUL1. Please quote the relevant PhD topic in all correspondence.  

Stage 2: Interviews will be held before Thursday March 27th, 2025. 

Stage 3: Offers will be made by Friday April 25th, 2025.

*Incomplete or late applications will not be considered*  

 

The Research Areas

The PhD topics are listed below by institution. Lead Supervisors and co-supervisors will be provided by the host institution. The supervision team will also include Expert Mentors from partner institutions, bringing additional expertise from across CoSTAR.

Royal Holloway Opportunities

Lead Supervisor: Dr. Andy Woods Andy.Woods@rhul.ac.uk

Futures: Users

Emerging technologies are creating new and innovative ways to stimulate the senses during live events and experiences. The successful candidate will contribute to the Multisensory Experience pillar led by the Users team by investigating how these new sensory technologies impact the creation of 'wow'. This work will be grounded in established theories of multisensory integration and will build on industry-leading research (https://www.storyfutures.com/resources) undertaken by the Users team. By systematically including and excluding each technology, the individual effect of each technology may be isolated and thus quantified. This project would be suitable for applicants with a strong academic background in Experimental Psychology or related disciplines.
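
As a rough illustration of the "systematically including and excluding each technology" idea, the minimal Python sketch below enumerates a full factorial design over a set of placeholder sensory technologies; the technology names and the design itself are assumptions made for this example, not part of the project brief.

    # Illustrative only: enumerate a full factorial design in which each
    # sensory technology is switched on or off, so its individual contribution
    # can later be estimated. Technology names are placeholders.
    from itertools import product

    TECHNOLOGIES = ["spatial_audio", "haptics", "scent"]  # hypothetical examples

    def factorial_conditions(technologies):
        """Yield every on/off combination of the listed technologies."""
        for flags in product([False, True], repeat=len(technologies)):
            yield dict(zip(technologies, flags))

    if __name__ == "__main__":
        for condition in factorial_conditions(TECHNOLOGIES):
            print(condition)  # 2**3 = 8 experimental conditions

Running this prints the eight on/off combinations a study of three technologies would need to cover.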

Lead Supervisor: Dr. Laryssa Whittaker Laryssa.Whittaker@rhul.ac.uk

Futures: Users

This project focuses on how audiences experience liveness in virtual music events (real-time with avatars or metahumans) or hybrid events (with in-person and virtual or technology-enhanced elements) to understand the impact of technology on the live music experience. The successful candidate will have the opportunity to conduct research with audiences and artists at such events through CoSTAR projects on the future of live music and the ethical use of AI in performance. Potential areas of focus might include: understanding cultures of performance and the similarities/differences in how liveness is experienced in different genres and scenes as they move into digital spaces; how the affordances of digital spaces influence the establishment of performer/audience connections; innovation in affective and narrative communication; and innovation in methods for enhancing the experience of the digital as live. This opportunity might be of interest to candidates with backgrounds in subjects such as Music, Psychology, Sociology, Game Design or Human-Computer Interaction.

Lead Supervisor: Prof. Angela Chan Angela.Chan@rhul.ac.uk

Futures: Inclusive / Users

This PhD explores the use of assistive technologies in emerging performance environments. At an exciting time for convergent screen experiences, technology can be used to enhance the experiences of marginalised audiences, but it can also present novel forms of exclusion. Working with participatory methodologies and a range of user groups, this research will investigate the potential risks and benefits of deploying immersive technologies and AI. The PhD can be approached from either a Media Arts or a software engineering perspective, depending on the skill set of the selected applicant.

Lead Supervisor: Prof. Jen Parker-Starbuck Jen.Parker-Starbuck@rhul.ac.uk

Futures: Worldbuilding

A practice-based study of storytelling in virtual production across any or all of the domains of film, theatre and live performance where advanced production is being used or developed. The research employs ethnographic and other methods to record and reflect on the practices and techniques of storytelling and worldbuilding in VP, whether narrative-led or technologically driven.

Lead Supervisor: Prof. Peter Richardson Peter.Richardson@rhul.ac.uk

Futures: Worldbuilding / Createch

Auto-matte: high-quality, intelligent automatic rotoscoping of foreground (FG) elements against volumes in advanced production. This research explores and defines a key virtual production research question, set by director and VFX expert Paul Franklin, and offers researchers the opportunity to work with leading-edge real-time technology practitioners, including experts at Disguise.

Lead Supervisor: Prof. Mark Lycett Mark.Lycett@rhul.ac.uk

Futures: Business

This project aims to investigate (and develop) future business models and operational processes for creative organisations in the face of emerging technologies. Technologies such as AI, immersive, and blockchain, set against changes in demography, evolving consumer tastes, and the like, create challenges related to value capture, value creation, and the value propositions creative companies offer. Drawing on several in-depth case studies, the work will explore new business models that open new opportunities for creative organisations to thrive.

Lead Supervisor: Prof. Mark Lycett Mark.Lycett@rhul.ac.uk

Futures: Business

This project aims to explore the future of digital identity, specifically, how people present themselves—and are represented—in immersive digital environments. Building on the notion of ‘self-sovereign identity’ (and related concepts such as ‘verifiable credentials’ and ‘decentralised identifiers’), this work will conceptualise, implement, and test governance models and smart contract designs to manage digital assets, specifically the rights associated with people’s digital (i.e., audio-visual) likenesses.

Lead Supervisor: Prof. Mark Lycett Mark.Lycett@rhul.ac.uk

Futures: Business

This project explores the development and use of design fictions as an effective means of prototyping immersive experiences. Drawing heavily on emerging generative AI models, the work will explore the creative possibilities of design fictions, examining aspects such as: (a) boundary conditions and constraints on effectiveness, such as context and technical sophistication; (b) audience effects, such as narratives influencing audiences’ perception, engagement, and/or feedback; and (c) knowledge generation, exploring the concrete insights that emerge alongside a more critical view of design fictions as a methodology.

Lead Supervisor: Prof. Nuno Barreiro Nuno.Barreiro@rhul.ac.uk

Futures: Createch / Standards

The project will focus on the interoperability between game engines. This is a challenging problem, as each game engine has specificities that are hard, if not impossible, to translate across different platforms. As the state of the art stands, each application is created on a particular platform at a particular moment in time, with future developments requiring bespoke translation and adaptation. As a consequence, past investments and assets, which are potentially viable for other projects, frequently have to be discarded and recreated from scratch, causing significant losses and inefficiencies. By exploring the use of an open-source game engine (e.g., Godot), this project will establish an open pipeline for the creation of immersive content, including real-time and AI co-pilots, which will have a good level of interoperability with other game engines, namely Unreal Engine. 

Some of the potential research topics are (see the sketch after this list):

  • Adapt plugins from Unreal Engine to an open-source game engine
  • Translate projects across game engines
  • Develop an API to integrate AI tools via plugins
  • Integrate audio standards into an open-source game engine
  • Establish synchronisation mechanisms across several renders (e.g., cloud sync)
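
As a purely illustrative sketch of one step such an open pipeline might contain, the Python fragment below reads a hypothetical JSON manifest of assets exported from Unreal Engine and proposes a portable interchange target (e.g., glTF) for each asset, flagging anything that would need bespoke porting. All file formats, field names and mappings here are assumptions made for the example, not part of the project description.

    # Illustrative only: a hypothetical manifest-driven mapping step for an
    # engine-agnostic asset pipeline. Formats, field names and targets are
    # assumptions made for this example.
    import json
    from pathlib import Path

    # Assumed mapping from Unreal-style asset kinds to interchange formats
    # that an open-source engine such as Godot can import directly.
    INTERCHANGE_TARGETS = {
        "StaticMesh": "glTF 2.0 (.glb)",
        "SkeletalMesh": "glTF 2.0 with skin/animation (.glb)",
        "Texture2D": "PNG/EXR",
        "SoundWave": "WAV/OGG",
    }

    def plan_translation(manifest_path):
        """Propose an interchange target for each asset in a (hypothetical)
        exported manifest, flagging assets with no portable equivalent."""
        assets = json.loads(Path(manifest_path).read_text())
        return [
            {
                "name": a["name"],
                "source_type": a["type"],
                "target": INTERCHANGE_TARGETS.get(
                    a["type"], "no portable equivalent - manual port required"
                ),
            }
            for a in assets
        ]

    if __name__ == "__main__":
        # Example manifest an exporter might produce; purely illustrative.
        Path("manifest.json").write_text(json.dumps([
            {"name": "SM_Rock", "type": "StaticMesh"},
            {"name": "FX_Smoke", "type": "NiagaraSystem"},
        ]))
        for row in plan_translation("manifest.json"):
            print(row)

In practice, anything that maps cleanly to an open interchange format could be translated automatically, while engine-specific systems (particles, sequencers) would need the bespoke adapters the project proposes to research.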

Lead Supervisor: Prof. Nuno Barreiro Nuno.Barreiro@rhul.ac.uk 

Futures: Createch / Standards

The widespread use of immersive audio is hampered by inconsistently delivered experiences and by the labour of converting between one configuration in an author’s production environment and an alternative spatial arrangement in a theatre preview or in a performance space, such as a Virtual Production studio. To progress beyond conventional production that is locked into fixed arrangements or time-consuming bespoke installations, we envisage a turnkey advanced audio system that is positioned, calibrated, synchronised and configured to be compatible with rendering from pre-prepared audio and audio-visual assets, automation and interactivity. The project will investigate the integration and extension of standards for loudspeaker and microphone arrangements, acoustical measurement, audio-over-IP networking, exchange between channel-based/ambisonics and object-based contributions, audio FX, real-time interaction, and configuration to production-specific settings. The focus is on breaking down interoperability barriers between the configurations and workflows practised across industry sectors (e.g., broadcast/streaming, games/film/music) to bring new affordances to the performance context, including virtual and active 3D sound. This project will provide the ground for in-situ experiments linked to the proposal “Creating and refining immersive, adaptive audio tools to enable real-time interaction with music and sound in VP” from Abertay University.

Lead Supervisor: Prof. Adam Ganz Adam.Ganz@rhul.ac.uk 

Futures: Worldbuilding / Inclusive

A critical and practice-led PhD driving inquiry into how convergent performance can be used for collective, creative and multisensory experiences that enhance inclusion by fostering perspective-taking. The PhD will aim to investigate how immersive and AI technologies can be built into creative practice, developing research methodologies that foster participatory approaches to screen-based performance. By combining advancements in AI-generated virtual environments, dynamic soundscapes, and storytelling tools, the researcher will seek to develop new frameworks for interactive experiences that bridge screen and performance disciplines. The research will take a case-study approach, working alongside CoSTAR's Worldbuilding team, exploring uses of real-time generative AI in performance and developing improvisatory forms (such as Live Action Role Playing) as part of the research methodology.

Surrey Opportunities

Lead Supervisor: Dr. Marco Volino m.volino@surrey.ac.uk

Futures: AI / Createch 

This research aims to address the challenges of single-view human reconstruction in large, uncontrolled capture volumes, where factors such as occlusion, lighting variation, and low resolution degrade performance. The primary focus will be on developing robust learnt representations of humans, leveraging the latest advances in AI and data generation techniques to accurately reconstruct human shape and appearance from monocular inputs.

Expected contributions include:

  • Temporally stable single view human geometry reconstruction from unconstrained low-resolution video 
  • A learnt model of human appearance from unconstrained low-resolution video
  • A unified model of human shape and appearance from unconstrained low-resolution video 

Ultimately, this work aims to democratise human reconstruction technology by enabling its deployment in settings with minimal resources, dramatically lowering barriers to entry, and broadening its impact across industries and communities.

Lead Supervisor: Dr. Armin Mustafa armin.mustafa@surrey.ac.uk

Futures: AI / Createch 

Monocular 4D reconstruction of social scenes is an open, challenging problem because the dynamic elements in social scenes change their shape, location, lighting, and backgrounds over time, making it extremely difficult to track the 3D points of each person/object through time. 3D models will be estimated and extended to learn per-pixel motion in dynamic scenes, using pre-annotated sparse temporal 2D/3D labels for 4D reconstruction from monocular video. This captures both spatial (3D) and temporal (4D) evolution for a deeper understanding of dynamic interactions. Current per-pixel motion estimation methods often suffer from unreliable correlations and accumulated inaccuracies. Diffusion models can enhance correlation reliability and resilience to noise, thanks to their intrinsic denoising, and are well suited to modelling long-range dependencies for spatial-temporal 4D semantic reconstruction of videos with multiple interacting people. A novel uncertainty-aware diffusion probabilistic model will learn temporally consistent features for 4D temporal correspondence. A novel temporal consistency loss and hybrid representation will allow the processing of multiple video frames to create temporally coherent 4D reconstructions. 4D temporal reasoning will be integrated into the model using graphs to model relationships between people and objects, capturing both short-term and long-term dynamics for temporally coherent reconstruction. 

Lead Supervisor: Dr. Jean-Yves Guillemaut j.guillemaut@surrey.ac.uk

Futures: AI / Createch 

This PhD project will investigate and develop novel approaches for neural rendering of dynamic scenes, with an emphasis on high-fidelity editing and rendering of human performance suitable for use by the creative industries. Recent advances in neural rendering have opened the possibility of achieving photorealistic manipulation of image content; however, the extension to the video domain remains challenging due to the need to maintain temporal coherence and due to scalability issues. Research in this project will investigate how generative AI techniques can be tailored to enable manipulation of human performance from monocular video input for applications such as relighting, shadow casting, appearance/material editing and real-time compositing. The PhD will provide an opportunity to interface with industry partners to collaboratively evaluate the tools developed and contribute to their integration into a production pipeline.

Lead Supervisor: Prof. Yi Zhe Song y.song@surrey.ac.uk 

Futures: AI / Createch / Inclusive

This PhD explores novel interfaces for video generation through intuitive storyboarding approaches, enabling users to create high-quality video content from simple sketched keyframes. The research will develop multimodal control systems that allow creators to guide video generation through a combination of rough sketches, text descriptions, and gesture inputs, transforming traditional storyboarding workflows into interactive real-time video creation tools. This work aims to democratise video production by bridging the gap between simple visual ideation and sophisticated AI-driven video synthesis.

Abertay University Opportunities

Lead Supervisor: Erin Hughes e.hughes@abertay.ac.uk 

Futures: Createch / AI

This PhD research aims to push the boundaries of neural rendering by adapting advanced techniques, such as neural ray reconstruction and importance sampling, beyond their current applications in ray reconstruction and frame generation. The focus will be on extending these technologies to other critical areas of reconstruction and generation, such as texture enhancement and more sophisticated dynamic lighting and shadow effects. By leveraging AI-driven methods, this PhD seeks to improve the realism and efficiency of 3D scene rendering, particularly in applications like virtual reality (VR), augmented reality (AR), and real-time visual effects (VFX). This work will contribute to the development of more immersive and interactive virtual environments, providing advancements in the fields of computer graphics and interactive visual computing. 

Lead Supervisor: Dr. Christos Michalakos c.michalakos@abertay.ac.uk

Futures: Createch / Standards

The research will focus on gestural interaction systems and procedural audio generation, advancing the ability for creators to manipulate sound and music dynamically within virtual worlds.

The gestural component will investigate the use of motion tracking, hand gestures, and embodied interactions as intuitive tools for shaping soundscapes and musical structures. These systems will allow users to "perform" sound and music interactively within virtual spaces, leveraging the precision and responsiveness of cutting-edge XR platforms. By incorporating adaptive machine learning algorithms, the tools will respond in real-time to user movements, dynamically modifying audio textures, spatial placement, and musical patterns based on user intent and environmental context. 

On the audio side, this research will build advanced sound generation pipelines that produce dynamic soundscapes and music tailored to the needs of virtual production. These systems will include the generation of context-aware ambient sound, real-time music composition, and seamless integration with visual effects pipelines (Unreal Engine & open source), which can be demonstrated, for example, within the Futures Studio. Such tools will offer filmmakers, game designers, and XR developers highly customisable and responsive audio solutions that adapt to the narrative and aesthetic demands of their projects.
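
As a toy illustration of how a gestural control layer could drive procedural audio parameters, the Python sketch below maps a single, hypothetical hand-tracking sample to synthesis and mixing parameters; the gesture fields, parameter names and ranges are invented for the example and do not describe the project's actual design.

    # Illustrative only: a toy mapping from tracked hand data to procedural
    # audio parameters. Gesture fields and parameter names are assumptions.
    from dataclasses import dataclass

    @dataclass
    class HandSample:
        height: float   # normalised hand height, 0..1 (e.g. from XR hand tracking)
        spread: float   # normalised finger spread, 0..1
        speed: float    # normalised hand speed, 0..1

    def map_gesture_to_audio(hand):
        """Map one hand-tracking sample to synth/mixer parameters: raising the
        hand opens a filter, spreading the fingers widens the stereo image,
        faster movement increases rhythmic density."""
        return {
            "filter_cutoff_hz": 200.0 + hand.height * 7800.0,    # 200 Hz .. 8 kHz
            "stereo_width": hand.spread,                         # 0 = mono, 1 = wide
            "note_density_per_bar": round(1 + hand.speed * 15),  # 1 .. 16 events
        }

    if __name__ == "__main__":
        print(map_gesture_to_audio(HandSample(height=0.8, spread=0.4, speed=0.2)))

In a real system the mapping would be learnt and adapted rather than hard-coded, which is precisely where the adaptive machine learning described above comes in.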

Lead Supervisor: Dr. Laith Al-Jobouri l.al-jobouri@abertay.ac.uk 

Futures: Createch

The convergence of 5G, edge computing, cloud and XR technologies is still in its early stages, but it holds great potential for creating more democratised, interactive and collaborative virtual production environments. For example, by leveraging the ultra-low latency and high bandwidth of 5G networks, complex compute-bound volumetric effects and lighting adjustments can be processed and rendered in real time using on-prem or network-edge servers, bringing post-processing from the back room to the live set. This allows for immediate on-set adjustments, providing directors with instant visual feedback and enabling a more dynamic and efficient creative process. The research project will explore contrasting VP compute use cases and develop an overarching, flexible framework for offloading (on-prem edge, network edge, cloud) based on the characteristics of the problem/use case. 
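
As a minimal illustration of the kind of decision such an offloading framework might make, the Python sketch below assigns a workload to an on-prem edge, network edge or cloud tier from a few assumed characteristics (latency budget, compute demand, data volume); the thresholds and fields are invented for the example, not taken from the project.

    # Illustrative only: a toy placement policy for offloading virtual
    # production workloads. Thresholds and fields are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        latency_budget_ms: float   # how quickly the result must reach the set
        gpu_tflops: float          # rough compute demand
        data_gb_per_min: float     # data volume that must move off-set

    def choose_tier(w):
        """Pick an execution tier: tight latency or heavy data stays on-prem,
        moderate latency tolerance suits the network edge, the rest goes to
        the cloud."""
        if w.latency_budget_ms < 20 or w.data_gb_per_min > 50:
            return "on-prem edge"           # e.g. live in-camera adjustments
        if w.latency_budget_ms < 100:
            return "network edge (5G MEC)"  # interactive, tolerant of one hop
        return "cloud"                      # batch simulation or final renders

    if __name__ == "__main__":
        print(choose_tier(Workload(latency_budget_ms=15, gpu_tflops=40, data_gb_per_min=80)))
        print(choose_tier(Workload(latency_budget_ms=500, gpu_tflops=200, data_gb_per_min=5)))

The research would replace such fixed thresholds with a framework that characterises each use case and chooses the tier accordingly.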
