School of Electronic Engineering and Computer Science

CSC PhD Studentships in Electronic Engineering and Computer Science

About the Studentships 

The School of Electronic Engineering and Computer Science at Queen Mary University of London invites applications for several PhD Studentships in specific areas of Electronic Engineering and Computer Science, co-funded by the China Scholarship Council (CSC). CSC provides a monthly stipend to cover living expenses, and QMUL waives fees and hosts the student. These scholarships are available only to Chinese candidates. For details of the available projects, please see below.

 

About the School of Electronic Engineering and Computer Science at Queen Mary 

The PhD Studentship will be based in the School of Electronic Engineering and Computer Science (EECS) at Queen Mary University of London. As a multidisciplinary School, we are well known for our pioneering research and pride ourselves on our world-class projects. We are 8th in the UK for computer science research (REF 2021) and 7th in the UK for engineering research (REF 2021). The School is a dynamic community of approximately 350 PhD students and 80 research assistants working across a number of research groups in several areas, including Antennas and Electromagnetics, Computing and Data Science, Communication Systems, Computer Vision, Cognitive Science, Digital Music, Games and AI, Multimedia and Vision, Networks, Risk and Information Management, Robotics and Theory.

For further information about research in the School of Electronic Engineering and Computer Science, please visit: http://eecs.qmul.ac.uk/research/.

 

Who can apply 

Queen Mary is on the lookout for the best and brightest students. A typical successful candidate:  

  • Holds, or expects to obtain, an MSc in Electronic Engineering, Computer Science, or a closely related discipline
  • Has obtained a distinction or first-class level degree (highly desirable)

Eligibility criteria and details of the scheme 

https://www.qmul.ac.uk/scholarships/items/china-scholarship-council-scholarships.html 

 

How to apply 

Queen Mary is interested in developing the next generation of outstanding researchers and has decided to invest in specific research areas. For further information about potential PhD projects and supervisors, please see below.

Applicants should work with their prospective supervisor and submit their application following the instructions at: http://eecs.qmul.ac.uk/phd/how-to-apply/  

The application should include the following: 

  • CV (max 2 pages)  
  • Cover letter (max 4,500 characters) stating clearly on the first page whether you are eligible for a scholarship as a UK resident (see the eligibility link above)
  • Research proposal (max 500 words) 
  • 2 References  
  • Certificate of English Language (for students whose first language is not English)  
  • Other Certificates  

 

Application Deadline 

The deadline for applications is 31 January 2024.

For general enquiries contact Mrs Melissa Yeo m.yeo@qmul.ac.uk (administrative enquiries) or Dr Arkaitz Zubiaga a.zubiaga@qmul.ac.uk (academic enquiries) with the subject “EECS-CSC 2024 PhD scholarships enquiry”.

 

Supervisor: Dr Ahmed M. A. Sayed

AI/ML systems are becoming an integral part of user products and applications, as well as the main revenue driver for many organizations. This has shifted the focus towards bringing intelligence to where the data are produced, including training models on those data. Existing approaches operate as follows: 1) the data are collected on multiple servers and processed in parallel (e.g., Distributed Data-Parallel); 2) a server coordinates the training rounds and collects model updates from the clients (e.g., Federated Learning); 3) model training is split between the clients and the server (e.g., Split Learning); or 4) the clients coordinate among themselves via gossip protocols (i.e., Decentralized Training). The challenges that manifest themselves are highly heterogeneous learners, configurations and environments; communication and synchronization overheads; fairness and bias; and privacy and security. As a result, existing approaches fail to scale to large numbers of learners and produce models of low quality and high bias after prolonged training times. It is imperative to build systems that provide high-quality models in a timely manner. This project addresses this gap by exploring novel ideas and proposing efficient and scalable ML systems for decentralized data.
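The server-coordinated pattern (2) above can be sketched in a few lines. The quadratic toy objective, function names and hyper-parameters below are illustrative assumptions, not part of the project:

```python
# Minimal sketch of federated averaging (FedAvg) on a least-squares toy problem.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """A client runs a few steps of gradient descent on its private data."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server collects client updates and averages them, weighted by data size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
```

Each round, a client shares only its updated weights, never its raw data; the scalability and heterogeneity challenges listed above appear as the number and diversity of such clients grow.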

Supervisor: Dr Ahmed M. A. Sayed

In the rapidly evolving landscape of artificial intelligence, the development of sophisticated Generative AI and Large Language Models (LLMs) has become pivotal for various applications, ranging from natural language processing to creative content generation. However, the training of these models is computationally intensive, often requiring substantial time and resources. This project will study and propose system and algorithmic optimizations to accelerate the training process for Generative AI and LLMs, addressing the challenges posed by the complexity of these models. The core focus of this research lies in the exploration and implementation of advanced parallel computing techniques, leveraging the power of distributed systems and specialized hardware accelerators. By optimizing algorithms, employing parallelization strategies, and harnessing the capabilities of GPUs, TPUs, or emerging AI-specific hardware, this project aims to significantly reduce the training time of Generative AI and LLMs, making the process more efficient and cost-effective. Furthermore, the study delves into the realm of transfer learning and explores techniques to enhance model convergence and accuracy. By leveraging pre-trained models and developing novel transfer learning methodologies, the research intends to minimize the amount of data and computational resources required for training, thereby democratizing access to cutting-edge AI technologies.
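As one concrete flavour of the transfer-learning direction mentioned above, a low-rank adapter (in the style of LoRA) trains only a small update on top of frozen pretrained weights. The class name, shapes and initialisation below are illustrative assumptions, not the project's method:

```python
# Sketch of a parameter-efficient adapter: frozen weight W plus a trainable
# low-rank update A @ B, so only a small fraction of parameters is trained.
import numpy as np

class LoRALinear:
    def __init__(self, W, rank=4, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                               # frozen pretrained weights
        self.A = rng.normal(0, 0.01, (W.shape[0], rank))
        self.B = np.zeros((rank, W.shape[1]))    # zero init: update starts at 0
        self.scale = scale

    def forward(self, x):
        return x @ (self.W + self.scale * self.A @ self.B).T

    def trainable_params(self):
        return self.A.size + self.B.size
```

Because `B` starts at zero, the adapted layer initially behaves exactly like the pretrained one, and fine-tuning touches only the small `A` and `B` matrices rather than all of `W`.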

Primary supervisors: Professor Akram Alomainy and Dr SaeJune Park

Second supervisor: Dr Riccardo Degl'Innocenti

Free-space microwave/terahertz metamaterials have been intensively investigated over the last couple of decades to develop highly sensitive biosensors, owing to unique applications such as strong coupling and biosensing. However, examining free-space metamaterials becomes challenging when unit-cell-level coupling must be investigated or the volume of the target material is small, because free-space metamaterials rely on an array of unit cells to interact fully with the incident electromagnetic waves. Waveguides, on the other hand, allow us to investigate individual unit cells of the metamaterials (meta-atoms) owing to their in-plane nature. In this project, we will fabricate metamaterial-integrated microwave/terahertz waveguides and seek potential applications of the proposed idea in strong coupling and biosensing. The programme will use the vector network analysers in the Antenna Measurement Laboratory in EECS to investigate the S-parameters of the developed waveguides in both the microwave and sub-terahertz frequency ranges. At the same time, finite-difference time-domain simulations will be performed to confirm the experimental results and to improve our understanding by monitoring the surface current and electromagnetic field distributions near the waveguides.

Primary supervisor: Professor Akram Alomainy and Dr Riccardo Degl’Innocenti

Second supervisor: Dr SaeJune Park

Terahertz graphene devices have the unique potential to deliver significant impact in the field of 6G wireless communication [1] by providing the ultrafast (> GHz reconfiguration speed), efficient response needed for the many applications supported by this technology, e.g. digital twins, holography and telemedicine. In this project we aim to exploit the interplay between graphene, a 2D semimetal whose conductivity can be actively tuned by electrostatic gating, and complex AI-generated metasurfaces capable of engineering a wider range of photonic emission than already explored routes, as schematically presented in Figure 1. These subwavelength objects grant unmatched efficiency, low power consumption and design flexibility. Beam condensers and beam steering devices [2] are needed for efficient directive beam transmission, e.g. compensating for propagation losses, and for out-of-sight wireless transmission. The target is the realization and experimental demonstration of an integrated beam steering device operating between 300 GHz and 1 THz with GHz reconfiguration speed. This project builds on the unique infrastructure and facilities, e.g. VNA and THz-TDS systems, available in the Antennas and Electromagnetics group, as well as on the expertise acquired in designing amplitude [3], frequency [4] and polarization modulators [5, 6] based on this approach.

  1. Elayan et al., IEEE Open J. Commun. Soc. 1, 1-32, 2020.
  2. X. Fu et al., Adv. Optical Mater. 8, 1900628, 2020.
  3. Zaman et al., IEEE Trans. Terahertz Sci. 12(5), 520-526, 2022.
  4. Kindness et al., Adv. Optical Mater. 1800570, 2018.
  5. Kindness et al., ACS Photon. 6(6), 1547-1555, 2019.
  6. Kindness et al., Adv. Optical Mater. 2000581, 2020.

Primary supervisor: Professor Andrea Cavallaro

Second supervisor: Dr Changjae Oh

Generative AI models can support human-robot interactions for example by controlling end-effectors to complete a task through visual perception. However, precisely perceiving articulated objects and objects unseen during training is still challenging in scenarios where humans and robots share the same workspace. This PhD project will investigate foundation models and systems for embodied AI agents, such as virtual and physical robots, to perceive real-world hand-held objects (their shape, material and content) and learn how to safely grasp them. Specifically, the goal is to investigate generative models to perceive human hands (and bodies) and hand-held objects in 3D, improve the 3D perception with the guidance of large multi-modal models, and integrate perception and robotic manipulation for human-to-robot handovers.

Primary supervisor: Professor Andrea Cavallaro

Second supervisor: Dr Changjae Oh

Embodied AI agents (i.e. robots) should effectively perceive their environment to execute actions that achieve their goal(s). However, embodied AI agents are sensitive to out-of-distribution events, such as scene and illumination changes, noise in sensory inputs, or physical force applied to the agents, which frequently happen in the real world and affect their perception-policy loops. This PhD project will investigate natural and adversarial examples in embodied AI agents, where perturbations may mislead vision-based perception and the learned policy for robot actions. The project aims to understand various types of adversarial examples and to improve the robustness of the perception and policies of the embodied agent. The final objective is to devise defence mechanisms to guarantee the safe operation of embodied agents. The project will use vision-based robot manipulators as the specific embodied agents to investigate these problems and address them in a real-world setting.

Primary supervisor: Dr Arkaitz Zubiaga

Moderation of online content is crucial to protect users from being exposed to content they may find offensive. While there has been significant progress in the development of automated methods for offensive language detection using natural language processing, recent research shows the challenges posed by the different preferences of users towards content moderation. What some people find offensive is deemed appropriate by others, and therefore some of this content should not be blocked or removed for everyone. The objective of this project is to investigate novel natural language processing and social data science methods, including the use of generative AI and large language models, to enable personalisation of online content moderation. The use of large language models poses additional challenges as it is known that these large models have a tendency to make biased predictions favouring some communities and exacerbating the vulnerability of other communities. This project will investigate the interplay between personalisation and debiasing of models towards more effective content moderation.

Primary supervisor: Professor Arumugam Nallanathan

Second supervisor: Dr Fatma Benkhelifa

Sensor fusion via wireless channels is important for 6G integrated sensing and communication (ISaC) systems, as radio-frequency (RF) sensing has relatively low range resolution. However, different sensors belong to different companies, so sensor fusion must address both multi-modality and privacy protection. Existing works use auto-encoders and federated averaging algorithms to tackle this issue, but designing the auto-encoder structure requires manual access to the local dataset on each agent, and the federated averaging algorithm consumes substantial communication resources. This project will first design a semi-supervised auto-encoder based on variational autoencoders to avoid leaking users’ privacy. It will then integrate the proposed auto-encoder algorithm into a new federated learning structure, where the communication load over wireless channels is minimized through pruning and partial model transmission.
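A minimal sketch of the pruning/partial-transmission idea in the last sentence: a client transmits only the largest-magnitude fraction of its model update, and the server rebuilds a sparse dense update from that payload. The helper names and the 10% keep ratio are illustrative assumptions:

```python
# Magnitude-based sparsification of a model update for cheap transmission.
import numpy as np

def sparsify_update(update, keep_ratio=0.1):
    """Keep the top keep_ratio fraction of entries by magnitude.
    Returns (indices, values) — the compact payload actually transmitted."""
    flat = update.ravel()
    k = max(1, int(len(flat) * keep_ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx, vals, shape):
    """Server side: rebuild a dense (mostly-zero) update from the payload."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)
```

With a 10% keep ratio, the payload is an order of magnitude smaller than the full update, at the cost of dropping the small-magnitude entries.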

Primary supervisor: Professor Arumugam Nallanathan

Second supervisor: Dr Fatma Benkhelifa

Multi-objective optimization (MOO) plays a pivotal role in airport flight planning, and its relevance extends to UAV swarm trajectory planning in non-terrestrial networks (NTNs). In addition to conventional objectives, e.g., communication quality, energy consumption and collision avoidance, a UAV swarm introduces a new optimisation domain, i.e., swarm structures. During the planning phase, it is important to capture the full potential of the considered system. The Pareto front (PF) becomes a pivotal indicator, with each point along the front representing an optimal solution. At the execution stage, the desired solution can be chosen according to changing constraints. Traditional MOO solutions transform the problem into a weighted single-objective optimisation problem, which yields only one point on the PF. Some recent works use the deep deterministic policy gradient (DDPG) algorithm to optimize this MOO problem, but swarm structure optimisation and the Pareto front are not included. The first task of this project is to design UAV swarm structures, where multi-agent reinforcement learning (MARL) can be developed to dynamically control the swarm by interacting with the environment. Based on the proposed swarm solution, the second task is to obtain the PF via multi-objective soft actor-critic (SAC) algorithms.
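The contrast drawn above, between weighted-sum scalarisation (one PF point per weight vector) and recovering the full Pareto front, can be illustrated on toy objective values (all assumed to be minimised; the function names are illustrative):

```python
# Pareto front by non-dominated filtering vs. weighted-sum scalarisation.
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset (minimisation of all objectives)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

def weighted_sum_pick(points, w):
    """Traditional scalarisation: one weight vector yields one PF point."""
    pts = np.asarray(points, dtype=float)
    return pts[np.argmin(pts @ np.asarray(w))]
```

Sweeping the weight vector in `weighted_sum_pick` recovers at most the convex part of the front, which is why the project targets methods that approximate the whole PF directly.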

Primary supervisor: Dr Athen Ma

Second supervisor: Dr Pavel Kratina (SBBS)

Global Environmental Change (GEC) is having profound effects on our natural environment, with biodiversity declining as a result of extreme climatic events such as heatwaves. Every species is part of a network of interactions that is integral to how ecosystems function, and the loss of even one species can cause effects to ripple through the entire food web. Existing ecological assessment of GEC primarily focuses on the population-level response of a few key species, but little is known about network organisation at finer scales. For example, sub-structures have been observed in many different types of artificial network, and their significance in governing dynamics is widely acknowledged in network science. In ecology, sub-network structures have revealed important fine-scale changes in food webs exposed to drought, but this line of research remains largely unexplored.

This project aims to develop novel network science techniques to gauge the effects of GEC stressors on ecosystems by examining ~600 high-quality food webs obtained from marine, freshwater and terrestrial ecosystems. We will assess how the topological organisation and dynamics of species interactions have been altered by GEC at finer scales across different ecosystems, and uncover the key principles that govern their reassembly following an external perturbation.

Primary supervisor: Dr Athen Ma

Second supervisor: Dr Pavel Kratina (SBBS)

Impacts of climate change are escalating around the world, as extreme climatic events have led to the loss of many species. There is a pressing need for accurate information on the risks to our ecosystems so that we can manage, or even mitigate, further environmental degradation in the coming years. Ecological networks are essential for anticipating responses to climate change because a population decline in one species often alters the populations of its predators or prey, spreading through the network as species adjust their diet under a changing environment. Unfortunately, current understanding of these domino effects in ecological networks is very limited, which greatly restricts our ability to forecast ecosystem responses.

The project aims to explore machine learning techniques to unravel the structural and re-assembly principles of ecological networks so that we can accurately predict how species adapt and rewire under climate change. We will use advances in graph data science to learn topological characteristics of ecological networks and refer to a range of features to gauge structural similarity. Machine learning techniques will be used to predict potential new links in an ecosystem following an environmental stressor, which will help generate realistic link reassembly and forecast whole-system-level responses.
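A simple topological baseline for the link-prediction step described above scores each absent species pair by the overlap of their interaction partners (Jaccard similarity); the function name and dictionary encoding of the network are illustrative assumptions:

```python
# Jaccard link prediction on an undirected interaction network.
def jaccard_scores(adj):
    """adj: dict node -> set of neighbours. Score every absent pair."""
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue                     # edge already exists, skip
            union = adj[u] | adj[v]
            if union:
                scores[(u, v)] = len(adj[u] & adj[v]) / len(union)
    return scores
```

Pairs with many shared interaction partners score highest, which is the kind of structural signal a learned model would generalise beyond.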

Primary Supervisor: Dr Diego Perez

Secondary Supervisor: Professor Simon Lucas

Tabletop Games (TTG) are multi-player games played on a table or flat surface, such as board, card, dice or tile-based games. Examples are Settlers of Catan, Ticket to Ride, Terraforming Mars, and Pandemic. TTGs require deep tactical reasoning, economy control, resource management, conflict resolution and high levels of player interaction. As a testament to the level of challenge of TTGs, the cooperative game Hanabi has recently been identified as the new frontier for AI research [1].

This project consists of investigating AI agents in the context of collaborative TTGs, creating action-decision algorithms that are able to cooperate in complex games such as Pandemic and The Resistance. The objective is to investigate how competitive algorithms such as Monte Carlo Tree Search for multi-player games [2] can incorporate innovative use of Large Language Models (LLMs)[3] for tabletop games. Intelligent decision making and conversational abilities will feed each other, allowing players to bluff, be deceptive, form alliances and negotiate.
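At the core of Monte Carlo Tree Search is the UCB1 selection rule, which trades off exploitation of well-performing moves against exploration of rarely tried ones; a minimal sketch with illustrative data structures (not the Tabletop Games framework's API) is:

```python
# UCB1 child selection, the rule driving the selection phase of MCTS.
import math

def ucb1(child_value, child_visits, parent_visits, c=1.41):
    """Balance exploitation (mean value) against exploration (visit counts)."""
    if child_visits == 0:
        return float("inf")          # always try unvisited children first
    return child_value / child_visits + c * math.sqrt(
        math.log(parent_visits) / child_visits)

def select_child(children):
    """children: list of dicts with 'value' and 'visits'. Pick by UCB1."""
    parent_visits = sum(ch["visits"] for ch in children) or 1
    scores = [ucb1(ch["value"], ch["visits"], parent_visits) for ch in children]
    return scores.index(max(scores))
```

Multi-player variants such as MultiTree MCTS [2] keep separate statistics per player while reusing this same selection rule.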

This work will use the Tabletop Games framework [4], with the possibility of synergising with the activities of QMUL’s spin out Tabletop R&D [5], which could allow the candidate to apply their findings to real use-cases in the games industry.

====

[1] N. Bard, J. N. Foerster, S. Chandar, N. Burch, M. Lanctot, H. F. Song, E. Parisotto, V. Dumoulin, S. Moitra, E. Hughes et al. ‘The Hanabi challenge: A new frontier for AI research’. In: Artificial Intelligence 280 (2020)

[2] J. Goodman, D. Perez and S. Lucas. ‘MultiTree MCTS in Tabletop Games’. In: IEEE Conference on Games, 2022

[3] J. S. Park, J. C. O’Brien, C. J. Cai, M. R. Morris, P. Liang and M. S. Bernstein. ‘Generative agents: Interactive simulacra of human behavior’. In: arXiv preprint arXiv:2304.03442 (2023)

[4] https://github.com/GAIGResearch/TabletopGames

[5] https://www.tabletoprnd.co.uk/

Primary Supervisor: Dr Diego Perez

Secondary Supervisor: Dr Raluca Gaina

Tabletop Games (TTG) are multi-player games played on a table or flat surface, such as board, card, dice or tile-based games; examples include Settlers of Catan, Ticket to Ride and Terraforming Mars. Automatic play-testing is a process by which autonomous AI agents play these games thousands of times in order to generate data that can be analysed and provided to the game designer to help them produce robust games [1].

In human playtesting sessions, designers can observe how players interact with their game and pinpoint interesting parts of the game or flaws. Although the thousands of games generated by AI agents can still be visualised and replayed, most would likely not be interesting to watch, and the value returned to designers is diminished if the key gameplay information cannot be easily summarised. The objective of this research is to develop AI algorithms capable of summarising and extracting valuable information from thousands of gameplay logs. The new methods must be able to recognize valuable game states in which interesting situations arise, which include specific features of interest, or which could reveal flaws in the design or the availability of high-quality moves and strategies. These highlighted game states would efficiently summarise the interesting gameplay information for fast visualisation and designer feedback. The designer will also be able to observe decisions made by AI agents, together with explanations that make such moves understandable.

This work will use the Tabletop Games framework [2], with the possibility of synergising with the activities of QMUL’s spin out Tabletop R&D [3], which could allow the candidate to apply their findings to real use-cases in the games industry.

====

[1] Goodman, J., Wallat, A., Perez-Liebana, D., & Lucas, S. A case study in AI-assisted board game design. IEEE Conference on Games, 2023.

[2] https://github.com/GAIGResearch/TabletopGames

[3] https://www.tabletoprnd.co.uk/

Primary supervisor: Dr Haim Dubossarsky

Second supervisor: Professor Mark Sandler

The field of Machine Learning has witnessed remarkable growth in recent years, largely fueled by the emergence of Pretrained Foundation Models (PFMs), such as ChatGPT. However, despite their widespread adoption, these advancements lack robust analysis methods that could enhance our understanding of their inner workings, leaving us in the dark about why these models perform well and how to improve them. The prevailing approach is simply making larger models, with more parameters and training data, hoping that "bigger is better," but without a theoretical framework to guide research.

This project seeks to remedy this lacuna in Machine Learning by bringing together theories and methodologies from Signal Processing, Statistical Physics and Mathematical Topology. These research disciplines, which investigate PFMs and their parameters (i.e., their weights) from different angles and with various methods, have recently started to converge in their findings and conclusions. This convergence paves the way for the development of emerging tools that approach the same problems from distinct perspectives, and is particularly relevant for model convergence, transferability to other tasks/languages, and the optimal parameter-to-training-size ratio. These fresh insights will facilitate the development of a new learning theory and macro-analysis methods, and promise a novel approach for measuring and analyzing PFMs.
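As a flavour of the weight-level analysis described above, one can summarise a layer's singular-value spectrum, which statistical-physics-inspired work compares against random-matrix baselines. The particular statistics chosen here (stable rank, spectral entropy) are illustrative examples, not the project's method:

```python
# Summary statistics of a weight matrix's singular-value distribution.
import numpy as np

def singular_spectrum_stats(W):
    s = np.linalg.svd(W, compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-12]                 # drop numerically-zero mass before entropy
    return {
        "max_sv": float(s[0]),
        "stable_rank": float(np.sum(s**2) / s[0]**2),  # ||W||_F^2 / ||W||_2^2
        "spectral_entropy": float(-np.sum(p * np.log(p))),
    }
```

A freshly initialised (random) layer has a broad spectrum and high stable rank, while trained layers often concentrate mass in a few directions; tracking such statistics across layers is one macro-analysis handle on a model's training state.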

Primary supervisor: Dr Haim Dubossarsky

Second supervisor: Professor Omer Bobrowski (School of Maths)

The ability of Large Language Models (LLMs) (AKA Pretrained Foundation Models) to capture essential linguistic features, whether it is syntactic, semantic, or others, remains largely mysterious. This is in part because the tools currently used to investigate LLMs are too basic to analyze the intricate geometry of the embeddings produced by the models’ huge number of parameters. We therefore propose to re-think the way we analyze the embedding spaces and to develop tools that are better suited for the task.

Topological Data Analysis (TDA) is a collection of data-driven methods based on algebraic topology. Persistent Homology (PH) is the most popular TDA method, representing structural information related to connected components in the embedding space (holes, bubbles, etc.), and is commonly used to extract topological features underlying point-clouds.

We plan to analyze embedding spaces using PH and other TDA techniques, and to develop new methods and measures to better describe embedding spaces and extract the information they encode.
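As an illustration of the PH idea above, 0-dimensional persistence (connected components) can be computed directly with a union-find over pairwise distances; real analyses would use a dedicated TDA library, but this sketch shows the underlying computation on a toy point cloud:

```python
# 0-dimensional persistent homology: component death times as the distance
# threshold grows (equivalently, the edge weights of a minimum spanning tree).
import numpy as np

def h0_persistence(points):
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)                 # a component dies when two merge
    return deaths                            # n-1 deaths; one class lives forever
```

Large gaps in the death times indicate well-separated clusters in the embedding space, which is one of the structural signals PH makes quantitative.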

This endeavour is likely to shed light on the inner workings of LLMs, their training regimes, and the type of information, linguistic or otherwise, they encode in their topological structures, and will provide a novel topological approach that describes the “shape of words”.

Primary supervisor: Dr James Kelly

Second supervisor: Dr Hasan Sagor

Most antennas in service today are fixed, meaning that their performance cannot be altered after they have been manufactured. Reconfigurable antennas, on the other hand, can adjust their performance parameters (e.g. radiation pattern shape, frequency, polarization) dynamically whilst in service. This enables them to adapt to changing user requirements (e.g. the need for more or less bandwidth at any moment in time) or evolving network conditions (e.g. increased interference, signal attenuation, or signal blockage). A range of different techniques can be used to initiate reconfiguration; the most popular include semiconductor switches/tuning devices and mechanical actuators. Mechanical actuation provides excellent performance from the radio frequency (RF) perspective, including reduced signal distortion, reduced scan loss and improved power handling. However, mechanical approaches require periodic maintenance and replacement of actuator components. This PhD will explore a range of emerging techniques for mechanically reconfiguring antennas that do not require maintenance. Candidate techniques could include the following actuators: shape memory alloys; piezoelectric actuators (including inkjet-printed types such as P(VDF-TrFE-CTFE)); liquid crystal elastomers; and bistable composite laminates. The project may also explore the related topics of origami and kirigami.

Primary supervisor: Dr James Kelly

Second supervisor: Dr Yasir Alfadhl

If diabetics could monitor their blood sugar levels directly then, annually in the UK, we could prevent almost 9,600 leg, toe or foot amputations; 1,700 people from suffering serious sight loss; and 700 premature deaths. Half of all people with diabetes in the UK are aged over 65 and a quarter are over 75. It is critical to keep blood sugar levels within the recommended range to avoid long-term and irreversible damage to nerves and blood vessels, which leads to heart attacks and strokes as well as the need for lower limb amputations. Currently it is impossible to measure blood sugar levels in a continuous and direct way. In 2020, researchers reported a revolutionary new approach, based on radio waves, for measuring blood sugar levels directly. However, the prototype devices were incapable of stretching with the human body; consequently, during human trials, the volunteers were asked to remain perfectly still. This PhD will develop a stretchable prototype based on a metal, reminiscent of that seen in the Terminator films, that is liquid at room temperature.

Primary supervisor: Dr Khalid Rajab

Remote monitoring in healthcare environments is important for providing real-time information to clinicians, and to allow carers and relatives a means to monitor the health of their loved ones. An approach using millimetre-wave (mmWave) radar provides an exciting and effective means of monitoring conditions, without the use of intrusive wearables or invasive cameras.

This project will involve working with an experienced team, who have successfully commercialised mmWave technology, and who have ongoing projects working with healthcare partners. The purpose of the project will be to innovate new techniques by applying machine learning and artificial intelligence techniques to mmWave radar data, including micro-Doppler signatures and other parameters.

Our current toolbox includes AI/ML techniques for vital signs detection, falls monitoring and falls risk prediction, sleep monitoring, and many others. With your help, we seek to further push the boundaries of ambient and wearable-free healthcare sensing.
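A micro-Doppler signature of the kind mentioned above is typically obtained as a short-time Fourier transform of the radar return, whose phase is modulated by target motion. The window and hop sizes below are illustrative defaults, not the team's actual pipeline:

```python
# Magnitude STFT of a (complex) radar return: rows = Doppler bins, cols = frames.
import numpy as np

def micro_doppler_spectrogram(signal, fs, win=128, hop=32):
    """Returns (freqs, spec) so Doppler bins can be read in Hz."""
    freqs = np.fft.fftshift(np.fft.fftfreq(win, d=1.0 / fs))
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * np.hanning(win)   # taper each frame
        frames.append(np.abs(np.fft.fftshift(np.fft.fft(seg))))
    return freqs, np.array(frames).T
```

Periodic motions such as breathing, heartbeat or gait appear as characteristic time-varying ridges in this time-frequency image, which is what the ML models then classify.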

Primary supervisor: Professor Maria Liakata

Second supervisor: Dr Julia Ive

There is an abundance of healthcare data in unstructured form, particularly free text in EHRs and patient fora, but also multi-modal data such as images. Such data hold vital information that could greatly benefit clinical experts and social prescribers in patient monitoring for diagnosis, prognosis and treatment choice. Unfortunately, such information is not currently accessible to experts for a number of reasons: (i) unstructured data are complex and hard to analyse, and while the latest AI technology holds great promise, it is difficult to interpret model decisions; (ii) the development of analysis methodologies is generally handicapped by access restrictions, mainly due to privacy concerns.

The goal of the project is to develop methods for improving the interpretability of clinical models processing free text and multi-modal data.

The main milestones of the project include: (1) creating a text generation methodology to produce clinically valid synthetic data guided by external clinical knowledge; (2) devising interpretable AI models for clinical analysis that rely on valid clinical knowledge as expressed in synthetic data (e.g., relying on symptoms from ICD); (3) devising protocols to monitor the performance of those AI models.

Primary supervisor: Professor Maria Liakata

Second supervisor: Dr Julia Ive

Large language models pre-trained on large-scale datasets offer great opportunities for downstream applications that can leverage and adapt them for various use cases, often with minimal fine-tuning or even zero- or few-shot prompting, especially in low-resource scenarios. Health and mental health constitute low-resource domains in that available training data (e.g. in the form of therapy sessions) and ground truth information (e.g. diagnosis labels) are hard to obtain. Moreover, the data often contain private and sensitive information. Current work on LLMs for health focuses on online medical question-answering without considering personalisation or longer interactions between patients and experts.

This project will investigate personalisation, fine-tuning & prompting strategies as well as controlled generation focusing on temporal reasoning and the combination between LLM encoders and unsupervised models for abstractive summarisation of long documents.

Main project milestones include: (1) developing robust fine-tuning and prompting strategies to obtain automated annotations; (2) creating personalised LLMs via fine-tuning and pre-training strategies; (3) creating parallel inter-connected data streams to model the progression of an individual; (4) creating temporally aware summaries of long documents such as user timelines on social media.

Primary supervisor: Professor Massimo Poesio

Second supervisor: Dr Juntao Yu

Large language models (LLMs) such as ChatGPT, LLaMA and Bard have shown impressive language understanding and summarisation abilities. However, their performance on coreference resolution still lags behind state-of-the-art models trained specifically for coreference tasks (Gan et al., 2023). Current research on using LLMs for coreference resolution (e.g. Bohnet et al., 2023; Zhang et al., 2023) relies on fine-tuning medium-sized LLMs (e.g. T5) to generate a copy of the input sentence with its coreference annotations. Despite achieving new state-of-the-art results on major coreference benchmarks, such methods do not transfer to the latest LLMs, whose outputs are conversational utterances rather than task-specific annotations. To improve their coreference ability, we need to reformulate the coreference task conversationally. The objective of the proposed project is thus to push forward LLMs' coreferential ability using existing coreference benchmarks and synthetic data generated by LLMs. This research will take place in the context of an ongoing EPSRC project on enabling conversational agents' coreference and reference abilities.

Primary supervisor: Professor Matthew Purver

Second supervisor: Professor Pat Healey / Dr Juntao Yu

Large Language Models (LLMs) have become the foundation for much of AI, in natural language processing (NLP) and beyond. By being trained simply to predict word and/or action sequences observed in large collections of data, they show excellent ability to model the meaning and structure of language, both in general and in specific contexts. Through fine-tuning via reinforcement learning with human feedback, they can also form the basis of capable interactive systems. However, they still lack some key features of human-like behaviour, and one of those is the ability to co-ordinate meaning on the fly. Humans often fail to completely understand one another, and much of our effort in natural interaction goes into maintaining our understanding: clarifying what our interlocutor intended, explaining what we meant, and making temporary agreements on how to talk about things. This project will look at ways to use LLMs within interactive systems that can exhibit these behaviours: not only generating suitable clarification questions in situations that require them, but using the resulting user responses to update their understanding of how to interact thereafter.

Primary supervisor: Dr Mona Jaber

Second supervisor: Professor Greg Slabaugh

Digital twins are virtual replicas of a physical asset, which in this project is urban mobility in a defined geographical area. Such a virtual replica enables non-disruptive and accelerated optimisation towards goals that could include reducing car emissions, increasing the uptake of active travel, improving road safety, and others. Digital twins are powered by streaming data from connected sensors (IoT devices). These data are interpreted by artificial intelligence models to replicate the target aspect of the physical asset, i.e. the urban mobility of the given area. This project investigates continual learning models that update their representation to reflect dynamically changing data, which might be caused by a change in the physical asset (road network, related policies, modes of transport, etc.).

Primary supervisor: Dr Mona Jaber

Second supervisor: Dr Jun Chen

With the advent of autonomous vehicles and their co-existence with other modes of transport, including human-driven and driver-assisted vehicles sharing the road space with adopters of active travel (e.g. pedestrians, cyclists, micro-mobility users), the transportation sector needs to be upgraded to ensure road safety for all road users. This project leverages the digital twin paradigm to design anticipatory road safety solutions based on a physical testbed. This will entail mining Internet of Things data collected from the testbed, in addition to model-based mobility and geospatial information, to create a virtual replica of the transportation scene. The aim is to identify pertinent upgrades to the road network and related policies that will enhance the public safety of commuters.

Primary supervisor: Dr Richard Clegg

This PhD is about constructing synthetic versions of complex networks. It is a good project for any student with an interest in data science, analysis of systems, or the mathematics of complex systems. You will learn how to analyse data and create new models of those systems.

A core problem in the research of complex networks is that many data sets are too sensitive to be studied properly. Consider some important network data sets: networks showing who buys what items online (for studying how to recommend new items); how people transfer money between bank accounts (for studying fraud, perhaps); or how people meet in real life (for example, for studying disease transmission). These would be invaluable data sets, but they are extremely private. Very few people would want this information about them made public, and the companies or organisations involved would consider it proprietary information.

This type of data is very hard to make anonymous (Netflix, the video streaming company, tried to do so and failed). It is important to study the nature of privacy in networks: how we can guarantee privacy, and what properties of networks expose risk. The ultimate aim is to create a system that can produce a synthetic version of a real network while guaranteeing that the network data produced is private. The project can be computational, mathematical, or a mix of both, depending on the skills of the student. Please contact Richard Clegg (r.clegg@qmul.ac.uk) for more details.
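To make the idea of a synthetic network concrete, one classic (and deliberately simple) baseline is the configuration model, which preserves only the degree sequence of the original network and randomises everything else. The sketch below is illustrative only; it offers no formal privacy guarantee, which is precisely the gap the project would address.

```python
import random

def configuration_model(degrees, seed=0):
    """Stub-matching: give each node as many 'stubs' as its degree,
    shuffle all stubs, then pair them off to form edges.  The result
    preserves the degree sequence but randomises all other structure
    (self-loops and multi-edges are possible, as in the standard model)."""
    rng = random.Random(seed)
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

# Toy degree sequence: nodes 0 and 1 have degree 2, nodes 2 and 3 degree 1.
edges = configuration_model([2, 2, 1, 1])
print(len(edges))  # 3 edges, since the degrees sum to 6
```

Because only the degree sequence survives, the synthetic network leaks far less about individuals, but it also loses properties (clustering, community structure) a faithful replica would need, illustrating the trade-off at the heart of the project.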

Primary supervisor: Professor Simon Dixon

Large-scale automatic analysis of features from music databases allows us to study relationships between pieces across time and location, and to trace how musical ideas and styles spread, grow and develop. At one level, this provides evidence of influence between musicians or composers, which can be seen in imitation of patterns and styles. At another level, it could facilitate the linking of external events in the surrounding culture to developments in the music of that culture. Recent work has led to the annotation of a growing corpus of jazz recordings for performing research on this topic. Applicants are invited to develop the ideas from this line of work and propose a PhD project that aligns with their interests and knowledge. The project could involve enabling technologies such as transcription, instrument identification, performer style modelling, or modelling networks of influence.

Primary supervisor: Professor Simon Dixon

Deep learning has significantly boosted the accuracy of analytic and generative music models, providing a powerful framework for extracting knowledge from data. However, the paradigm shift from feature-engineering and logic to latent spaces and trainable operations comes at the cost of interpretability and expressive power. We invite you to submit a project proposal for a PhD exploring the combination of deep learning and the rich body of prior knowledge that we can derive from music theory. We imagine that prior knowledge can be used to complement or constrain the kind of patterns that are learnable, leading to more interpretable models and lower data requirements. Depending on your interests, you might choose to focus on analytic or generative models in the audio or symbolic music domain.

Primary supervisor: Professor Xiaodong Chen

Second supervisor: Dr Jin Zhang

With renewed interest worldwide in space-based solar power (SBSP) to meet the demands of renewable energy and net-zero emissions, considerable effort has been put into research on gigascale transmitting arrays in GEO/LEO orbits. To compensate for the severe deformation of the array panel due to the impacts of gravitation, thermal effects and space weather, and to achieve a large beam-steering angle, a new concept in antenna technology, the vector-phased array (VPA), has been proposed for the SBSP transmitting antenna array. The basic idea of a VPA is to dynamically adjust the direction of maximum gain of each antenna element (the beam field-vector direction) to align with the steered direction of the synthesised beam. The VPA depends heavily on suitable beam-steering or pattern-reconfigurable antennas as the array element. This project aims to design a beam-steerable antenna on an artificial magnetic conductor (AMC) surface by controlling the phase and amplitude of the feeding signal by means of a digital beamforming circuit. The AMC surface suppresses the surface current in order to minimise the side lobes when steering the beam. A small-scale VPA will be developed and tested based on this antenna element.

Primary supervisor: Professor Xiaodong Chen

Second supervisor: Dr Jin Zhang

To meet the demand for clean and sustainable power sources, nuclear fusion power is particularly attractive due to the abundance of fuel, high power-generation capability, and the absence of pollution and greenhouse gas emissions. For nuclear fusion to happen in a controlled way, the plasma needs to be confined and heated to millions of degrees using high-power millimetre-wave sources. The only sources currently available are gyrotrons, which suffer from high cost (around £1m each), low efficiency (typically 30%) and a low production rate (roughly 10 units a year from CPI, the only reliable supplier worldwide). The project aim is to develop a highly efficient, low-cost millimetre-wave magnetron (around £10k each, with a typical efficiency of 80%) suitable for mass production, providing an alternative source and making fusion power production more commercially viable. The candidate will work in close collaboration with industrial partners to synchronise efforts to achieve the project objectives.

Primary supervisor: Professor Yang Hao

Second supervisor: Dr Henry Giddens

In recent years, the integration of wireless communication and sensing technologies with healthcare has transformed medical diagnosis and monitoring. Skin antennas and sensors are emerging as ground-breaking tools that have the potential to revolutionize medical applications. These devices can provide real-time, non-invasive monitoring and data transmission, enabling better patient care, early disease detection, and improved healthcare efficiency. In this study, we aim to develop and implement innovative solutions that will enhance patient care, streamline medical processes, and ultimately improve overall healthcare outcomes. Objectives include:

  1. Conducting research to develop state-of-the-art skin antennas and sensors that are capable of various biomedical measurements, including vital signs monitoring, glucose level tracking, drug delivery, and more.
  2. Collaborating with healthcare institutions to integrate skin antennas and sensors into existing medical practices, ensuring that they adhere to regulatory standards and clinical requirements.
  3. Developing cost-effective solutions that can be adopted by healthcare providers without significantly increasing the cost of patient care.

Leveraging skin antennas and sensors in medical applications is a transformative opportunity for healthcare. This proposal outlines a strategic plan to harness this technology for the benefit of patients, healthcare providers, and the entire medical industry.

Primary supervisor: Professor Yang Hao

Second supervisor: Dr Henry Giddens

Antennas are a fundamental component of modern communication systems and emerging technologies. As the demand for wireless communication and IoT devices continues to grow, the materials and resources required for antenna construction pose environmental challenges. This proposal outlines a study aimed at developing and optimizing antennas constructed from recycled materials. The research aims to address environmental concerns, reduce waste, and promote sustainable practices in antenna design and manufacturing. Research objectives include:

  1. Identifying and evaluating suitable recycled materials for antenna construction based on their electromagnetic properties, durability, and environmental impact.
  2. Developing antenna designs that maximize performance while utilizing recycled materials and minimizing resource consumption.
  3. Conducting a life cycle analysis (LCA) to evaluate the environmental benefits and drawbacks of using recycled materials in antennas compared to traditional materials.
  4. Constructing prototype antennas using the selected recycled materials and assessing their electromagnetic performance, durability, and longevity under various conditions.

The proposed study addresses a pressing need in the field of antenna design and manufacturing – the incorporation of recycled materials to reduce environmental impact. By supporting this research, we can drive the development of sustainable practices in the antenna industry, leading to reduced waste, lower resource consumption, and a positive impact on the environment.

Primary supervisor: Dr Yuanwei Liu

Second supervisor: Dr Yixuan Zou

Extremely large-scale antenna arrays, tremendously high frequencies, and new types of antennas are three clear trends in multi-antenna technology for supporting the 6G networks. To properly account for the new characteristics introduced by these three trends in communication system design, the near-field spherical-wave propagation model needs to be used, which differs from the classical far-field planar-wave one. This fundamental change opens up new opportunities as well as challenges for wireless communication designs. This project aims to carry out the fundamental performance analysis and propose efficient near-field communication solutions to exploit the advantages, such as enhanced degrees-of-freedom (DoFs) and beam focusing capabilities, in the areas of beamforming design, beam training, multiple access design, and resource allocation. The employed mathematical methods include convex optimization, machine learning, and stochastic geometry.
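To give a sense of why the near-field model matters at these scales, the snippet below computes the classical far-field (Rayleigh/Fraunhofer) boundary 2D²/λ, inside which the spherical-wave model applies. The aperture and carrier frequency are illustrative assumptions, not project parameters.

```python
# Rayleigh distance 2*D^2 / lambda: beyond it the planar-wave (far-field)
# model is adequate; within it the spherical-wave (near-field) model applies.
# Illustrative values only: a 0.5 m array aperture at a 100 GHz carrier.
c = 3.0e8            # speed of light (m/s)
f = 100e9            # carrier frequency (Hz)
D = 0.5              # array aperture (m)
wavelength = c / f
rayleigh_distance = 2 * D**2 / wavelength
print(round(rayleigh_distance, 1))  # 166.7 m: much of a cell is near-field
```

With larger apertures and higher frequencies the boundary grows quadratically in D, which is why extremely large-scale arrays push typical users into the near-field region.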

Primary supervisor: Dr Yuanwei Liu

Second supervisor: Dr Yixuan Zou

Reconfigurable intelligent surfaces (RISs) are promising technologies for beneficially modifying the propagation of wireless signals and improving the wireless performance in a cost- and energy-efficient way. This project aims to investigate a novel concept of simultaneously transmitting and reflecting surfaces (STARS), which can not only reflect the incident signals but also transmit the incident signals, i.e., facilitating full-space smart radio environment. This project will focus on the employment of STARS in wireless communications. Theoretical analysis and experiments will be carried out to investigate the performance gain of STARS over conventional reflecting-only RISs, in terms of capacity, coverage, and power consumption. The employed mathematical methods include stochastic geometry, convex optimization, and machine learning.
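As a toy illustration of a STARS element under the energy-splitting idealisation commonly assumed in this line of work (the coefficient values below are invented for illustration):

```python
import cmath

# Energy-splitting STARS element: incident power is split between a
# transmitted and a reflected signal (beta_t + beta_r = 1), each with an
# independently configurable phase shift.  Values are illustrative only.
beta_t, beta_r = 0.7, 0.3
theta_t, theta_r = cmath.pi / 4, -cmath.pi / 3
t_coeff = (beta_t ** 0.5) * cmath.exp(1j * theta_t)   # transmission coefficient
r_coeff = (beta_r ** 0.5) * cmath.exp(1j * theta_r)   # reflection coefficient

# A passive, lossless element conserves the incident power:
print(abs(t_coeff) ** 2 + abs(r_coeff) ** 2)  # 1.0 (up to floating point)
```

Jointly optimising the amplitude splits and the two phase shifts across all elements is what enables the full-space smart radio environment the project targets, in contrast to reflecting-only RISs, which serve only half the space.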

First supervisor: Dr Evangelia Kyrimi 

Second supervisor: Dr Anthony Constantinou 

As more data becomes available for modern AI algorithms to leverage, more complex systems will be developed. As the complexity of a system increases, its transparency decreases, meaning an accurate prediction alone is not sufficient to make an AI-based solution truly useful. For healthcare systems this raises new issues of accountability and safety. The need for transparent AI has led to the rise of explainable AI (XAI). Medical AI is classed as a high-risk AI application in the proposed European legislation; therefore, having an explanation for why and how the system reached its conclusions is important. Current XAI work focuses on explaining “mainstream” ML approaches, where reasoning about interventions and retrospection is not possible. This project focuses on explaining causal graphical probabilistic models. These models are algorithmically transparent; however, transparency is not sufficient to guarantee that a model is explainable. The successful candidate will be responsible for developing explanation algorithms for health AI that can produce meaningful explanations for various types of reasoning, such as observational, interventional and counterfactual reasoning. The produced explanations should be (1) causal and (2) incremental, to mimic the dynamic nature of clinical decision making.
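To illustrate why interventional queries differ from observational ones in a causal model, here is a minimal toy example (all numbers invented) with a confounder Z of treatment X and outcome Y, comparing P(Y=1 | X=1) with P(Y=1 | do(X=1)) via the backdoor adjustment:

```python
# Toy confounded model: Z -> X, Z -> Y, X -> Y (illustrative numbers only).
pZ = {1: 0.5, 0: 0.5}                          # P(Z=z)
pX1_given_Z = {1: 0.8, 0: 0.2}                 # P(X=1 | Z=z)
pY1_given_XZ = {(1, 1): 0.9, (1, 0): 0.5,      # P(Y=1 | X=x, Z=z)
                (0, 1): 0.4, (0, 0): 0.1}

# Observational: P(Y=1 | X=1) weights Z by its posterior given X=1.
pX1 = sum(pX1_given_Z[z] * pZ[z] for z in (0, 1))
obs = sum(pY1_given_XZ[(1, z)] * pX1_given_Z[z] * pZ[z] / pX1 for z in (0, 1))

# Interventional: P(Y=1 | do(X=1)) uses the backdoor adjustment, weighting
# Z by its prior, because the intervention cuts the Z -> X edge.
do = sum(pY1_given_XZ[(1, z)] * pZ[z] for z in (0, 1))

print(round(obs, 2), round(do, 2))  # 0.82 0.7
```

The two answers differ because observing X=1 also tells us something about Z, whereas setting X=1 does not; an explanation algorithm for clinicians must make exactly this kind of distinction legible.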

Primary Supervisor: Professor Pasquale Malacaria

The objective of this project is to develop a hybrid analytical tool that combines the formal verification strength of TLA+ with the contextual processing capabilities of Large Language Models (LLMs) to enhance security analysis.

With the increasing complexity of digital artefacts, traditional verification methods like TLA+ model checking are essential but can be complemented by the nuanced contextual understanding provided by LLMs. The proposed research aims to automate the specification of protocols, algorithms and code, and to provide interpretation of verification results, offering a more dynamic and insightful security analysis.

Methodology:

  1. Framework Development: Create an interface for TLA+ and LLM interaction.
  2. Specification Automation: Use LLMs to translate protocol descriptions into TLA+ specifications.
  3. Result Interpretation: Develop algorithms for LLMs to provide explanations of TLA+ results.
  4. Validation: Test the tool against standard protocols and compare with existing methods.
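A minimal sketch of how steps 2 and 3 might be wired together, assuming a hypothetical `ask_llm` wrapper around whichever LLM provider is chosen and the standard `tla2tools.jar` distribution of the TLC model checker on the local path (all names are illustrative, not a committed design):

```python
import subprocess

def spec_prompt(protocol_description: str) -> str:
    """Build the prompt used to translate an informal protocol
    description into a TLA+ module (step 2 of the methodology)."""
    return ("Translate the following protocol description into a TLA+ "
            "module, expressing its safety properties as invariants:\n\n"
            + protocol_description)

def ask_llm(prompt: str) -> str:
    # Placeholder: wrap the chosen LLM provider's API here.
    raise NotImplementedError

def check_spec(spec_path: str, config_path: str) -> str:
    """Run TLC on a generated specification and return its raw output,
    which an LLM can then be asked to explain (step 3)."""
    result = subprocess.run(
        ["java", "-cp", "tla2tools.jar", "tlc2.TLC",
         spec_path, "-config", config_path],
        capture_output=True, text=True)
    return result.stdout
```

The interesting research questions sit in the gaps this sketch leaves open: validating that the generated TLA+ actually captures the informal description, and turning raw TLC counterexample traces into explanations a security analyst can act on.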

Outcomes:

  1. A functional tool that brings together TLA+ model checking and LLMs.
  2. Enhanced efficiency and depth in security protocol analysis.
  3. A potential reduction in the time and expertise required for protocols/algorithms verification.

Supervisor: Professor Simon Lucas

Second supervisor: Dr Raluca Gaina

The fusion of statistical forward planning (SFP) algorithms, such as Monte Carlo Tree Search (MCTS) and Rolling Horizon Evolution, with Large Language Models (LLMs) presents a promising avenue for advancing decision-making capabilities. This research proposal aims to explore the synergy between these two domains, focusing on using LLMs to generate domain-specific value and policy functions and assist in constructing fast, approximate forward models. 

The core of this research will involve the development and integration of LLMs with existing SFP algorithms. Initially, LLMs will be used zero-shot or trained on domain-specific data to generate relevant value and policy functions, which are crucial for informed decision-making in various contexts. Subsequently, these functions will be integrated into MCTS or similar algorithms to guide their search and exploration strategies. Additionally, the research will focus on harnessing the LLMs to build fast, approximate forward models, enabling quicker and more efficient simulation of future states, a critical aspect of planning algorithms like MCTS.  
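As a concrete sketch of one integration point, the PUCT selection rule (popularised by AlphaGo-style agents) shows how a policy prior, which in this project might come from an LLM, biases MCTS towards promising actions. The choice of PUCT and all numbers here are illustrative assumptions, not the project's committed design:

```python
import math

def puct_score(child_value: float, child_visits: int,
               parent_visits: int, prior: float, c_puct: float = 1.5) -> float:
    """PUCT: a higher prior makes an action explored earlier; as visits
    accumulate, the empirical value estimate dominates the score."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return child_value + exploration

# An unvisited action with a strong (e.g. LLM-derived) prior outscores an
# equally unvisited action with a weak prior, so it is expanded first.
strong = puct_score(0.0, 0, 100, prior=0.6)
weak = puct_score(0.0, 0, 100, prior=0.1)
print(strong > weak)  # True
```

The same hook accommodates an LLM-derived value function (replacing `child_value` at leaf nodes) and, more ambitiously, an LLM-assisted approximate forward model for generating the simulated states the search rolls out over.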

The anticipated outcome of this research is a significant enhancement in the efficiency and accuracy of decision-making processes in AI systems. By integrating the contextual intelligence of LLMs with the strategic planning capabilities of algorithms like MCTS, we expect to achieve a more robust and adaptive decision-making framework. This framework could find applications in a variety of domains, including but not limited to, autonomous systems, strategic game playing, and complex problem-solving tasks. 

Supervisor: Professor Simon Lucas

In real-time strategy (RTS) games, controlling multiple units simultaneously presents a challenging combinatorial problem. This PhD project aims to advance the current state of the art in multi-agent systems employing statistical forward planning algorithms, with a focus on Monte Carlo Tree Search (MCTS) and Rolling Horizon Evolutionary Algorithms (RHEA). The core challenge addressed is the efficient handling of combinatorial action spaces that emerge when determining concurrent actions for multiple units. The project introduces a novel approach wherein plans are generated semi-independently for each unit. This approach depends on adapting the level of inter-dependence based on the specific scenario and the intricacies it presents.
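A back-of-the-envelope illustration (unit and action counts invented) of why concurrent unit control is combinatorial, and what semi-independent per-unit planning buys:

```python
# The joint action space is the Cartesian product of per-unit action sets,
# so it grows exponentially in the number of units.  Illustrative numbers.
actions_per_unit = ["move_n", "move_s", "move_e", "move_w", "attack", "wait"]
n_units = 8

joint_actions = len(actions_per_unit) ** n_units
print(joint_actions)  # 1679616 joint actions for just 8 units

# Planning semi-independently evaluates each unit's actions separately,
# a linear rather than exponential budget:
per_unit_candidates = n_units * len(actions_per_unit)
print(per_unit_candidates)  # 48
```

The catch, of course, is that fully independent plans ignore unit interactions, which is why the project's adaptive level of inter-dependence between per-unit plans is the key idea.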

A pivotal element of this research is the use of forward models that disclose dependency information. Two main avenues are explored: one where dependency data is readily provided by the models, and another where such dependencies are inferred from observational data. By leveraging these forward models, the project aims to balance the trade-off between independent and coordinated multi-agent planning, ensuring efficiency and adaptability.

The success of the proposed method will be evaluated by its application to various RTS games, serving as benchmarks. The primary metric will be the improvement in efficiency and performance in multi-agent unit control, as measured against a number of baseline and SOTA agents.  The outcomes of this research could significantly enhance multi-agent decision-making strategies in RTS games, offering insights into both the fields of artificial intelligence and gaming.

Back to top