
Principal and EPSRC-DTP PhD Studentships in Electronic Engineering and Computer Science

Level: PhD 

Country: Please see eligibility criteria below 

Value: Tuition fees and a London stipend of £19,668 per year 

No. of awards: 9 

Deadline: January 31st 2023 

About the Studentships 

The School of Electronic Engineering and Computer Science at Queen Mary University of London is inviting applications for up to 9 PhD studentships in specific areas of Electronic Engineering and Computer Science (please see the list of projects at the end of this page). The studentships will cover tuition fees and offer a London stipend of £19,668 per year. The scholarships are open to both home and international candidates (please see the eligibility criteria and tuition-fee details below, which depend on the applicant's fee status).

 
About the School of Electronic Engineering and Computer Science at Queen Mary 

The PhD studentships will be based in the School of Electronic Engineering and Computer Science (EECS) at Queen Mary University of London. As a multidisciplinary School, we are well known for our pioneering research and pride ourselves on our world-class projects. We are 8th in the UK for computer science research (REF 2021) and 7th in the UK for engineering research (REF 2021). The School is a dynamic community of approximately 350 PhD students and 80 research assistants working within a number of research groups across several areas, including Antennas and Electromagnetics, Computing and Data Science, Communication Systems, Computer Vision, Cognitive Science, Digital Music, Games and AI, Multimedia and Vision, Networks, Risk and Information Management, Robotics and Theory.

For further information about research in the School of Electronic Engineering and Computer Science, please visit: http://eecs.qmul.ac.uk/research/.

 

Who can apply 

Queen Mary is on the lookout for the best and brightest students. A typical successful candidate:  

  • Holds, or expects to obtain, an MSc in Electronic Engineering, Computer Science, or a closely related discipline
  • A distinction or first-class degree is highly desirable

 

Eligibility criteria and details of the different schemes 

EPSRC-DTP:  

  • 3.5 years of stipend and fees
  • Details: Open to home and international students. Please note that the number of students with international fee status who can be recruited is capped according to the EPSRC terms and conditions, so competition for international places is particularly strong.
  • Expected start date: September 2023 

Principal Scholarships: 

  • 3 years of stipend and fees
  • Details: Open to home students. Please note that the scheme covers the stipend and home tuition fees; candidates with international fee status will need to cover the difference from other sources.
  • Expected start date: September 2023 

 

How to apply 

Queen Mary is interested in developing the next generation of outstanding researchers and has decided to invest in specific research areas. For further information about potential PhD projects and supervisors, please see the list of projects at the end of this page.

 

Applicants should work with their prospective supervisor and submit their application following the instructions at: http://eecs.qmul.ac.uk/phd/how-to-apply/  

 
The application should include the following: 

  • CV (max 2 pages)  
  • Cover letter (max 4,500 characters) stating clearly on the first page whether you are eligible for a scholarship as a UK resident (see the link below)
  • Research proposal (max 500 words) 
  • 2 References  
  • Certificate of English Language (for students whose first language is not English)  
  • Other Certificates  

Please note that, in order to qualify as a home student for the purposes of these scholarships, a student must have no restrictions on how long they can stay in the UK and must have been ordinarily resident in the UK for at least 3 years prior to the start of the studentship. For more information please see: https://www.ukri.org/what-we-offer/developing-people-and-skills/esrc/funding-for-postgraduate-training-and-development/eligibility-for-studentship-funding/

Application Deadline 

The deadline for applications is the 31st of January 2023. 

For general enquiries, contact Mrs Melissa Yeo (m.yeo@qmul.ac.uk) for administrative enquiries or Professor Ioannis Patras (i.patras@qmul.ac.uk) for academic enquiries, using the subject line “EECS 2023 PhD scholarships enquiry”.

List of available projects and corresponding academics: 

Scheme: Principal's
Affective computing has in the past been largely restricted by the availability of data resources. In recent years, large affective datasets have been generated and deep neural architectures have been developed for affect recognition using these datasets. Nevertheless, limited research has been devoted to the detection of negative affect. The in-the-wild databases that have been generated contain few examples of negative affective states, and deep learning models perform worst on them. Yet detecting negative affect in the wild is crucial in many contexts. This project aims to develop a system that detects negative affect and behaviours accurately, efficiently and fairly. The system will include novel deep learning methods that improve on the state of the art, consider fairness and explainability in decision-making, and incorporate self-supervised learning to exploit the abundance of unannotated data. The system will include multiple modalities, such as facial expressions and eye gaze, audio and speech, text, context, body pose and posture, and hand gestures, since these can enhance the model's affective capabilities. The system's performance will be evaluated in various real-life use cases.

Supervisor: Prof Joshua Reiss

Scheme: Principal's

Physical models of sound generating phenomena are widely used in digital musical instruments, noise and vibration modelling, and creation of sound effects. But they often have a large number of free parameters that may not be specified just from an understanding of the phenomenon.

Machine learning from sample libraries could be the key to improving the physical models and speeding up the design process. Optimisation approaches can find parameter values such that the output of the model matches recorded samples, and the accuracy of such an approach will provide insight into the limitations of a model. It also provides the opportunity to explore the overall performance of different physical modelling approaches, and find out whether a model can be generalised to cover a large number of sounds.

This work will explore such approaches. Existing physical models will be used, with parameter optimisation based on gradient descent. Measurement of errors in this feature matching will allow us to assess the overall quality of the sound synthesis models. Performance will be compared against neural audio synthesis approaches, which often provide high-quality synthesis but lack a physical basis. In the longer term, analysis of performance across a large number of sound synthesis models will allow us to measure the extent to which entire sample libraries could be replaced by a small number of physical models with parameters set to match the samples in the library.
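
To make the idea concrete, here is a minimal, purely illustrative sketch (in PyTorch) of gradient-descent parameter matching: a toy modal model, a sum of damped sinusoids, stands in for a real physical model, and a log-spectrogram loss stands in for the feature-matching error. None of the names or values below come from the project itself.

import torch

# Hypothetical toy setup: fit the free parameters of a simple "physical" model
# (a sum of exponentially decaying sinusoids, i.e. a modal model) so that its
# output matches a target recording, using gradient descent on a spectral loss.

sr = 16000
t = torch.arange(sr) / sr                              # one second of audio

def modal_synth(freqs, amps, decays):
    # Render a sum of damped sinusoids; all arguments are learnable tensors.
    partials = (amps[:, None] * torch.exp(-decays[:, None] * t)
                * torch.sin(2 * torch.pi * freqs[:, None] * t))
    return partials.sum(dim=0)

def spectral_loss(x, y, n_fft=1024):
    # Feature-matching error: distance between log-magnitude spectrograms.
    X = torch.stft(x, n_fft, return_complex=True).abs()
    Y = torch.stft(y, n_fft, return_complex=True).abs()
    return torch.mean((torch.log1p(X) - torch.log1p(Y)) ** 2)

# 'target' stands in for a recorded sample from a library.
target = modal_synth(torch.tensor([220.0, 440.0, 660.0]),
                     torch.tensor([1.0, 0.5, 0.25]),
                     torch.tensor([3.0, 5.0, 7.0])).detach()

# Start from a rough initial guess and optimise the model's free parameters.
freqs = torch.tensor([200.0, 450.0, 700.0], requires_grad=True)
amps = torch.tensor([0.8, 0.8, 0.8], requires_grad=True)
decays = torch.tensor([4.0, 4.0, 4.0], requires_grad=True)
optimiser = torch.optim.Adam([freqs, amps, decays], lr=0.01)

for step in range(2000):
    optimiser.zero_grad()
    loss = spectral_loss(modal_synth(freqs, amps, decays), target)
    loss.backward()
    optimiser.step()

# The residual loss indicates how well this model class can reproduce the sample.
print(float(loss))

In practice the physical model would be far richer and the loss would combine several perceptual features, but the structure of the optimisation loop stays the same.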

Delivering better sound models, almost indistinguishable from a high quality recording, may be the key to transforming sound design and removing the reliance on inflexible sample libraries. Beyond sound, physics-based computer simulation of acoustics in other domains could benefit greatly from automated methods that require less know-how while offering greater portability, flexibility, and extension.

Supervisor: Prof Mark Sandler

Scheme: Principal's

Artificial Neuroscience (AN) is a new concept concerned with the inner workings of Artificial Neural Networks (ANNs), similar to the way that Biological Neuroscience (BN, or just Neuroscience) is concerned with the inner workings of the natural brain. ANNs were originally inspired by the structure of mammalian brains and neurons; by analogy, the underlying principle of Artificial Neuroscience is to take the methodologies and practices of Neuroscience and map them onto Artificial Intelligence (AI).

In this PhD, that mapping will take the form of research into tools, constructed using Linear Algebra, which enable us to understand and explain how Deep Learning (DL) ANNs work, playing a similar role to EEG and MRI in Biological Neuroscience. It will explore these concepts in the context of Neural Audio, the application of DL to Music and Audio.

We focus on applying Linear Algebra, Topology and Signal Processing to the measurement, analysis and control of the dynamics of Neural Networks. Specifically, DL network layers, weights and loss functions are represented as matrices for which Singular Value Decompositions (SVDs) can be computed to represent the geometry of the network more simply. These can be used to track the evolution of the weights during training. This will help us to understand how AI works and contribute to the field known as Explainable AI (XAI).

The PhD will go further, using the SVD to simplify the layers and weights by approximating them as low-rank matrices. It is expected that this approach will accelerate training and inference. This will further enhance the network's explainability and simplify ways to control it.
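
As a purely illustrative sketch of the kind of analysis described above (the layer shape and the chosen rank are arbitrary assumptions), the snippet below computes the SVD of a single weight matrix, inspects its singular value spectrum, and forms a low-rank approximation of the layer:

import torch

# Hypothetical illustration: take one weight matrix from a network, inspect its
# singular value spectrum, and replace it by a rank-r approximation.

layer = torch.nn.Linear(512, 512)
W = layer.weight.detach()                      # 512 x 512 weight matrix

# SVD: W = U diag(S) Vh, with singular values S sorted in decreasing order.
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
print(S[:10])                                  # dominant "directions" of the layer

r = 64                                         # chosen rank (illustrative)
W_lowrank = U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]

# Relative Frobenius-norm error of the rank-r approximation.
print(torch.linalg.matrix_norm(W - W_lowrank) / torch.linalg.matrix_norm(W))

Factoring a layer in this way into two linear maps of shapes (512 x r) and (r x 512) cuts the parameter count from 512*512 to 2*512*r, which is where the expected acceleration of training and inference comes from; tracking S over training epochs gives the EEG-like view of the network's dynamics described above.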

The PhD will explore these principles primarily in the context of music Source Separation, in which recordings of simultaneous instruments are deconstructed into their constituent parts. This domain is chosen because it is well studied: many papers have been published, and many datasets and DL models exist. Note: the goal is not to chase state-of-the-art results in Source Separation (though these are still highly likely) but to understand and control DL.

Applicants will need a background in Linear Algebra, and hence a degree in Mathematics or Physics will be as welcome as one in Electronic Engineering or Computer Science. An interest in audio and music is desirable.

Contact Principal Supervisor, Professor Mark Sandler (mark.sandler@qmul.ac.uk) for further details. Second supervisor is Dr Primoz Skraba in the School of Mathematical Sciences.

1. Jere, Malhar, Maghav Kumar, and Farinaz Koushanfar. ‘A Singular Value Perspective on Model Robustness’. arXiv, 7 December 2020. http://arxiv.org/abs/2012.03516.

2. Jia, Kui. ‘Improving Training of Deep Neural Networks via Singular Value Bounding’. arXiv, 18 March 2017. http://arxiv.org/abs/1611.06013.

3. Yang, Huanrui, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, and Yiran Chen. ‘Learning Low-Rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification’. arXiv, 19 April 2020. http://arxiv.org/abs/2004.09031.

4. Bermeitinger, Bernhard, Tomas Hrycej, and Siegfried Handschuh. ‘Singular Value Decomposition and Neural Networks’, 27 June 2019. https://doi.org/10.1007/978-3-030-30484-3_13.

5. Zhang, Jiong, Qi Lei, and Inderjit S. Dhillon. ‘Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization’. arXiv, 25 March 2018. http://arxiv.org/abs/1803.09327.

6. Praggastis, Brenda, Davis Brown, Carlos Ortiz Marrero, Emilie Purvine, Madelyn Shapiro, and Bei Wang. ‘The SVD of Convolutional Weights: A CNN Interpretability Framework’. arXiv, 14 August 2022. http://arxiv.org/abs/2208.06894.

Supervisor: Prof Greg Slabaugh

Scheme: EPSRC DTP

Video surveillance is an important tool to promote safety in public and private spaces. In London alone there are an estimated 942,562 cameras used by the Met Police, TfL, local authorities and private operators. There is far too much data to be analysed manually by humans, necessitating an AI approach.

This project seeks to develop automatic, accurate, and efficient solutions for violence detection from video captured by a stereovision camera. Stereovision refers to cameras that have multiple imaging sensors, mimicking the human visual system. Stereovision systems capture depth better than monocular cameras and therefore have improved potential for inference, particularly in cluttered scenes with occluding objects or people. There has been considerable work on violence detection in monocular video; however, stereovision is a largely unexplored topic. Our hypothesis is that the additional view will provide improved results compared to monocular vision. We will also explore the inclusion of audio data to produce a multi-modal violence detection algorithm.

The project is in collaboration with Remark AI UK, an SME developing innovative computer vision applications for the public safety, transport and construction sectors. The PhD student would benefit from industry involvement in supervision, and the company would provide a minimum of a three-month internship.

Supervisor: Prof Sean Gong

Scheme: EPSRC DTP

Deep learning in computer vision requires labelled datasets for model training. The resulting models perform poorly when deployed in a new target domain that is statistically different from the source domain. Unsupervised domain adaptation (UDA) methods have been studied to address this problem. In UDA, both (labelled) source and (unlabelled) target data must be provided during training. This means that the target domain must be known in advance and a sufficient quantity of target data must be available, which is not always possible in data-centric machine learning scenarios. This research will investigate fine-grained selective knowledge transfer between unlabelled target domains and labelled source domains.

For more information, please contact Prof Gong by email, and visit http://www.eecs.qmul.ac.uk/~sgg/.

The student will be based in the Computer Vision Group at the School of Electronic Engineering and Computer Science.

References:

1. J. Hu, H. Zhong, F. Yang, S. Gong, G. Wu, J. Yan. “Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modelling”. In Proc. European Conference on Computer Vision, Tel Aviv, Israel, October 2022.

2. P. Li, S. Gong, C. Wang, Y. Fu. “Ranking Distance Calibration for Cross-Domain Few-Shot Learning”. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, Louisiana, USA, June 2022.

3. G. Wu, S. Gong. “Collaborative Optimisation and Aggregation for Decentralised Domain Generalisation and Adaptation”. In Proc. IEEE International Conference on Computer Vision, Montreal, Canada, October 2021.

Supervisor: Dr Kamyar Mehran

Scheme: EPSRC DTP

The project will implement energy-efficient electric propulsion systems for marine vessels and heavy-duty road vehicles. These vessels are power-intensive, so the powertrain must respond quickly to sharp changes in power demand in an uncertain environment. This project will seek to develop a systematic, globally optimal cyber-physical design for the energy-efficient propulsion system and to elevate the current design approach for pre-existing electric powertrains and the associated energy management system.

The project is industrially assigned and requires a strong background in various specialised areas within electric propulsion design, electric drive systems, power electronics, control systems, machine learning and deep reinforcement learning. Experience in building electric powertrains and power electronics test rigs, together with practical skills in hardware-in-the-loop control systems, is mandatory.

The project will be based at the RPCS Laboratory within the School of Electronic Engineering and Computer Science.

Secondary Supervisors:

Dr Shady Gadoue (EECS)

Dr Mahdieh S. Sadabadi (EECS)

Supervisor: Ignacio Castro

Scheme: Principal's

This project departs from the individual-centric approach that has traditionally dominated much of the analysis of social networks and instead treats ideas as first-class citizens. The project will look at how ideas emerge and disappear through the social interaction of individuals in online discourse. It will study how and when an idea gains traction, how the idea evolves and morphs, and how ideas eventually lose support and fade.

To identify and track ideas and how individuals relate to them, the project will use a variety of tools spanning social network analysis, graph theory, Machine Learning and Natural Language Processing. Using available rich datasets, this project will explore how online discourse evolves and how the evolution of ideas can help us better understand well-known aspects of online discourse such as online harms, polarisation and the impact of de-platforming.

Supervisor: Prof Matthew Purver

Scheme: EPSRC DTP

One of the usual assumptions in natural language processing (NLP) is that words and sentences have one agreed interpretation; in practice, however, meanings tend to be negotiated between people and change over time. People in conversation spend much of their (often unconscious) effort coordinating meaning, clarifying it and correcting others, eventually arriving at some degree of mutual understanding. The success (or otherwise) of this process can be a strong indicator of the quality of an interaction, and can help in (amongst other things) improving diagnosis and treatment effectiveness in healthcare, particularly for treatments based on talking therapies. Previous work at EECS has developed models which show that language features can be used in these ways (Howes et al. 2014; Nasreen et al. 2019, 2020, 2021a,b; Tabak & Purver 2020; Rohanian et al. 2019, 2020, 2021), but these rely on shallow machine learning models. This studentship would extend them to deeper models, for example incorporating dialogue structure using graph neural networks to improve effectiveness and clinical interpretability, and/or adapting recent models of meaning change to dialogue settings.

Supervisor: Dr Haim Dubossarsky 

Scheme: Principal's

Word meaning can change over time: ‘cell’ used to denote a confined physical space, but over time acquired senses related to living organisms, spreadsheet forms, and mobile telephony. This dynamic nature of meaning poses a significant challenge for many NLP applications (e.g., handling new offensive words, translating historical texts, understanding slang and dialects). Consequently, numerous NLP methods have been developed for detecting meaning change (Dubossarsky et al. 2017; Dubossarsky et al. 2019; Tsakalidis et al. 2020). However, these methods are still limited and have remained largely within the NLP community, primarily because they do not report meaning change in a way that is meaningful to other text-based research disciplines.

This studentship aims to bring the next generation of language change modelling to NLP and to make it compatible with other research disciplines. This will be done by developing new and improved methods for the nuanced reporting of meaning change, tailored to the requirements of researchers in other disciplines. These new methods will combine advanced mathematical analysis of the models’ representations (Dubossarsky et al. 2020) with transfer learning techniques coupled with enhanced fine-tuning tasks, to enable improved and enriched models of change. Significant emphasis will be placed on developing methods for analysing meaning change over short timescales, using multiple time points and small corpora, by applying anomaly detection methods that identify moments of change (Tsakalidis et al. 2022).
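
Purely as an illustration of the kind of signal such methods build on (a generic baseline with synthetic data, not the methods the studentship will develop), one can compare a word's vector representation across time slices and flag anomalously large jumps:

import numpy as np

# Synthetic illustration: one vector per year stands in for a word's embedding
# in that year's corpus (assumed already aligned across time slices).
rng = np.random.default_rng(0)
years = list(range(2000, 2021))

vectors = []
v = rng.normal(size=300)
for year in years:
    v = v + 0.05 * rng.normal(size=300)        # slow, ordinary drift
    if year == 2015:
        v = v + 2.0 * rng.normal(size=300)     # injected abrupt meaning change
    vectors.append(v.copy())

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Change signal: distance between the word's representation in consecutive years.
deltas = np.array([cosine_distance(vectors[i], vectors[i + 1])
                   for i in range(len(vectors) - 1)])

# A very simple anomaly rule: flag steps more than two standard deviations above the mean.
threshold = deltas.mean() + 2 * deltas.std()
moments_of_change = [years[i + 1] for i, d in enumerate(deltas) if d > threshold]
print(moments_of_change)                       # expected: [2015]

The studentship's contribution lies precisely in going beyond such crude signals, towards nuanced, discipline-friendly reporting of what changed and how.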

 

This PhD will be supervised by Dr Haim Dubossarsky & Prof Maria Liakata. For more information, please contact Dr Dubossarsky at h.dubossarsky@qmul.ac.uk.

The student will be based in the Cognitive Science Group at the School of Electronic Engineering and Computer Science.

 

References

Dubossarsky, H., Grossman, E., & Weinshall, D. (2017). Outta Control: Laws of Semantic Change and Inherent Biases in Word Representation Models. EMNLP, 1147–1156.

Dubossarsky, H., Hengchen, S., Tahmasebi, N., & Schlechtweg, D. (2019). Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic Change. ACL, 457–470.

Dubossarsky, H., Vulić, I., Reichart, R., & Korhonen, A. (2020). The Secret is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures. EMNLP, 2377–2390.

Tsakalidis, A., & Liakata, M. (2020). Sequential Modelling of the Evolution of Word Representations for Semantic Change Detection. EMNLP, 8485–8497.

Tsakalidis, A., Nanni, F., Hills, A., Chim, J., Song, J., & Liakata, M. (2022). Identifying Moments of Change from Longitudinal User Text. ACL, 4647-4660.
