Ahmed Mohamed Abdelmoniem Sayed

Lecturer (Assistant Professor)
Head of the SAYED Systems Lab,
School of Electronic Engineering and Computer Science,
Queen Mary University of London, UK

Office E153a, Engineering Building
Queen Mary University of London,
Mile End, London, UK

QMUL Email: ahmed.sayed [@] qmul [DOT] ac [DOT] uk
KAUST Email: ahmed.sayed [@] kaust [DOT] edu [DOT] sa
AUN Email: ahmedcs [@] aun [DOT] edu [DOT] eg
UST Email: amas [@] cse [DOT] ust [DOT] hk

Links: Home   GoogleScholar   CV   LinkedIn   Github   SAYED Systems Lab   Blog   Disqus

About me

I am Ahmed M. A. Sayed (a.k.a. Ahmed M. Abdelmoniem), Assistant Professor at Queen Mary University of London, where I lead the Scalable Adaptive Yet Efficient Distributed (SAYED) Systems Lab. I hold a PhD in Computer Science and Engineering from the Hong Kong University of Science and Technology (HKUST), where I was advised by Prof. Brahim Bensaou. I previously held the positions of Senior Researcher at the Future Networks Lab, Huawei Research, Hong Kong, and Research Scientist at the SANDS Lab, KAUST, Saudi Arabia. My early research optimized networked systems to improve application performance in wireless and data center networks and proposed efficient, practical systems for distributed machine learning. My current research focuses on designing and prototyping the networked and distributed systems of the future. In particular, I develop methods and techniques to improve the performance of networked and distributed systems, with a present focus on scalable and efficient systems for distributed machine learning (especially privacy-preserving distributed machine learning, a.k.a. federated learning).

Prospective Students

I’m always looking for bright and enthusiastic people to join my research group. If you are looking to do a PhD with me, thank you for your interest, but please READ THIS FIRST and then reach out to me via email with your CV, transcripts, and a research statement/proposal (if possible).

Vacancies and Opportunities

  1. Check out the different types of scholarships available to you by searching the QMUL scholarship database.

  2. The Data-Centric Engineering Doctoral Training Center offers a project on Resource Efficiency in Federated Learning EcoSystems. To apply, check Applications and Eligibility.

News

Please check the publications webpage for the full list of publications along with their PDFs.

  1. [16-Aug-2022] Our paper “REFL: Resource Efficient Federated Learning” has been accepted at ACM EuroSys 2023.

  2. [7-Aug-2022] Our paper “EAFL: Energy-Aware Federated Learning Framework on Battery-Powered Clients” has been accepted at the ACM FedEdge workshop at MobiCom 2022.

  3. [9-June-2022] Our paper “Towards Efficient and Practical Federated Learning” has been accepted at the ACM CrossFL workshop at MLSys 2022.

  4. [5-Apr-2022] Our paper “Empirical analysis of federated learning in heterogeneous environments” has been published at the ACM EuroMLSys workshop at EuroSys 2022.

  5. [6-Dec-2021] Our paper “Rethinking gradient sparsification as total error minimization” has been published as a spotlight paper at NeurIPS 2021, one of the most prestigious AI/ML conferences.

  6. [7-Jul-2021] Our paper “GRACE: A compressed communication framework for distributed machine learning” has been published at IEEE ICDCS 2021.

  7. [7-Jun-2021] Our paper “A Two-tiered Caching Scheme for Information-Centric Networks” has been published at IEEE HPSR 2021.

  8. [7-Apr-2021] Our paper “Towards mitigating device heterogeneity in federated learning via adaptive model quantization” has been published at the ACM EuroMLSys workshop at EuroSys 2021.

  9. [22-Jan-2021] Our paper “T-RACKs: A Faster Recovery Mechanism for TCP in Data Center Networks” has been published in IEEE/ACM Transactions on Networking (ToN), 2021.

  10. [18-Jan-2021] Our paper “An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems” has been published at the Conference on Machine Learning and Systems (MLSys) 2021.

  11. [5-Dec-2020] Our paper “DC2: Delay-aware Compression Control for Distributed Machine Learning” has been published at IEEE INFOCOM 2021.

  12. [11-Oct-2020] Our paper “Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning” has been published at the DistributedML workshop of ACM CoNEXT 2020.