Ahmed Mohamed Abdelmoniem Sayed


Lecturer (Assistant Professor) & Director of MSc BDS Programme
Lead of SAYED Systems Group
School of Electronic Engineering and Computer Science,
Queen Mary University of London, UK

Address: Office E153a, Engineering Building
Queen Mary University of London,
Mile End, London, UK

Email: ahmed.sayed@qmul.ac.uk

Links: GoogleScholar   CV   OrcID   LinkedIn   Github   Proposal Guide   Proposal Template

About me

I am Ahmed M. A. Sayed (a.k.a. Ahmed M. Abdelmoniem), a Lecturer (Assistant Professor) and the Director of the MSc Big Data Science Programme at the School of EECS, Queen Mary University of London, UK. I lead the SAYED Systems Group, where we strive to design and build Scalable Adaptive Yet Efficient Distributed systems of the future. I am the Principal Investigator of a UKRI-EPSRC New Investigator Award grant for Project KUber, in partnership with major industrial players (Nokia Bell Labs, Samsung AI, and IBM Research). I hold a PhD in Computer Science and Engineering from the Hong Kong University of Science and Technology (HKUST), where I was advised by Brahim Bensaou. I previously held the positions of Senior Researcher at the Future Networks Lab, Huawei Research, Hong Kong, and Research Scientist at the SANDS Lab, KAUST, Saudi Arabia, working with Marco Canini. My early research optimized networked systems to improve application performance in both wireless and data-center networks, and proposed efficient and practical systems for distributed machine learning. My current research focuses on designing and prototyping the networked and distributed systems of the future. In particular, I am interested in developing methods and techniques that enhance the performance of networked and distributed systems, with a current emphasis on scalable and efficient systems for distributed machine learning (especially distributed privacy-preserving ML, a.k.a. Federated Learning).

Prospective Students and PostDocs

I’m always looking for bright and enthusiastic people to join my research group.

  1. If you are looking to do a PhD with me, thank you for your interest, but please READ THIS FIRST and then reach out to me via email with your CV, transcripts, and research statement/proposal.

  2. For PostDocs, please observe the deadlines in the PostDoc Fellowships List and reach out to discuss a proposal for the MSCA Fellowship, Royal Fellowship, Schlumberger Fellowship, or Turing Fellowship.

Vacancies and Opportunities

  1. I am accepting candidates applying for, or already awarded, COMMONWEALTH PHD SCHOLARSHIPS (eligible applicants only); check HERE for more details and reach out to me before the deadline of 17-Oct-2023.

  2. There are various scholarships available; check which ones you are eligible for by searching the QMUL scholarship database, and then get in touch with me.

Grants and Funding

  1. Starting 2024 UKRI-EPSRC (New Investigator Award), Knowledge Delivery System for Machine Learning at Scale (KUber), 650K GBP.

  2. 2022-Now EPSRC (REPHRAIN Center), Moderation in Decentralised Social Networks (DSNmod), with Ignacio Castro and Gareth Tyson (QMUL), 81K GBP.

  3. 2022-Now HKRGC (GRF), ML Congestion Control in SDN-based Data Center Networks, with Brahim Bensaou (HKUST), 600K HKD.

  4. 2021-Now KAUST (CRG), Machine Learning Architecture for Information Transfer, with Marco Canini (KAUST) and Marco Chiesa (KTH), 400K USD.

Editorial and Organisation

  1. I have access to JISC OA agreements that support free open-access publishing with a wide range of publishers; see the List of 58 Publishers. Please reach out to me for more details.

  2. I have access to the Springer-Egypt OA agreement, which supports free open-access publishing in all Open-Access and Hybrid journals from Springer and SpringerNature. Please reach out to me for more details.

  3. I am co-editing the research topic HPC for AI in the Big Model Era for Frontiers in HPC; looking forward to your best submission. Abstract deadline: 05-Jan-2023.

  4. I am co-organising the 5th International Workshop on Embedded and Mobile Deep Learning, held as part of ACM MobiSys 2021; looking forward to your best submission. Deadline: 07-May-2021.


News

Please check the publications webpage for the full list of publications along with their PDFs.

  1. [4-Jul-2023] A joint paper with Prof. Chen Wang titled “Knowledge Representation of Training Data with Adversarial Examples Supporting Decision Boundary” was accepted to IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2023.

  2. [2-Jul-2023] I presented our work “REFL: Resource-Efficient Federated Learning” at the Fifth UK Mobile, Wearable and Ubiquitous Systems Research Symposium (MobiUK), 2023. [Abstract]

  3. [28-June-2023] Gave an invited keynote speech at the SAILINGS Lab, Harbin Institute of Technology, China, on “Towards Practical and Efficient Federated Learning”. [Event Link]

  4. [21-June-2023] Gave an invited keynote speech at the Super User Network Summit in London, UK, on “Big Data, Machine Learning and Federated Learning”. [Event Link]

  5. [25-May-2023] Recorded a podcast episode on Systems Research and Federated Learning (including our ACM EuroSys work REFL) for Disseminate: The Computer Science Research Podcast, hosted by Jack Waudby.

  6. [17-May-2023] Gave a talk on Practical and Efficient Federated Learning at the Institute of Communication Systems, University of Surrey invited by Ahmed Elzanaty.

  7. [26-Feb-2023] Our paper “Enhancing TCP via Hysteresis Switching: Theoretical Analysis and Empirical Evaluation“ is accepted in the IEEE/ACM Transactions on Networking (ToN), 2023. [Conference Version]

  8. [10-Feb-2023] Our paper “A Comprehensive Empirical Study of Heterogeneity in Federated Learning“, is accepted in IEEE Internet of Things (IoT) Journal, 2023. [ArXiv]

  9. [18-Jan-2023] Our paper “A2FL: Availability-Aware Selection for Machine Learning on Clients with Federated Big Data“, is accepted in IEEE ICC, 2023. [Detailed Paper] [Conference Paper] [Slides]

  10. [26-Oct-2022] Gave a talk on Practical and Efficient Federated Learning at the Institute of Computing Systems Architecture, University of Edinburgh, invited by Luo Mai.

  11. [16-Aug-2022] Our paper “REFL: Resource-Efficient Federated Learning“ is accepted in ACM EuroSys, 2023. [ArXiv] [Code]

  12. [7-Aug-2022] Our paper “EAFL: Energy-Aware Federated Learning Framework on Battery-Powered Clients”, is accepted in ACM FedEdge workshop at MobiCom, 2022. [ArXiv] [Presentation]

  13. [28-Jun-2022] Gave a talk on Practical and Efficient Federated Learning at Department of Computing, School of Engineering, Imperial College London invited by Hamed Haddadi. [Slides]

  14. [9-Jun-2022] Our paper “Towards Efficient and Practical Federated Learning”, is accepted in ACM CrossFL workshop at MLSys, 2022.

  15. [5-Apr-2022] Our paper “Empirical analysis of federated learning in heterogeneous environments”, is published in ACM EuroMLSys workshop at EuroSys, 2022. [Paper]

  16. [6-Dec-2021] Our paper “Rethinking gradient sparsification as total error minimization” is published as a spotlight paper at the premier AI/ML conference NeurIPS, 2021. [Paper]

  17. [7-Jul-2021] Our paper “Grace: A compressed communication framework for distributed machine learning”, is published in IEEE ICDCS, 2021.

  18. [7-Jun-2021] Our paper “A Two-tiered Caching Scheme for Information-Centric Networks”, is published in IEEE HPSR, 2021.

  19. [7-Apr-2021] Our paper “Towards mitigating device heterogeneity in federated learning via adaptive model quantization”, is published in ACM EuroMLSys workshop at EuroSys, 2021. [Paper]

  20. [22-Jan-2021] Our paper “T-RACKs: A Faster Recovery Mechanism for TCP in Data Center Networks”, is published in the IEEE/ACM Transactions on Networking (ToN), 2021.

  21. [18-Jan-2021] Our paper “An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems”, is published in International Conference on ML Systems (MLSys) 2021.

  22. [5-Dec-2020] Our paper “DC2: Delay-aware Compression Control for Distributed Machine Learning” is published in IEEE INFOCOM 2021.

  23. [11-Oct-2020] Our paper “Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning” is published in DistributedML workshop of ACM CoNEXT, 2020.