Ahmed Mohamed Abdelmoniem Sayed

Lecturer (Assistant Professor)
Head of SAYED Systems Group,
School of Electronic Engineering and Computer Science,
Queen Mary University of London, UK

Address: Office E153a, Engineering Building
Queen Mary University of London,
Mile End, London, UK

Email: ahmed.sayed@qmul.ac.uk

Links: SAYED Systems Group   Google Scholar   CV   ORCID   LinkedIn   GitHub   Blog   Disqus

Two fully-funded PhD studentships on Efficient Machine Learning and Future Intelligent Networks, co-supervised with Steve Uhlig, are available to Chinese applicants supported by the China Scholarship Council (CSC); the deadline is 31-Jan-2023. See THIS GUIDE for details. If you are eligible and interested, please reach out to me via email before the deadline.

About me

I am Ahmed M. A. Sayed (aka Ahmed M. Abdelmoniem), Assistant Professor at Queen Mary University of London, UK, where I lead the Scalable Adaptive Yet Efficient Distributed (SAYED) Systems Research Group. I hold a PhD in Computer Science and Engineering from the Hong Kong University of Science and Technology (HKUST), where I was advised by Brahim Bensaou. I previously held positions as a Senior Researcher at the Future Networks Lab, Huawei Research, Hong Kong, and as a Research Scientist at the SANDS Lab, KAUST, Saudi Arabia, working with Marco Canini. My early research focused on optimizing networked systems to improve application performance in both wireless and data-center networks, and on building efficient and practical systems for distributed machine learning. My current research involves designing and prototyping the networked and distributed systems of the future. In particular, I am interested in developing methods and techniques that enhance the performance of networked and distributed systems, with a present focus on scalable and efficient systems for distributed machine learning (especially distributed privacy-preserving ML, also known as Federated Learning).

Prospective Students

I’m always looking for bright and enthusiastic people to join my research group. If you would like to do a PhD with me, thank you for your interest, but please READ THIS FIRST and then reach out to me via email with your CV, transcripts, and a research statement/proposal (if possible).

Vacancies and Opportunities

  1. Two PhD projects, Efficient Machine Learning on Decentralized Data at Scale and Machine Learning Support for Future Intelligent Networks, are available to Chinese applicants through the China Scholarship Council (CSC) studentship. If interested, please reach out to me and see THIS GUIDE for more details - Deadline: 31-Jan-2023.

  2. Co-supervising PhD candidates with Poonam Yadav for studentships of the iGGi Centre for Doctoral Training, which funds both Home and International students. For potential topics see the Slides, and reach out to me or Poonam before the 26-Jan-2023 deadline.

  3. Applications are open for the S&E BAME PhD studentship for Home (UK) students; reach out to me if interested - Deadline: 31-Jan-2023.

  4. Applications are open for the Islamic Development Bank Scholarship for international students from eligible countries; reach out to me if interested - Deadline: 31-Jan-2023.

  5. Various other scholarships are available; find those you are eligible for by searching the QMUL scholarship database, then get in touch with me.

Active Grants and Collaborations

  1. 2022-Now EPSRC (REPHRAIN Centre), Moderation in Decentralised Social Networks (DSNmod), with Ignacio Castro and Gareth Tyson (QMUL), 81K GBP.

  2. 2022-Now HKRGC (GRF), ML Congestion Control in SDN-based Data Center Networks, with Brahim Bensaou (HKUST), 600K HKD.

  3. 2021-Now KAUST (CRG), Machine Learning Architecture for Information Transfer, with Marco Canini (KAUST) and Marco Chiesa (KTH), 400K USD.

Editorial and Organisation

  1. I am co-editing a research topic for Frontiers in HPC on HPC for AI in the Big Model Era; looking forward to your best submission - Abstract Deadline: 05-Jan-2023.

  2. I am co-organising the 5th International Workshop on Embedded and Mobile Deep Learning, held as part of ACM MobiSys 2021; looking forward to your best submission - Deadline: 07-May-2021.

News

Please check the publications webpage for the full list of publications along with their PDFs.

  1. [26-Oct-2022] Gave a talk on Practical and Efficient Federated Learning at the Institute for Computing Systems Architecture, University of Edinburgh, invited by Luo Mai. [Announcement] [Slides]

  2. [16-Aug-2022] Our paper “REFL: Resource-Efficient Federated Learning” was accepted at ACM EuroSys 2023. [ArXiv] [Code]

  3. [7-Aug-2022] Our paper “EAFL: Energy-Aware Federated Learning Framework on Battery-Powered Clients” was accepted at the ACM FedEdge workshop at MobiCom 2022. [ArXiv] [Presentation]

  4. [28-Jun-2022] Gave a talk on Practical and Efficient Federated Learning at the Department of Computing, Imperial College London, invited by Hamed Haddadi. [Slides]

  5. [9-Jun-2022] Our paper “Towards Efficient and Practical Federated Learning” was accepted at the CrossFL workshop at MLSys 2022.

  6. [5-Apr-2022] Our paper “Empirical Analysis of Federated Learning in Heterogeneous Environments” was published at the ACM EuroMLSys workshop at EuroSys 2022. [Paper]

  7. [6-Dec-2021] Our paper “Rethinking Gradient Sparsification as Total Error Minimization” was published as a spotlight paper at NeurIPS 2021, one of the premier AI/ML conferences. [Paper]

  8. [7-Jul-2021] Our paper “GRACE: A Compressed Communication Framework for Distributed Machine Learning” was published at IEEE ICDCS 2021.

  9. [7-Jun-2021] Our paper “A Two-Tiered Caching Scheme for Information-Centric Networks” was published at IEEE HPSR 2021.

  10. [7-Apr-2021] Our paper “Towards Mitigating Device Heterogeneity in Federated Learning via Adaptive Model Quantization” was published at the ACM EuroMLSys workshop at EuroSys 2021. [Paper]

  11. [22-Jan-2021] Our paper “T-RACKs: A Faster Recovery Mechanism for TCP in Data Center Networks” was published in IEEE/ACM Transactions on Networking (ToN), 2021.

  12. [18-Jan-2021] Our paper “An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems” was published at the Conference on Machine Learning and Systems (MLSys) 2021.

  13. [5-Dec-2020] Our paper “DC2: Delay-aware Compression Control for Distributed Machine Learning” was accepted at IEEE INFOCOM 2021.

  14. [11-Oct-2020] Our paper “Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning” was published at the DistributedML workshop at ACM CoNEXT 2020.