Dr.-Ing. Zahra Ebrahimi
Dr.-Ing. Zahra Ebrahimi obtained her Ph.D. from TU Dresden in 2025, where she also worked as a project manager and research associate at the Center for Advancing Electronics Dresden (cfaed). Since 2018, she has worked on the following projects:
- GREEN-DNN: Energy-Efficient Distributed Deployment of AI Models to 5G/6G Network Nodes, BMDV/Acatech grant (AI founder fellowship), 2025
- X-DNet: Energy-Efficient Distributed and In-Network Computing via Approximation of Applications and Accelerators, BMBF grant (collaboration with Huawei), 2023-2025.
- X-ReAp: Cross(X)-Layer Runtime Reconfigurable Approximate Architecture, DFG grant, 2023-2025.
- Re-Learning: Self-Learning and Flexible Electronics Through Inherent Component Reconfiguration, ESF grant, 2021-2023.
- ReAp: Run-time Reconfigurable Approximate Architecture, DFG grant, 2018-2021.
Zahra's research interests include energy efficiency and sustainability in the edge-to-cloud continuum, 5G/6G networks, approximate computing, reconfigurable accelerator design, SW/HW co-design, and embedded systems.
Room: ID 2/645, E-Mail: zahra.ebrahimi[at]rub.de
Topics for theses, Master projects, SHK/WHK positions, and internships
- Approximation of Machine Learning Models for High-Throughput, Energy-Efficient, and Sustainable Computing in 5G/6G Era
To reduce the energy consumption and/or the response time of ML applications, various computing approaches have emerged in the 5G/6G era, including Federated Learning, Distributed Inference, and In-Network Computing. However, to execute cutting-edge, compute-intensive models (e.g., LLMs and DNNs) on resource-constrained devices in the edge-to-cloud continuum, the structure of such models must be optimized without compromising the final quality of results. In this context, Approximate Computing techniques have proven highly effective by exploiting the inherent error resiliency of ML models. Given this potential, the main idea of this project is to find and apply a combination of suitable approximation techniques that reduce the area, power, and energy footprint of ML models and boost their performance while satisfying the accuracy requirements of the users; a minimal illustrative code sketch follows the skills list below.
Required Skills:
- FPGA development and programming: Verilog or VHDL, C++, and Python
- High-Level Synthesis: Vivado and Vitis HLS
- ML: TensorFlow and/or PyTorch; experience in optimizing the structure of medium-to-large ML models
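As a minimal sketch of the idea described above (not a project deliverable), the PyTorch snippet below applies post-training dynamic quantization, one common approximation technique, to a small model. The model architecture and layer sizes are placeholder assumptions chosen purely for illustration.

import torch
import torch.nn as nn

# Hypothetical small model standing in for a compute-intensive DNN.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Replace the float32 weights of the Linear layers with int8
# approximations; activations are quantized on the fly at inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model approximates the original output, trading a
# small accuracy loss for lower memory and energy use.
x = torch.randn(1, 784)
print(model(x))
print(quantized(x))

In an actual project, the resulting accuracy drop would be measured against the user's quality requirement before selecting and deploying such an approximation.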