M. Sc. Zahra Ebrahimi
Zahra obtained her B.Sc. and M.Sc. degrees from Sharif University of Technology in Iran, where she also worked as a researcher in the DSN lab. From 2018 to 2024, she worked at Cfaed, TU Dresden (where she is also a PhD student) on the following projects:
- ReAp: Run-time Reconfigurable Approximate Architecture, DFG grant, 2018-2021.
- Re-Learning: Self-Learning and Flexible Electronics Through Inherent Component Reconfiguration, ESF grant, 2021-2023.
- X-ReAp: Cross(X)-Layer Runtime Reconfigurable Approximate Architecture, DFG grant, 2023-2025.
Zahra also manages her own BMBF-funded X-DNet project (a collaboration with Huawei) and the acatech/BMDV-funded GREEN-DNN project. Zahra's research interests include approximate computing, reconfigurable accelerator design, in-network computing for 5G/6G, energy efficiency and sustainability in the edge-to-cloud continuum, SW/HW co-design, and embedded systems.
Room: ID 2/645, E-Mail: zahra.ebrahimi[at]rub.de
Topics for theses, master projects, SHK/WHK positions, and internships
- Approximation of Machine Learning Models for High-Throughput, Energy-Efficient, and Sustainable Computing in 5G/6G Era
To reduce the energy consumption and/or response time of ML applications, various computing approaches have emerged in the 5G/6G era, including Federated Learning, Distributed Inference, and In-Network Computing. However, to enable the execution of many cutting-edge and compute-intensive models (e.g., LLMs and DNNs) on resource-constrained devices in the edge-to-cloud continuum, the structure of such models must be optimized without compromising the final quality of results. In this context, Approximate Computing techniques have been shown to provide highly beneficial solutions by exploiting the inherent error resiliency of ML models. Considering this potential, the main idea of this project is to find and apply a combination of suitable approximation techniques that reduce the area/power/energy of ML models and boost their performance while satisfying the accuracy requirements of the users.
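As a toy illustration of one such approximation technique, the sketch below shows uniform weight quantization in plain Python: float weights are mapped to low-bit signed integers and back, trading a bounded accuracy loss for smaller storage and cheaper arithmetic. The function name and values are hypothetical and chosen only for illustration, not part of any project codebase.

```python
def quantize(weights, bits=8):
    """Approximate float weights with signed fixed-point integers.

    Returns the dequantized (approximated) weights and the scale factor.
    """
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit signed
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax                  # one quantization step
    q = [round(w / scale) for w in weights]
    return [v * scale for v in q], scale

weights = [0.82, -0.41, 0.05, -0.97]
approx, scale = quantize(weights, bits=4)
# Rounding error is bounded by half a quantization step.
max_err = max(abs(a - w) for a, w in zip(approx, weights))
assert max_err <= scale / 2 + 1e-12
```

With only 4 bits the representable levels are coarse, so the error-resilient layers of a DNN can use fewer bits (saving area, power, and memory bandwidth) while sensitive layers keep higher precision, which is the kind of trade-off this project explores.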
Required Skills:
- FPGA development and programming: Verilog or VHDL, C++, and Python
- High-Level Synthesis: Vivado and Vitis HLS
- ML: TensorFlow and/or PyTorch, experience in optimizing the structure of medium to large ML models