Student/Research Assistant: Developing techniques for assessing the robustness and biases of deep learning models and evaluating their impact on reproducibility.
Robust, bias-free deep learning models are essential for reproducibility in deep learning pipelines: they help ensure that models perform well across different scenarios and are not vulnerable to small variations in the input data. However, the impact of such variations on reproducibility is not well understood. The goal of this research is to develop methods that address this gap and thereby improve both model robustness and reproducibility.
Your tasks:
- Identify the most common types of dataset and model biases that affect deep learning models, and determine how they impact model performance and reproducibility.
- Implement existing methods for identifying and mitigating dataset biases, and evaluate how effective they are at improving model reproducibility.
- Quantify the effects of dataset biases on robustness, and determine which metrics should be used to evaluate these factors.
- Automate the detection and measurement of the impact of dataset biases on model reproducibility.
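To give a flavor of the kind of analysis the project involves, below is a minimal, purely illustrative sketch (not part of any existing project codebase; the `flip_rate` metric name and the toy classifier are hypothetical) that measures how often a model's predictions change under small Gaussian input perturbations — one simple way to quantify robustness:

```python
import numpy as np

def flip_rate(predict, X, sigma=0.01, n_trials=10, seed=0):
    """Fraction of predictions that change under small Gaussian
    input perturbations (illustrative robustness metric only)."""
    rng = np.random.default_rng(seed)
    base = predict(X)              # labels on the clean inputs
    flips, total = 0, 0
    for _ in range(n_trials):
        noisy = X + rng.normal(0.0, sigma, size=X.shape)
        flips += int(np.sum(predict(noisy) != base))
        total += len(base)
    return flips / total

# Toy stand-in classifier: label 1 if the feature sum is positive.
predict = lambda X: (X.sum(axis=1) > 0).astype(int)

# Two confident samples and one borderline sample near the boundary.
X = np.array([[0.5, 0.4], [-0.6, -0.2], [0.01, -0.005]])
print(flip_rate(predict, X, sigma=0.05))
```

A robust model would keep this rate near zero even for moderate `sigma`; a model whose decisions flip under tiny perturbations is both fragile and hard to reproduce.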
Your profile:
- Bachelor's/Master's student in computer science, applied mathematics, or a related subject.
- Expertise in machine learning, ideally in deep neural networks (DNNs), e.g., convolutional neural networks (CNNs) for image segmentation and classification.
- Proficiency in Python.
- Hands-on experience with deep learning toolkits like TensorFlow, PyTorch, etc.
- Strong analytical thinking and communication skills.
What we offer:
- Work on a real research project with practical orientation.
- Flexible working hours.
- 40 or 60 hours of work per month.
- Possibility to work remotely, in-person, or hybrid.
Apply: Please send your complete application documents (curriculum vitae and current university transcript) to one of the email addresses below by March 17th, 2023. Email: firstname.lastname@example.org OR email@example.com