This thesis focuses on designing, evaluating, and deploying algorithms for robot perception and control. The main task is predicting both self-motion and the motion of surrounding agents to enable safe navigation, using the event camera as the primary sensor. The approach leverages prior information such as object detection, optical flow, and depth, while also integrating multimodal inputs including IMUs, radar, and standard cameras to ensure robust perception in complex environments.
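To make the event-camera setting concrete, here is a minimal sketch of a common preprocessing step: accumulating an asynchronous event stream into a signed event-count frame that downstream models (e.g. for optical flow or detection) can consume. The function name, event layout, and NumPy implementation are illustrative assumptions, not part of the project description.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a stream of events into a signed count image.

    `events` is an (N, 4) array of (x, y, timestamp, polarity) rows,
    with polarity in {-1, +1} (layout assumed for illustration).
    Each event adds its polarity to its pixel, so the frame encodes
    net brightness-change activity over the accumulation window.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    pols = events[:, 3]
    # np.add.at accumulates correctly even when the same pixel
    # fires multiple times within the window
    np.add.at(frame, (ys, xs), pols)
    return frame

# Two ON events and one OFF event at pixel (x=2, y=1), one ON at (0, 0)
events = np.array([
    [2, 1, 0.001, +1.0],
    [2, 1, 0.002, +1.0],
    [2, 1, 0.003, -1.0],
    [0, 0, 0.004, +1.0],
])
frame = events_to_frame(events, height=4, width=4)
```

In practice, richer representations (time surfaces, voxel grids) preserve more temporal structure, but a count frame like this is often the simplest baseline input for a CNN.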
Benefits
- A world-leading, interdisciplinary, and international research environment with state-of-the-art experimental equipment and versatile opportunities
- Qualified support through your scientific colleagues
- The chance to prepare and work on your tasks independently
- Flexible working hours and appropriate remuneration
Requirements
- Current master's studies in physics, computer science, mathematics, electrical/electronic engineering, or a related field
- Prior programming experience in Python is a must; C++ and CUDA experience is a plus
- Familiarity with PyTorch and modern deep learning frameworks
- Experience in training and evaluating computer vision models
- Experience with event-based cameras, neuromorphic vision concepts, spiking neural networks, and/or neuromorphic computing is a plus
- Experience with, or willingness to learn, ROS 2 for robotic system integration, and a strong interest in robot perception, computer vision, and sensor fusion
Application Deadline
Not Specified
How To Apply
Are you qualified and interested in this opportunity? Kindly apply via the Forschungszentrum Jülich website at www.fz-juelich.de.
For more information, visit the Forschungszentrum Jülich scholarship webpage.