SURF@JPL: Announcements of Opportunity
Announcements of Opportunity are posted by JPL technical staff for the SURF@JPL program. Each AO indicates whether or not it is open to non-Caltech students. If an AO is NOT open to non-Caltech students, please DO NOT contact the mentor.
Announcements of Opportunity are posted as they are received. Please check back regularly for new AO submissions!
**Students applying for JPL projects should complete a SURF@JPL application instead of a "regular" SURF application.
**Students pursuing opportunities at JPL must be U.S. citizens or U.S. permanent residents.
Project: Perception for autonomous fast off-road driving (JPL AO No. 14167)
Disciplines: Computer Vision and IoT, Robotics
Mentor: Deegan Atha (JPL), deegan.j.atha@jpl.nasa.gov
Background: Current robotic space missions have limited autonomy and rely heavily on human-in-the-loop decisions. To enable long-range driving and exploration of more hazardous sites, JPL is researching novel perception technologies for faster, safer navigation in hazardous terrain, including under the Robotic Autonomy in Complex Environments with Resiliency (RACER) task. This task entails working on a large applied-research team focused on creating an autonomous fast off-road vehicle; the project aims to push the boundary of vehicle speed while driving off-road, both on and off trail.
Description: This student project will focus on semantic perception. The vehicle carries several sensing modalities, including RGB, IR, lidar, stereo, NDVI, and radar, along with several large GPUs. The work centers on developing and researching learned models that perform semantic inference in off-road terrain, with the option of exploiting multiple modalities, and will involve handling large datasets and building tools. Additionally, the semantics team has open research questions in self-supervised learning, as a replacement for or complement to existing fully supervised, hand-labeled perception datasets, and in modeling or learning the uncertainty of semantic predictions.
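To make the terms above concrete for applicants, here is a minimal PyTorch sketch of two ideas the description mentions: fusing features from two image-aligned modalities for semantic segmentation, and Monte Carlo dropout as one simple way to estimate per-pixel uncertainty. Everything here is an illustrative assumption: the architecture, layer sizes, class count, and the names `FusionSegNet` and `mc_dropout_predict` are invented for this example and are not part of the RACER perception stack.

```python
# Illustrative only: a tiny two-stream segmentation net with mid-level
# fusion (RGB + one aligned auxiliary channel, e.g. NDVI or IR) and
# Monte Carlo dropout for a rough per-pixel uncertainty estimate.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class FusionSegNet(nn.Module):
    """Hypothetical example: separate encoders per modality, fused
    features, then a 1x1 head producing per-pixel class logits."""

    def __init__(self, num_classes: int = 5, aux_channels: int = 1):
        super().__init__()
        self.rgb_enc = conv_block(3, 32)
        self.aux_enc = conv_block(aux_channels, 32)
        self.fuse = conv_block(64, 64)     # concatenated modality features
        self.drop = nn.Dropout2d(p=0.2)    # reused at test time for MC dropout
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, rgb: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.rgb_enc(rgb), self.aux_enc(aux)], dim=1)
        return self.head(self.drop(self.fuse(feats)))


@torch.no_grad()
def mc_dropout_predict(model: FusionSegNet, rgb, aux, samples: int = 8):
    """Average softmax over stochastic forward passes; return the mean
    class probabilities and the per-pixel predictive entropy.

    Note: model.train() keeps dropout active but also switches BatchNorm
    to batch statistics; real code would enable only the dropout layers."""
    model.train()
    probs = torch.stack(
        [F.softmax(model(rgb, aux), dim=1) for _ in range(samples)]
    ).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return probs, entropy


if __name__ == "__main__":
    net = FusionSegNet()
    rgb = torch.randn(1, 3, 128, 128)    # dummy camera frame
    ndvi = torch.randn(1, 1, 128, 128)   # dummy aligned auxiliary channel
    probs, entropy = mc_dropout_predict(net, rgb, ndvi)
    print(probs.shape, entropy.shape)    # (1, 5, 128, 128) and (1, 128, 128)
```

Feature-level (mid) fusion and MC dropout are only two of many possible design choices; the project's open questions around self-supervision and uncertainty modeling extend well beyond this toy example.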
References:
"RELLIS-3D: A Multi-modal Dataset for Off-Road Robotics"
"Real-time Semantic Mapping for Autonomous Off-Road Navigation"
"Self-Supervised Model Adaptation for Multimodal Semantic Segmentation"
"Where Should I Walk? Predicting Terrain Properties From Images Via Self-Supervised Learning"
"Season-Invariant Semantic Segmentation with a Deep Multimodal Network"
Student Requirements: Python, C++, ROS, Linux, PyTorch, camera models and geometry, CNNs
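As one concrete reading of the "camera models and geometry" prerequisite, the snippet below projects 3D points expressed in the camera frame onto the image plane with a pinhole intrinsic matrix, the basic operation behind registering lidar returns to camera pixels. The intrinsic values and points are made up for illustration.

```python
# Illustrative pinhole projection: camera-frame 3D points -> pixel coords.
import numpy as np

# Hypothetical intrinsics: focal lengths (fx, fy), principal point (cx, cy).
K = np.array([
    [500.0,   0.0, 320.0],
    [  0.0, 500.0, 240.0],
    [  0.0,   0.0,   1.0],
])


def project_points(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates,
    dropping points at or behind the image plane (z <= 0)."""
    pts = points_cam[points_cam[:, 2] > 0]
    uvw = (K @ pts.T).T              # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth


if __name__ == "__main__":
    lidar_pts = np.array([
        [1.0, 0.5, 4.0],
        [-0.2, 0.1, 2.0],
        [0.0, 0.0, -1.0],   # behind the camera; filtered out
    ])
    print(project_points(lidar_pts, K))  # two (u, v) rows
```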
Location / Safety: Project building and/or room locations: (not specified). Student will need special safety training: (not specified).
Programs: This AO can be done under the following programs: (not listed in this record)