
Research

Light source estimation

This effort addresses the problem of determining the location, direction, intensity, and color of the illuminants in a given scene. The problem has a broad range of applications in augmented reality, robust robot perception, and general scene understanding. In our research, we model complex light interactions with a custom path tracer, capturing the effects of both direct and indirect illumination. Using a physically-based light model not only improves light source estimation, but will also play a critical role in future research on surface property estimation and geometry refinement, ultimately leading to more accurate and complete scene reconstruction systems.
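
As a simplified illustration of the inverse-rendering idea, the sketch below fits the direction and RGB intensity of a single distant light to an observed image of a Lambertian scene by minimizing pixel error. It is a minimal stand-in, not the system described above: it models only direct illumination with closed-form shading, whereas the path tracer also captures indirect light, and the alternating least-squares/finite-difference scheme and all names here are illustrative assumptions.

```python
import numpy as np

def render_lambertian(normals, albedo, light_dir, light_rgb):
    """Direct-only Lambertian shading: I = albedo * max(n . l, 0) * light_rgb.
    normals: (H, W, 3), albedo: (H, W, 3), light_dir: (3,), light_rgb: (3,)."""
    ndotl = np.clip(normals @ light_dir, 0.0, None)  # (H, W)
    return albedo * ndotl[..., None] * light_rgb[None, None, :]

def estimate_light(observed, normals, albedo, iters=200, lr=0.1):
    """Recover a distant light's direction and RGB intensity by least squares."""
    light_dir = np.array([0.0, 0.0, 1.0])
    light_rgb = np.ones(3)
    for _ in range(iters):
        # Closed-form color update given the current direction
        # (the model is linear in light_rgb, per channel).
        shading = albedo * np.clip(normals @ light_dir, 0.0, None)[..., None]
        light_rgb = ((shading * observed).sum(axis=(0, 1))
                     / ((shading * shading).sum(axis=(0, 1)) + 1e-9))
        # Finite-difference gradient step on the direction.
        base = ((render_lambertian(normals, albedo, light_dir, light_rgb)
                 - observed) ** 2).sum()
        grad = np.zeros(3)
        for k in range(3):
            d = light_dir.copy()
            d[k] += 1e-4
            d /= np.linalg.norm(d)
            err = ((render_lambertian(normals, albedo, d, light_rgb)
                    - observed) ** 2).sum()
            grad[k] = (err - base) / 1e-4
        light_dir = light_dir - lr * grad / (np.linalg.norm(grad) + 1e-9)
        light_dir /= np.linalg.norm(light_dir)
    return light_dir, light_rgb
```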

Publications:

  • Mike Kasper, Nima Keivan, Gabe Sibley, Christoffer Heckman. Light Source Estimation in Synthetic Images. In European Conference on Computer Vision, Virtual/Augmented Reality for Visual Artificial Intelligence Workshop 2016.

Sponsors:

  • Toyota grant 33643/1/ECNS20952N: Robust Perception.

Referring Expressions for Object Localization

Understanding references to objects based on attributes, spatial relationships, and other descriptive language expands the capability of robots to locate unknown objects (zero-shot learning), find objects in cluttered scenes, and communicate uncertainty to human collaborators. We are collecting a new set of annotations, SUNspot, for the SUN RGB-D scene understanding dataset. Unlike other referring expression datasets, SUNspot will focus on graspable objects in interior scenes, accompanied by the depth sensor data and full semantic segmentation from SUN RGB-D. Using SUNspot, we hope to develop a novel referring expression system that improves object localization for use in human-robot interaction.

Example expressions

[Two images: a kitchen with three highlighted objects (two vases of flowers and a calendar), and a classroom with two notebooks and an envelope highlighted.]

Kitchen:
(1) The calendar is hanging below the cupboards above the sink
(2) The flowers are on the corner of the counters, to the left of the range
(3) The flowers are on top of the white table in a clear vase

Classroom:
(1) The envelope is on a desk, left of a girl raising her hand
(2) Blue notebook closest to the girl in blue raising her hand
(3) The notebook is located on the front right corner of the desk
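
To make the grounding task concrete, here is a minimal sketch of how a referring expression system might score candidate objects by combining attribute matches with simple geometric spatial relations, in the spirit of the expressions above. The candidate list, attributes, and scoring rules are hypothetical placeholders, not the SUNspot annotation format or an actual trained model.

```python
import numpy as np

# Hypothetical candidate objects: category, color, and 2D box center (x, y).
candidates = [
    {"id": 0, "category": "notebook", "color": "blue",  "center": (120, 300)},
    {"id": 1, "category": "notebook", "color": "red",   "center": (400, 310)},
    {"id": 2, "category": "envelope", "color": "white", "center": (250, 295)},
]

def attribute_score(obj, category=None, color=None):
    """One point per attribute mentioned in the expression that matches."""
    score = 0.0
    if category and obj["category"] == category:
        score += 1.0
    if color and obj["color"] == color:
        score += 1.0
    return score

def relation_score(obj, anchor_xy, relation):
    """Simple geometric check of a spatial relation against an anchor point."""
    dx = obj["center"][0] - anchor_xy[0]
    if relation == "left of":
        return 1.0 if dx < 0 else 0.0
    if relation == "closest to":
        dist = np.hypot(dx, obj["center"][1] - anchor_xy[1])
        return 1.0 / (1.0 + dist)
    return 0.0

# Ground "blue notebook closest to the girl": girl detected at (150, 280).
girl = (150, 280)
scores = {o["id"]: attribute_score(o, "notebook", "blue")
          + relation_score(o, girl, "closest to") for o in candidates}
best = max(scores, key=scores.get)
print(best, scores)
```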

RAMFIS: Representations of Abstract Meaning for Information Synthesis

Humans can readily extract complex information from many different modalities, including spoken and written language as well as images and videos, and synthesize it into a coherent whole. This project aims to support automated synthesis of diverse multimedia information sources.

We propose a rich, multi-graph Common Semantic Representation (CSR) based on Abstract Meaning Representations (AMRs), enriched with vision and language vector representations and with temporal and causal relations between events, and supported by a detailed ontology of event and entity types.
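
As a rough sketch of what such a CSR could look like in code, the snippet below builds a small multi-graph whose nodes carry per-modality embeddings and whose edges mix AMR-style roles with temporal and causal relations. The node labels, embedding sizes, and merge rule are illustrative assumptions, not the project's actual schema.

```python
import numpy as np
import networkx as nx

# A CSR sketch: an AMR-style event graph whose nodes carry vision/language
# embeddings and whose edges include temporal/causal relations.
csr = nx.MultiDiGraph()

# Event and entity nodes with modality embeddings (random placeholders here).
csr.add_node("e1", type="event", label="attack-01",
             text_emb=np.random.rand(300), img_emb=np.random.rand(512))
csr.add_node("e2", type="event", label="evacuate-01",
             text_emb=np.random.rand(300), img_emb=np.random.rand(512))
csr.add_node("x1", type="entity", label="city", text_emb=np.random.rand(300))

# AMR-style role edges plus cross-event temporal/causal relations.
csr.add_edge("e1", "x1", role="ARG1")        # the city is attacked
csr.add_edge("e2", "x1", role="ARG1")        # the city is evacuated
csr.add_edge("e1", "e2", relation="before")  # temporal ordering
csr.add_edge("e1", "e2", relation="cause")   # causal link

def same_entity(a, b, thresh=0.9):
    """Merge criterion for fusing nodes across sources: cosine similarity
    of their text embeddings (a stand-in for real cross-document coref)."""
    u, v = csr.nodes[a]["text_emb"], csr.nodes[b]["text_emb"]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v)) > thresh
```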

Sponsored by DARPA #FA8750-18-2-0016: Active Interpretation of Disparate Alternatives (“AIDA”).


MARBLE: Multi-agent Autonomy with RADAR-Based Localization for Exploration


ARPG is a component of team MARBLE, a funded participant in the DARPA Subterranean Challenge. We are providing the autonomy, perception, and low-level planning algorithms for ground vehicle support in the project. The project kicked off in September 2018 and is ongoing, with competition events beginning in September 2019.

Sponsored by DARPA TTO Subterranean Challenge.

Compass

Compass is a visual-inertial simultaneous localization and mapping (SLAM) pipeline with an extensible frontend and an optimization backend based on the Ceres solver. Compass facilitates real-time localization and sparse mapping in challenging, low-light conditions such as subterranean environments. Recent developments include the use of depth measurements from stereo or RGB-D cameras, improved keyframing for visually challenging environments, a robust recovery mode for visual tracking failures, and adaptations for compatibility with low-power ARM-based hardware.
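
The core of such a backend is a robustified reprojection-error least-squares problem. The toy sketch below sets one up in Python with SciPy for a single camera pose; the actual Compass backend formulates analogous residuals in C++ with Ceres. The intrinsics, noise levels, and synthetic data here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(pose, points_w, pixels, fx, fy, cx, cy):
    """Residuals between observed pixels and world points projected through
    a camera pose (angle-axis rotation + translation, 6 parameters)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    p_cam = points_w @ R.T + t                # world -> camera frame
    u = fx * p_cam[:, 0] / p_cam[:, 2] + cx   # pinhole projection
    v = fy * p_cam[:, 1] / p_cam[:, 2] + cy
    return np.concatenate([u - pixels[:, 0], v - pixels[:, 1]])

# Toy setup: known landmarks, observations from a ground-truth pose + noise.
rng = np.random.default_rng(0)
points_w = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))
true_pose = np.array([0.05, -0.02, 0.01, 0.1, -0.05, 0.2])
obs = reprojection_residuals(true_pose, points_w, np.zeros((50, 2)),
                             500, 500, 320, 240).reshape(2, -1).T
obs = obs + rng.normal(scale=0.5, size=obs.shape)

# A robust (Huber) loss mirrors the loss functions typically used with Ceres.
sol = least_squares(reprojection_residuals, x0=np.zeros(6), loss="huber",
                    args=(points_w, obs, 500, 500, 320, 240))
print(sol.x)  # should approximate true_pose
```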

Publications:

  • Fernando Nobre, Christoffer Heckman. In International Symposium on Experimental Robotics 2016.
  • Fernando Nobre, Mike Kasper, Christoffer Heckman. In IEEE International Conference on Robotics and Automation 2017.

Sponsors:

  • DARPA #N65236-16-1-1000. DSO Seedling: Ninja Cars.
  • Toyota grant 33643/1/ECNS20952N: Robust Perception.

Parkour Cars

This project aims to develop high-fidelity real-time systems for perception, planning, and control of agile vehicles on challenging terrain, including jumps and loop-the-loops. Current research focuses on the local planning and control problem. Because the maneuvers are so demanding, the planning and control systems must consider the underlying physical model of the vehicle and terrain. This simulation-in-the-loop style of planning enables very accurate prediction and correction of the vehicle state, as well as the ability to learn precise attributes of the underlying physical model.
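
A minimal sketch of simulation-in-the-loop planning: roll candidate control sequences through a forward model of the vehicle and keep the best-scoring one. Here a kinematic bicycle model and random-sampling search stand in for the full physics engine and planner used in the project; all parameters are illustrative assumptions.

```python
import numpy as np

def simulate_bicycle(state, controls, dt=0.05, wheelbase=0.3):
    """Roll a kinematic bicycle model forward through a control sequence.
    state = (x, y, heading, speed); controls = sequence of (accel, steer)."""
    x, y, th, v = state
    traj = []
    for accel, steer in controls:
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += v / wheelbase * np.tan(steer) * dt
        v += accel * dt
        traj.append((x, y, th, v))
    return np.array(traj)

def plan(state, goal, horizon=20, samples=256, seed=1):
    """Sampling-based simulation-in-the-loop planner: simulate many random
    control sequences and keep the one whose endpoint lands nearest the goal."""
    rng = np.random.default_rng(seed)
    best_cost, best_controls = np.inf, None
    for _ in range(samples):
        controls = np.column_stack([
            rng.uniform(-2.0, 2.0, horizon),   # acceleration commands
            rng.uniform(-0.4, 0.4, horizon),   # steering commands
        ])
        traj = simulate_bicycle(state, controls)
        cost = np.linalg.norm(traj[-1, :2] - goal)
        if cost < best_cost:
            best_cost, best_controls = cost, controls
    return best_controls

controls = plan(state=(0.0, 0.0, 0.0, 1.0), goal=np.array([2.0, 1.0]))
```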

Sponsors:

  • NSF #1646556. CPS: Synergy: Verified Control of Cooperative Autonomous Vehicles.
  • DARPA #N65236-16-1-1000. DSO Seedling: Ninja Cars.
  • Toyota grant 33643/1/ECNS20952N: Robust Perception.