Cornell University

Title: Stochastic Control Barrier Functions for Robot Safety

Abstract: Control barrier functions certify safety for deterministic systems by rendering a set of safe states forward invariant, but these guarantees are only as strong as the underlying model. For contact-rich robotics tasks like manipulation and locomotion, unmodeled dynamics and stochastic disturbances render such certificates unreliable. Rather than seeking more accurate deterministic models, I'll discuss how classical martingale techniques for discrete-time stochastic stability (Kushner, 1967) can be adapted to construct probabilistic forward invariance guarantees that explicitly account for model uncertainty.
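As background for the deterministic certificate mentioned above, here is the standard discrete-time control barrier function condition (textbook form, not details taken from the talk):

```latex
% Safe set defined by a barrier function h:
\mathcal{C} = \{\, x : h(x) \ge 0 \,\}.
% Discrete-time CBF condition: for some \alpha \in (0, 1],
h(x_{k+1}) \;\ge\; (1-\alpha)\, h(x_k) \quad \text{along closed-loop trajectories.}
% By induction, h(x_0) \ge 0 implies h(x_k) \ge 0 for all k,
% i.e., \mathcal{C} is forward invariant.
```

Under stochastic disturbances this inequality can fail on individual sample paths, which is what motivates the martingale-based relaxation discussed next.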

The approach enforces a supermartingale condition on barrier-like functions, yielding bounds on the probability of constraint violation under state-dependent stochastic disturbances. Since real disturbance distributions are typically unknown and non-Gaussian, we learn them from data using deep generative models, then synthesize controllers under the learned stochasticity. Real-world results on quadrotor flight with unmodeled payloads and humanoid locomotion demonstrate that explicitly modeling and accounting for model error can produce more robust behavior in practice.
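To make the supermartingale idea concrete, here is a minimal Monte Carlo sketch of the classical bound (Ville's inequality): if a barrier-like function satisfies B(x) >= 0 and E[B(x_{k+1}) | x_k] <= B(x_k), then P(sup_k B(x_k) >= lambda) <= B(x_0)/lambda. The system, barrier, and all constants below are illustrative choices of ours, not the speaker's setup:

```python
import numpy as np

# Toy closed-loop system with multiplicative noise (hypothetical example):
# x_{k+1} = (a + eps_k) * x_k,  eps_k ~ N(0, sigma^2).
# Barrier B(x) = x^2; the "unsafe" event is B(x) >= lam.
a, sigma, lam, x0 = 0.8, 0.3, 4.0, 1.0
N, trials = 50, 20000

# Supermartingale check: E[B(x_{k+1}) | x_k] = (a^2 + sigma^2) * B(x_k),
# and a^2 + sigma^2 = 0.73 <= 1, so B is a nonnegative supermartingale.
assert a**2 + sigma**2 <= 1.0

rng = np.random.default_rng(0)
x = np.full(trials, x0)
ever_unsafe = np.zeros(trials, dtype=bool)
for _ in range(N):
    x = (a + sigma * rng.standard_normal(trials)) * x
    ever_unsafe |= x**2 >= lam  # did this trajectory ever leave the safe set?

emp = ever_unsafe.mean()   # empirical violation frequency over all trials
bound = x0**2 / lam        # Ville's inequality: P(sup_k B(x_k) >= lam) <= B(x0)/lam
print(f"empirical violation rate {emp:.4f} <= bound {bound:.2f}")
assert emp <= bound
```

The empirical violation frequency stays below the a priori bound of B(x_0)/lambda = 0.25; the talk's contribution is enforcing such a condition by control synthesis when the disturbance distribution is itself learned from data.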

Bio: Preston Culbertson is an Assistant Professor of Computer Science at Cornell. He leads the Praxis Lab, which studies how robots can learn to be robust and reliable in the real world.

His research seeks to understand robustness in robot learning. His work integrates machine learning, numerical optimization, and control theory to study how robots can remain reliable when models, sensors, or hardware are imperfect. The goal is to develop robotic systems that can manage uncertainty, adapt, and improvise when deployed in messy real-world environments.

