Vincent Pacelli

Postdoctoral Fellow · Georgia Institute of Technology

I am a postdoctoral fellow in the ACDS Lab at Georgia Tech, supervised by Evangelos Theodorou. I received my Ph.D. from Princeton University, where I worked in the IRoM Lab with Anirudha Majumdar. Before that, I earned my B.S.E. in Electrical Engineering and M.S.E. in Robotics from the University of Pennsylvania, where my thesis was supervised by Daniel E. Koditschek.

My research develops principled methods that identify and exploit task-relevant structure in the stochastic optimal control and learning problems found in robotics. I use variational principles from information theory and statistical mechanics to formally characterize which aspects of a problem's structure are relevant to a task, and then build that characterization into algorithms with provable guarantees, both analytically and through data-driven methods. This perspective applies to both sensing (what information should a controller use?) and computation (how should an algorithm be structured for a specific problem class?). It also yields fundamental limits on achievable performance: bounds on what any controller can accomplish given a system's dynamics and sensors.

Recently, I co-wrote a grant proposal on developing a generalization theory for diffusion models, which is now funded under the DARPA AIQ Program. I also teach AE4803 RO2: Robotics & Autonomy, an undergraduate robotics course, at Georgia Tech.

Google Scholar GitHub

Research

Task-Driven Representations for Robust Control

When attempting to catch a ball, an agent can estimate its position and velocity, model environmental factors like wind, and integrate the resulting equations to predict where it will land. Cognitive psychology studies show that humans instead maintain the angle of gaze at a constant value—a strategy that reduces many hard-to-monitor variables into a single quantity whose invariance to them is precisely what makes it robust.

My work develops a principled framework for automatically synthesizing such representations. I formulate control with an information bottleneck that compresses the full state into a minimal set of task-relevant variables while jointly optimizing a policy over them, and develop algorithms for this problem both in model-based settings and via reinforcement learning from high-dimensional observations such as images. A central theoretical result connects information-constrained control to differential privacy: the resulting policies are formally insensitive to perturbation of any individual state variable, yielding an explicit bound on performance degradation that depends only on the cost under perfect state information, the estimation error magnitude, and the information constraint—not on a model of the noise.
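In its simplest form, the joint optimization can be sketched as follows. The notation here is illustrative rather than taken verbatim from the papers: a stochastic encoder $q(\tilde{x}_t \mid x_t)$ compresses the state into a representation $\tilde{x}_t$, and a policy $\pi(u_t \mid \tilde{x}_t)$ acts on that representation, trading expected cost against the information the representation retains about the state:

```latex
\min_{q(\tilde{x}_t \mid x_t),\; \pi(u_t \mid \tilde{x}_t)}\;
\mathbb{E}\!\left[\sum_{t} c(x_t, u_t)\right]
\;+\; \beta \sum_{t} I(x_t;\, \tilde{x}_t)
```

Here $\beta > 0$ sets the price of information: larger values force the encoder to discard state variables whose omission does not substantially raise the achievable cost.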

Stochastic Control / Reinforcement Learning Differential Privacy Task-Driven Representations

Fundamental Limits of Sensor-Based Control

Measurement uncertainty places a fundamental limit on the performance any feedback controller can achieve, regardless of its computational sophistication. Characterizing these limits allows engineers to benchmark controllers against what is physically achievable, guide sensor selection, and certify whether a task is feasible with a given sensing modality. My work with Anirudha Majumdar and Zhiting (May) Mei established the first general-purpose bounds of this kind for robotics, using a generalization of Fano’s inequality applied to a quantity we call the task-relevant information potential (TRIP).

I have since found that these bounds are a special case of the Gibbs variational principle—the mathematical foundation of the Second Law of Thermodynamics—which extends them to unbounded costs and continuous control spaces and is never looser. The free energy formulation also reveals a self-consistency structure: any controller that achieves low cost must concentrate the state distribution, which limits how much information the sensor can extract, which in turn tightens the bound. Formalizing this feedback loop produces a fixed-point equation whose solution is provably tighter than the naive estimate and computable via bisection. On both analytical and numerical examples, the naive bound becomes vacuous as sensor quality improves, while the self-consistent bound remains informative across all noise levels.
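One common statement of the Gibbs variational principle referenced above, written in generic notation rather than the papers' own, equates a free energy with an exact cost–relative-entropy trade-off:

```latex
-\tfrac{1}{\lambda}\,\log \mathbb{E}_{P}\!\left[e^{-\lambda\, c(X)}\right]
\;=\; \min_{Q}\;\left\{\, \mathbb{E}_{Q}\!\left[c(X)\right]
\;+\; \tfrac{1}{\lambda}\,\mathrm{KL}\!\left(Q \,\|\, P\right) \right\}
```

Rearranged, it lower-bounds the expected cost $\mathbb{E}_{Q}[c]$ of any controller-induced distribution $Q$ by the free energy on the left minus $\tfrac{1}{\lambda}\mathrm{KL}(Q\|P)$; bounding the divergence term by the information a sensor can supply is what turns this identity into a performance limit.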

Fundamental Limits Statistical Mechanics Sensor Design

Optimal Transport for Control and Learning

Many problems in robotics and machine learning reduce to steering or comparing probability distributions—whether pushing a swarm of trajectory samples toward low-cost regions in model predictive control, shaping a state distribution to satisfy safety constraints, or training a generative model to transform noise into data. Optimal transport (OT) is the natural mathematical framework for such problems because, unlike information-theoretic divergences, it accounts for the geometry of the underlying space.

My postdoctoral work both advances the methods used to solve OT problems and applies them in multiple contexts across robotics and AI. The clearest example is sampling-based MPC, where replacing the KL divergence in the control-as-inference formulation with an entropy-regularized OT objective respects the geometry of the robot's task and eliminates pathologies like mode averaging (where a robot averages two trajectories around an obstacle into one that steers into the obstacle). The same geometric perspective informs my work on covariance steering, where an operator-splitting scheme decomposes the distribution-steering problem into parallelizable subproblems for real-time safe planning, demonstrated on a VTOL drone. It also underlies my work on Schrödinger bridge generative models, themselves entropy-regularized OT problems, where I develop task-informative priors that dramatically accelerate training and a distributionally robust formulation that guards against distribution shift. This line of work led to a DARPA-funded project connecting the stochastic control formulation of diffusion models to PAC-Bayes generalization theory.
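Entropy-regularized OT problems of the kind discussed above are typically solved with Sinkhorn iterations. The sketch below is a minimal numpy illustration of the idea, not the code behind these projects: it computes a transport plan between two histograms on a line, with a squared-distance ground cost supplying the geometry that a KL divergence would ignore.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, iters=500):
    """Entropy-regularized OT plan between histograms a and b.

    C is the ground-cost matrix; eps controls the entropic smoothing
    (smaller eps approaches unregularized optimal transport).
    """
    K = np.exp(-C / eps)          # Gibbs kernel from the ground cost
    u = np.ones_like(a)
    for _ in range(iters):        # alternate scaling to match marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

# Toy example: redistribute mass between five points on a line.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2        # squared-distance ground cost
a = np.full(5, 0.2)                        # source histogram (uniform)
b = np.array([0.4, 0.3, 0.1, 0.1, 0.1])   # target histogram
plan = sinkhorn(C, a, b)
```

Because the plan's entries are weighted by transport distance, mass moves to nearby targets first; in a sampling-based MPC setting, an objective of this form reweights trajectory samples in a geometry-aware way instead of averaging across distinct modes.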

Optimal Transport Stochastic Control Diffusion Models Generative AI