We build robots that feel, and the tools for them to work with you.

A physical AI company training tactile foundation models for the real world.

Indiana / SF

Thesis

Vision told machines what the world looks like.

Touch tells them what the world is.

Now deployed on a partner site.

How we build

Four pieces. One loop.

  1. Tactile foundation model.

    A sensorimotor model trained on vision, touch, proprioception, and language together. Touch is a first-class input, not an afterthought. The model learns to anticipate contact, not just react to it.

  2. Neuroclassical control.

    A classical control layer sits under the learned model, enforcing force and compliance limits in hard real time. If the big brain is wrong, the low-level controller still catches it. Safety and reliability are not post-hoc filters. They are the substrate.

  3. Purpose-built hardware.

    Sensors, grippers, and actuators designed around the model they serve. We build the physical stack so the data we collect and the forces we command are things our own electronics can actually measure and trust.

  4. Software orchestration.

    One runtime that sequences skills, coordinates across a fleet, and streams data back to the training loop. A single robot learning in the field becomes every robot's experience by the next release.
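The split between the learned model and the classical safety layer can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea, not the actual control stack: the names (`SafetyLimits`, `safe_command`) and the limit values are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SafetyLimits:
    max_force_n: float       # hard force ceiling, newtons
    max_velocity_mps: float  # hard velocity ceiling, m/s

def safe_command(model_force_n: float, model_velocity_mps: float,
                 limits: SafetyLimits) -> tuple[float, float]:
    """Clamp the learned policy's output to hard limits.

    The learned model may propose anything; the classical layer
    guarantees the commanded force and velocity never exceed them.
    """
    force = max(-limits.max_force_n, min(model_force_n, limits.max_force_n))
    vel = max(-limits.max_velocity_mps,
              min(model_velocity_mps, limits.max_velocity_mps))
    return force, vel

# A wildly wrong model output is still caught by the low-level layer.
limits = SafetyLimits(max_force_n=15.0, max_velocity_mps=0.5)
print(safe_command(120.0, -3.0, limits))  # → (15.0, -0.5)
```

The point of the sketch: the clamp runs regardless of what the model says, which is what makes safety the substrate rather than a post-hoc filter.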

Get in touch

We're building in Indiana and SF. Reach out.

We're hiring, quietly. Research & robotics only.