I'm a PhD researcher at Brown University advised by Professor Nora Ayanian. My work is on robotic cloth manipulation: building the hardware, perception, and learning systems needed to handle fabric reliably on real robots.
🧪 Policy Training – ACT for Cloth Folding
Training visuomotor fold policies with ACT (Action Chunking with Transformers): 267 demos, 15+ models trained, 40–60% deployment success. Currently comparing image encoders (ResNet vs DINOv2) and pretrained action models (OpenVLA, Octo, pi0). A sketch of the chunked-inference loop follows the clips below.
multi-fold.mp4
single-fold.mp4
Multi-fold with cloth resets · Single-fold baseline (ResNet18, no pretrained action model)
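To give a feel for the deployment loop, here is a minimal sketch of ACT-style chunked inference with temporal ensembling. It assumes a `policy` callable that maps an observation to a `(K, action_dim)` chunk; the interface and hyperparameters are illustrative, not the exact setup above.

```python
import numpy as np

# Sketch of ACT chunked inference with temporal ensembling (assumed
# interface): each forward pass predicts K future actions, and all
# overlapping predictions for the current timestep are blended with
# exponential weights, oldest prediction weighted highest (as in the
# ACT paper).

K = 100        # actions per predicted chunk (illustrative)
M = 0.01       # ensembling temperature (illustrative)
pending = {}   # timestep -> predictions made for it, oldest first

def step(policy, obs, t):
    chunk = policy(obs)                       # (K, action_dim) ndarray
    for i in range(K):
        pending.setdefault(t + i, []).append(chunk[i])
    preds = np.stack(pending.pop(t))          # every prediction for time t
    w = np.exp(-M * np.arange(len(preds)))    # w[0] = oldest = largest
    return (w[:, None] * preds).sum(axis=0) / w.sum()
```

Averaging over overlapping chunks is what ACT uses to smooth out action discontinuities at chunk boundaries during rollout.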
🧠 Learning Cloth Dynamics – From Paper to Real Fabric
I took PhysTwin and PGND, got them running on cloth data I collected myself, then extended PGND with a differentiable render loss (DINOv2 + SSIM) and live camera conditioning at rollout time. Three model variants, each adding a new supervisory signal on top of the last; a sketch of the render loss follows the clips below.
pgnd-comparison-all.mp4
Baseline vs. Visual PGND – all held-out episodes
cloth-dynamics-fold-l-over-r-v2.mp4
sew-unit-dual-pull-apart.mp4
PhysTwin novel actions: fold left-over-right · bimanual pull-apart, simulated with parameters learned from a single training trajectory
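As a rough illustration of the render-loss idea (not the repo's actual code): a frozen DINOv2 feature term plus single-scale SSIM over rendered and observed frames might look like the sketch below. The loss weights and preprocessing choices are my assumptions.

```python
import torch
import torch.nn.functional as F

# Frozen DINOv2 backbone for the perceptual term; gradients still flow
# through the *rendered* image back into the dynamics model upstream.
dino = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()
for p in dino.parameters():
    p.requires_grad_(False)

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def dino_loss(rendered, observed):
    # inputs (B, 3, H, W) in [0, 1], H and W assumed multiples of 14;
    # cosine distance between DINOv2 patch tokens
    f_r = dino.forward_features((rendered - IMAGENET_MEAN) / IMAGENET_STD)["x_norm_patchtokens"]
    f_o = dino.forward_features((observed - IMAGENET_MEAN) / IMAGENET_STD)["x_norm_patchtokens"]
    return (1 - F.cosine_similarity(f_r, f_o, dim=-1)).mean()

def ssim_loss(rendered, observed, C1=0.01**2, C2=0.03**2):
    # single-scale SSIM with an 11x11 average-pooling window
    mu_r = F.avg_pool2d(rendered, 11, 1, 5)
    mu_o = F.avg_pool2d(observed, 11, 1, 5)
    var_r = F.avg_pool2d(rendered**2, 11, 1, 5) - mu_r**2
    var_o = F.avg_pool2d(observed**2, 11, 1, 5) - mu_o**2
    cov = F.avg_pool2d(rendered * observed, 11, 1, 5) - mu_r * mu_o
    ssim = ((2 * mu_r * mu_o + C1) * (2 * cov + C2)) / (
        (mu_r**2 + mu_o**2 + C1) * (var_r + var_o + C2))
    return 1 - ssim.mean()

def render_loss(rendered, observed, w_dino=1.0, w_ssim=0.5):
    # loss weights are placeholder values, not tuned numbers from the project
    return w_dino * dino_loss(rendered, observed) + w_ssim * ssim_loss(rendered, observed)
```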
Two custom end-effectors: silicone FSR grippers with embedded force sensors for contact-aware grasping, and a UMI-inspired handheld teleop gripper with ArUco markers and an IMU for imitation-learning data collection.
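A toy sketch of how the FSR grippers could gate a grasp on contact: close until the force reading crosses a threshold, then hold. The serial protocol, thresholds, and `gripper` driver API below are hypothetical stand-ins, not the real firmware.

```python
import time
import serial  # pyserial

CONTACT_THRESH = 0.35  # normalized FSR reading treated as contact (assumed)
CLOSE_STEP = 0.02      # close increment per control tick (assumed units)

def read_fsr(port: serial.Serial) -> float:
    # assumes the firmware streams one normalized reading per line
    return float(port.readline().decode().strip())

def close_until_contact(gripper, port, timeout=3.0) -> bool:
    """Close the gripper until the embedded FSR reports contact."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_fsr(port) > CONTACT_THRESH:
            return True               # contact: stop squeezing, hold fabric
        gripper.close_by(CLOSE_STEP)  # hypothetical driver method
        time.sleep(0.02)
    return False                      # timed out: likely a missed grasp
```

The point of stopping on contact rather than at a fixed width is to tolerate varying fabric thickness without crushing or missing the cloth.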
🦾 Sew Unit – Bimanual Cloth Manipulation Platform
A bimanual robot platform I designed and built from scratch: custom aluminum extrusion frame, two inverted SO-101 arms, ROS2/MoveIt motion planning, and leader-follower teleoperation for data collection.
sew-unit-cad-spin.mp4
CAD model spin
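The core of the leader-follower teleoperation is simple to sketch in ROS 2; the topic names and 1:1 joint mapping below are assumptions for illustration, not the platform's actual interfaces.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState
from std_msgs.msg import Float64MultiArray

class LeaderFollower(Node):
    """Mirror the leader arm's joint states onto the follower's command topic."""

    def __init__(self):
        super().__init__('leader_follower_teleop')
        # topic names are illustrative assumptions
        self.pub = self.create_publisher(
            Float64MultiArray, '/follower/position_commands', 10)
        self.create_subscription(
            JointState, '/leader/joint_states', self.mirror, 10)

    def mirror(self, msg: JointState):
        cmd = Float64MultiArray()
        cmd.data = list(msg.position)  # assumes identical joint order and limits
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(LeaderFollower())

if __name__ == '__main__':
    main()
```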
TabithaKO/PhysTwin – SO-101 cloth data pipeline, multi-camera perception, trajectory generation
TabithaKO/pgnd – visual PGND: mesh-constrained Gaussian Splatting + DINOv2 camera conditioning
🌐 tabithako.github.io · 👗 Fashion

