Our synthetic training data is a drop-in replacement for labeled data used to train computer vision systems.
We use video game engines to produce perfectly annotated training datasets for object detection, segmentation, and 6D pose estimation models in common formats like MS COCO. We simulate both RGB cameras and RGB-D sensors.
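For readers unfamiliar with the MS COCO format mentioned above, a detection annotation file pairs an `images` list with `annotations` and `categories` lists. The field names below follow the public COCO convention; the file name, category, and box values are purely illustrative:

```python
import json

# Minimal COCO-style detection annotation (illustrative values).
# "bbox" is [x, y, width, height] in pixels, per the MS COCO convention.
coco = {
    "images": [
        {"id": 1, "file_name": "render_000001.png", "width": 1280, "height": 720}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [412.0, 188.0, 96.0, 64.0],
            "area": 96.0 * 64.0,
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "widget", "supercategory": "part"}],
}

print(json.dumps(coco, indent=2))
```

Because a renderer knows the exact position of every object, these fields can be emitted directly at render time rather than hand-labeled.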
Setting up hardware and sensors to collect data, training a team of labelers, and pruning labeling errors add months of R&D and weigh heavily on project budgets.
Once configured, our generator can produce 100,000+ perfectly labeled training images within hours, and scales in the cloud.
Instead of generating one dataset that “looks right”, we generate many competing versions and benchmark them against client data to learn what makes the best dataset for the problem.
Each SBX dataset is the product of iterative testing and optimization to achieve the best performance on real-world data.
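The selection loop described above can be sketched as follows. All names here (`select_best_dataset`, `train_and_eval`, the config keys) are hypothetical illustrations of the approach, not SBX's actual pipeline: generate several candidate dataset configurations, train on each, and keep the one that scores best on held-out real client data.

```python
def select_best_dataset(candidates, train_and_eval):
    """Pick the candidate dataset config with the best real-world score.

    candidates    -- iterable of dataset-generation configs
    train_and_eval -- callable mapping a config to a validation metric
                      (e.g. mAP on real client images); higher is better
    """
    best_cfg, best_score = None, float("-inf")
    for cfg in candidates:
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy usage with a stand-in scoring function.
configs = [{"lighting": "studio"}, {"lighting": "random"}, {"lighting": "hdri"}]
scores = {"studio": 0.61, "random": 0.72, "hdri": 0.68}
best, score = select_best_dataset(configs, lambda c: scores[c["lighting"]])
```

In practice the candidate space (lighting, textures, camera poses, object distributions) is large, which is why the search is run as an iterative benchmark rather than a single guess.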
Led applied research projects at UberATG, Kindred AI, and SigOpt focusing on computer vision, robotics, and optimization for machine learning.
Built the technology and team behind the Wish.com merchant marketplace, growing it to 100k+ active merchants and a team of 45 engineers, PMs, and designers.