AI_002 AI-Assisted Prototyping Personal · 2026

Shoe-Track
WebAR

A concept from my 2012 Adidas Design Academy application, finally turned into a working proof of concept. Train a model to recognize your shoes, overlay animations on them in real time. All in the browser, no app required. It took twelve years, but the technology finally caught up.

Claude Code Blender YOLOv8 Kaggle TF.js Three.js WebGL WebRTC

2012. Adidas Design
Academy. Semi-final.

The brief: reimagine an Adidas running, soccer, basketball or women's gym shoe. "Show us what generation next would look like."

My answer: Adidas Style Boost. A shoe with a special pattern that your phone could recognize. Once recognized, it would display the shoe's signature animation on screen. Share the photos and videos on social media. Boost your style. Boost your image. In my mind this was the next big thing: adding filters over the real world and making fun photos and videos to share on social media.

The idea never fully left. Over the years I tried to build a proof of concept, but the required technology was always too fragmented or too complex to pull off as a side project. Until now: everything needed has matured to the point of easy use, and I no longer need a dedicated developer for help. Together with my buddy Claude Code, I set out to make my idea reality.

2012 — Adidas Design Academy application visuals

A lot of output in
a small amount of time.

Picking the approach

I discussed several approaches with Claude and decided to train a Machine Learning model to recognize what a specific pair of shoes looks like. Being able to reliably recognize the shoe would open up a range of possibilities for adding animations on top.

Selecting the shoe

I selected my shoes of choice by looking down. My own Adidas Ultraboost are very distinctive (that chunky midsole, the Primeknit upper, the bold blue and red colors), making them perfect for a first proof of concept.

Building a usable dataset as fast as possible

I needed training data. Rather than spending hours photographing the shoe in every conceivable environment, I made a 3D model of my shoes using 3DAI Studio and collected ground plane textures and HDRIs to use as varied, realistic backgrounds. A detailed 3D model from just four photos: one of the things that simply wasn't possible in 2012.

3D model created with 3DAI Studio — drag to rotate · pinch to zoom

Automating the renders

Claude wrote a Blender script that automated producing large batches of photorealistic renders: the shoe in various background setups, lighting conditions, and viewpoints. After some tinkering with the script to get the results looking right, I ended up with over 2,000 renders in different variations: one shoe, two shoes, a shoe mid walking cycle, or multiple shoes in one frame.
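Because the render script knows exactly where it placed the shoe, each synthetic image can get its YOLO-format label for free, without manual annotation. A minimal sketch of the pixel-to-YOLO conversion (the function name and box format are my assumptions, not taken from the actual script):

```python
def to_yolo_label(box_px, img_w, img_h, class_id=0):
    """Convert a pixel-space box (xmin, ymin, xmax, ymax) into one
    YOLO label line: 'class x_center y_center width height',
    with all coordinates normalized to [0, 1]."""
    xmin, ymin, xmax, ymax = box_px
    xc = (xmin + xmax) / 2 / img_w
    yc = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

One such line per detected object goes into a `.txt` file next to each render; this is the standard label layout YOLOv8 trains on.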

Blender render — studio background Blender render — concrete floor Blender render — outdoor HDRI Blender render — side angle

Examples from the 2,000+ synthetic renders
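Before any training run, the renders have to be divided into training and validation sets. The post doesn't show how this was done; a minimal deterministic sketch (the 80/20 ratio and seed are my assumptions):

```python
import random

def split_dataset(stems, val_fraction=0.2, seed=42):
    """Deterministically shuffle image stems and split them into
    train and validation lists. Sorting first makes the split
    reproducible regardless of filesystem ordering."""
    stems = sorted(stems)
    random.Random(seed).shuffle(stems)
    n_val = max(1, int(len(stems) * val_fraction))
    return stems[n_val:], stems[:n_val]
```

Keeping the split seeded means every retraining iteration validates against the same held-out renders, so metric changes reflect the model, not the data shuffle.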

Training the first model

Using Kaggle and a training script written by Claude, I trained my first YOLOv8 model on the synthetic dataset. Kaggle offers around 30 hours of free GPU time a week, which gave me the opportunity to train a model fairly quickly. Meanwhile, I asked Claude to set up a proper code project: Git repository, testing pipeline, folder structure, and an application side ready to receive the model.
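For reference, YOLOv8 training is driven by a small dataset YAML that points at the images and names the classes. A sketch of what this project's config might look like (paths are hypothetical, and the single `shoe` class reflects the simplification made later in the project):

```yaml
# YOLOv8 dataset config (paths hypothetical)
path: /kaggle/working/shoe-dataset
train: images/train
val: images/val
names:
  0: shoe
```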

First test: kind of worked

Claude wrote the application and I supplied the trained model. The shoe got detected, but Claude's automated tests performed significantly better than the actual test on my mobile phone. Worse, a lot of things that were not shoes got detected as shoes.

Claude's automated test result — shoe detection working
Mobile phone test — shoe detection unreliable

The gap between automated test and real-world mobile test

Retraining with real data

I decided to retrain the model. I added 100 real photographs of my shoes in various real-world environments, plus a set of hard negatives (images of things that are not the shoe), to sharpen the model's discrimination. A few training iterations later, results improved meaningfully. I also upped the input resolution from 320 × 320 to 640 × 640 for better recognition when a shoe is further away, and simplified the task to a single "shoe" class instead of distinguishing left from right.

Real training photo — shoe indoors Real training photo — shoe outdoors Hard negative training example — not a shoe

Real training photographs

The phone orientation breakthrough

I discovered that phone orientation mattered for this prototype: held in portrait, recognition confidence matched Claude's automated tests. Now I had a model that could detect my shoes with high confidence.

Detection working on mobile after orientation fix — bounding box correctly placed on shoe

In portrait, detection on mobile matched automated test quality

Adding animations

With detection working reliably, I added various animations on top of the detected shoe. The 2012 concept, finally a working demo. Twelve years and a bit of AI assistance later.

In the app I included five kinds of animations: the Adidas logo bubbling up from the shoe, a 3D-model overlay of another shoe, graying out everything except your shoes, a light trail, and a star trail. See the various options in action below.
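Anchoring an animation to a detected box takes a little care, because raw per-frame detections jitter. A common trick for steadying the overlay is exponential smoothing of the box between frames; this is my addition, not something the post describes, and it's sketched in Python for clarity even though the app itself runs in JavaScript:

```python
def smooth_box(prev, new, alpha=0.6):
    """Blend the previous and newest detection boxes (x, y, w, h)
    so an overlay anchored to the box doesn't jitter frame to frame.
    alpha is the weight given to the newest detection; lower values
    are steadier but lag more behind fast movement."""
    if prev is None:
        return new  # first frame: nothing to blend yet
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))
```

The trade-off in `alpha` is responsiveness versus stability, which matters here since the shoe moves constantly in a walking cycle.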

Overlay 3D model of another shoe

The 3D model overlaid on the detected shoe — drag to rotate · pinch to zoom

Try the animations yourself

I used these to tweak the animations and settle on the settings for the app.

LIGHT TRAILS
STAR TRAILS

Both animations run live in the browser — no install, no plugin. Same stack that runs in the AR app.

Tracking and animation in motion.

Different animation styles.

Who did what.

This project spans Blender scripting, Machine Learning, browser integration, and WebAR: disciplines that would have taken weeks to learn independently. I drove every decision about what to build and how to validate it. Claude handled the implementation that made it possible to do this in days rather than weeks.

Stefan decided

  • The overall approach: Machine Learning model for shoe recognition
  • Choosing the Ultraboost as the target shoe
  • Synthetic vs. real data strategy
  • When to retrain and what data to add
  • Debugging and interpreting real-life test results on the phone
  • Which animations to add and how they should feel

AI accelerated

  • Blender Python script for batch renders
  • Kaggle training script for YOLOv8
  • Project setup, Git structure, and test pipeline
  • Browser application with TF.js integration
  • WebGL/Three.js overlay and animation logic
  • Automated testing between training iterations