Computer Vision · Machine Learning

Football Match
Analysis AI

Upload any football video and get AI-powered insights — player tracking, team detection, speed & distance in metres, and ball possession analysis.

Try it now
YOLO
OpenCV
Teams
FastAPI
Stats
Track
Processing Pipeline

Six Computer Vision modules in a sequential pipeline

Each frame passes through detection, tracking, and analysis to produce per-player statistics and an annotated video.

STEP 01
Object Detection
YOLOv8 detects players, referees, and the ball in every frame with bounding boxes. A fine-tuned model improves ball-detection accuracy.
YOLOv8 · ultralytics
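In code, the per-frame post-processing of this step looks roughly like the sketch below. It assumes detections arrive as `(x1, y1, x2, y2, confidence, class_id)` tuples; the class-ID mapping and confidence threshold are illustrative assumptions, not the project's actual values.

```python
# Group raw YOLO detections by class, dropping low-confidence boxes.
# Class IDs and the 0.3 threshold are illustrative assumptions.
CLASS_NAMES = {0: "player", 1: "referee", 2: "ball"}

def split_detections(detections, conf_threshold=0.3):
    """Group per-frame detections by class, dropping low-confidence boxes."""
    grouped = {"player": [], "referee": [], "ball": []}
    for x1, y1, x2, y2, conf, cls in detections:
        name = CLASS_NAMES.get(cls)
        if name is None or conf < conf_threshold:
            continue
        grouped[name].append((x1, y1, x2, y2, conf))
    return grouped
```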
STEP 02
Multi-Object Tracking
ByteTrack assigns persistent IDs across frames so each player keeps the same number even when they overlap or leave the frame.
ByteTrack · supervision
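The real pipeline uses ByteTrack via the supervision library; the minimal greedy IoU matcher below only illustrates the core idea of persistent IDs, and is not the project's implementation.

```python
# Simplified persistent-ID tracking via greedy IoU matching.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class GreedyTracker:
    def __init__(self, iou_threshold=0.3):
        self.tracks = {}          # track_id -> last seen box
        self.next_id = 1
        self.iou_threshold = iou_threshold

    def update(self, boxes):
        """Assign each box the ID of the best-overlapping previous track."""
        assigned = {}
        free = dict(self.tracks)
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in free.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id
                self.next_id += 1
            else:
                free.pop(best_id)
            assigned[best_id] = box
        self.tracks = assigned
        return assigned
```

A player who moves slightly between frames keeps the same ID because their new box still overlaps the old one.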
STEP 03
Camera Compensation
Lucas-Kanade optical flow tracks static pitch features to measure camera panning, which is subtracted from player positions to recover true movement.
Optical Flow · OpenCV
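Conceptually, the compensation step looks like the sketch below. In the pipeline, Lucas-Kanade optical flow (OpenCV's `cv2.calcOpticalFlowPyrLK`) produces the per-feature displacements; here we assume those are given and take their median as the camera pan.

```python
# Estimate camera panning from static-feature displacements and
# remove it from player positions. Taking the median makes the
# estimate robust to a few mistracked features.
def estimate_camera_shift(feature_displacements):
    """Median (dx, dy) of static-feature displacements between two frames."""
    xs = sorted(d[0] for d in feature_displacements)
    ys = sorted(d[1] for d in feature_displacements)
    mid = len(xs) // 2
    return xs[mid], ys[mid]

def compensate(positions, shift):
    """Remove camera panning so positions reflect true pitch movement."""
    dx, dy = shift
    return [(x - dx, y - dy) for x, y in positions]
```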
STEP 04
Perspective Transform
A homography matrix maps pixel coordinates to real-world metres on the pitch, computed from four known landmarks.
Homography · NumPy
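A minimal NumPy sketch of this step: solve for the 3×3 matrix H from four pixel-to-metres point pairs, then apply it to any pixel position. The landmark coordinates in the usage example are illustrative, not the project's calibration.

```python
import numpy as np

def fit_homography(src, dst):
    """Solve for H with h33 = 1 from four (x, y) -> (X, Y) point pairs."""
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y])
        b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y])
        b.append(Y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def pixel_to_metres(H, point):
    """Map a pixel coordinate to pitch metres via the homography."""
    v = H @ np.array([point[0], point[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```

For example, mapping the four pitch corners seen in a wide shot to a 105 m × 68 m pitch lets every player position be expressed in metres.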
STEP 05
Team Detection
K-Means clustering on shirt-pixel colors automatically assigns players to two teams with zero manual input.
K-Means · scikit-learn
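The project uses scikit-learn's KMeans; the dependency-free sketch below (Lloyd's algorithm with k = 2) just shows how mean shirt colors separate into two team clusters. The RGB values in the example are illustrative red-vs-white kits.

```python
# Minimal two-cluster K-Means on mean shirt colors (Lloyd's algorithm).
def kmeans_two_teams(colors, iters=10):
    """Cluster (r, g, b) shirt colors into two teams; returns labels."""
    centers = [colors[0], colors[-1]]          # naive initialization
    labels = [0] * len(colors)
    for _ in range(iters):
        # Assignment step: nearest center by squared distance.
        labels = [
            min((0, 1), key=lambda k: sum((c - m) ** 2
                                          for c, m in zip(col, centers[k])))
            for col in colors
        ]
        # Update step: mean of each cluster's members.
        for k in (0, 1):
            members = [col for col, l in zip(colors, labels) if l == k]
            if members:
                centers[k] = tuple(sum(ch) / len(members)
                                   for ch in zip(*members))
    return labels
```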
STEP 06
Speed & Possession
Frame-to-frame displacement → speed (km/h) and cumulative distance (m). Ball proximity determines possession per team.
Statistics · NumPy
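The statistics step reduces to two small computations, sketched below: displacement in metres per frame converts to km/h, and the team of the player nearest the ball gets possession for that frame. The 24 fps frame rate is an assumption, not the project's setting.

```python
FPS = 24  # assumed frame rate for illustration

def speed_kmh(prev_pos, curr_pos, frames=1):
    """Speed in km/h from two pitch positions (metres) `frames` apart."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    metres = (dx * dx + dy * dy) ** 0.5
    return metres / (frames / FPS) * 3.6   # m/s -> km/h

def possession(ball_pos, players):
    """Team of the player closest to the ball.

    `players` maps player_id -> (team, (x, y)) in pitch metres.
    """
    closest = min(
        players.items(),
        key=lambda kv: (kv[1][1][0] - ball_pos[0]) ** 2
                       + (kv[1][1][1] - ball_pos[1]) ** 2,
    )
    return closest[1][0]
```

Summing per-frame displacements over all frames gives each player's cumulative distance, and counting possession frames per team gives the possession split.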
Technical Deep-Dive

How it works

From raw video to annotated output — here's what happens behind the scenes when you upload a match clip.

1. Video Ingestion
Your clip is uploaded to the FastAPI backend, decoded frame-by-frame with OpenCV, and queued for processing. The SSE stream opens immediately so you see real-time updates for every step.
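Server-Sent Events are just text frames of the form `data: ...\n\n`, which the backend can serve via FastAPI's `StreamingResponse`. The sketch below shows the framing only; the step names and payload fields are illustrative assumptions, not the project's actual event schema.

```python
import json

def sse_event(step, progress):
    """Format one pipeline progress update as an SSE frame."""
    payload = json.dumps({"step": step, "progress": progress})
    return f"data: {payload}\n\n"

def progress_stream(steps):
    """Generator yielding one SSE frame per completed pipeline step."""
    for i, step in enumerate(steps, start=1):
        yield sse_event(step, round(i / len(steps), 2))
```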
2. YOLO Detection
YOLOv8x runs on every frame, producing bounding boxes for players, referees, and the ball. A fine-tuned model (trained on football data) is used specifically to improve small-ball detection in wide shots.
3. Tracking + Team ID
ByteTrack links detections across frames (persistent IDs). Then K-Means clusters shirt colors to auto-assign teams — no manual input needed. Camera panning is estimated and subtracted so movement data reflects real pitch displacement.
4. Metrics & Annotation
Pixel positions are mapped to real-world metres via a homography transform. Speed, distance, and ball possession are computed per-player per-frame. The final annotated video has bounding boxes, IDs, speed overlays, and a possession HUD burned in.
Built With

Technology Stack

What You Get

Analysis Output

After processing, you receive two things: an annotated video and a statistics JSON.

Annotated Video
Bounding boxes with player IDs, team colors, speed overlays (km/h), and a real-time ball possession HUD burned into every frame.
Ball Possession %
The possession split between the two teams, based on which player is closest to the ball in each analyzed frame.
Per-Player Stats
Max speed (km/h) and total distance covered (metres) for every tracked player, grouped by auto-detected team color.
See It In Action

Before & After

Drag the slider to compare the raw input footage with the AI-annotated output.

Original
AI Output
Try It Yourself

Analyse your video

Upload a football match clip. Each frame passes through detection, tracking, and analysis to produce per-player statistics and an annotated video.

Drag & drop your video here

or click to browse

Ball Possession
Team 1 — 0% Team 2 — 0%
Player Statistics
Player ID Team Max Speed (km/h) Distance (m)
Upload a video to see results