# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Running the Application

Start the Gradio web interface:

```bash
python app.py
```

The app launches on `http://localhost:7860` by default (port 7860, bound to 0.0.0.0).

## Architecture Overview

This is a Gradio-based web application for tennis ball tracking using computer vision. The processing pipeline has three main stages:

1. **Detection** ([detector.py](detector.py)): YOLOv8 detects tennis balls (COCO class 32, "sports ball") in each frame
2. **Tracking** ([tracker.py](tracker.py)): a Kalman filter smooths trajectories and predicts positions during occlusion
3. **Visualization** ([utils/visualization.py](utils/visualization.py)): overlays trajectory trails, bounding boxes, and speed labels, and generates plots

### Key Design Patterns

**Processing Flow** ([app.py](app.py:30-188)):
- `process_video()` orchestrates the full pipeline
- Uses context managers (`VideoReader`, `VideoWriter`) for safe I/O
- Frame-by-frame processing: detect → update tracker → render overlays → write frame
- Trajectory data accumulates in the tracker and is exported to CSV at the end

**State Management** ([tracker.py](tracker.py:13-211)):
- `BallTracker` maintains the Kalman filter state vector `[x, y, vx, vy]`
- Initializes on the first detection
- Predicts the ball position when detection is lost (up to `max_missing_frames`)
- Resets the tracker if the ball has been missing too long

**Detection Selection** ([app.py](app.py:104-115)):
- When multiple detections occur, the highest-confidence detection is used
- A minimum box-size filter (5×5 pixels) is applied in [detector.py](detector.py:107)

## Module Dependencies

```
app.py (main entry point)
├── detector.py (BallDetector class)
├── tracker.py (BallTracker class)
└── utils/
    ├── io_utils.py (VideoReader, VideoWriter, CSV export)
    └── visualization.py (drawing functions, plotting)
```

All utilities are imported via `utils/__init__.py`, which re-exports from the submodules.
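The detection-selection rule described above (drop boxes smaller than 5×5 pixels, then keep the highest-confidence detection) can be sketched as a small standalone function. The tuple layout and the function name here are assumptions for illustration, not the repo's actual API:

```python
MIN_BOX = 5  # minimum box side length in pixels, as documented

def select_detection(detections):
    """Pick one detection from a frame.

    detections: list of (x1, y1, x2, y2, confidence) tuples
    (a hypothetical layout; detector.py may use a different shape).
    Returns the highest-confidence detection whose box is at least
    MIN_BOX x MIN_BOX pixels, or None if nothing qualifies.
    """
    candidates = [
        d for d in detections
        if (d[2] - d[0]) >= MIN_BOX and (d[3] - d[1]) >= MIN_BOX
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda d: d[4])
```

Returning `None` when no detection survives the filter is what lets the tracker fall back to prediction-only mode for that frame.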
## Configuration Parameters

**Detector** ([detector.py](detector.py:24-49)):
- `model_name`: `'yolov8n'` (fastest), `'yolov8s'`, `'yolov8m'` (most accurate)
- `confidence_threshold`: 0.1-0.9 (lower = more sensitive)
- `device`: auto-detected (`'cuda'` if available, else `'cpu'`)

**Tracker** ([tracker.py](tracker.py:28-47)):
- `dt`: time step, calculated as `1.0 / fps`
- `max_missing_frames`: typically `int(fps * 0.5)` (half-second tolerance)
- `process_noise`: 0.1 (Kalman filter Q matrix)
- `measurement_noise`: 10.0 (Kalman filter R matrix)

## Output Files

All outputs are saved to the `output/` directory:
- `tracked_video.mp4`: video with overlays (bounding boxes, trails, speed labels, info panel)
- `trajectory.csv`: frame-by-frame data with columns: frame, timestamp, x/y position, velocity, speed
- `trajectory_plot.png`: 2D plot with a color-coded speed gradient (blue = slow, red = fast)

## Speed Estimation

Speed is calculated from the Kalman filter velocity: `speed = sqrt(vx² + vy²) / dt`

**Units**: Speed is in pixels/second. The "km/h" label uses a rough approximation (`speed * 0.01`); a real-world conversion requires camera calibration.

## Deployment Notes

**Hugging Face Spaces**:
- Use the YOLOv8n model on the free tier (limited GPU)
- The app automatically creates the `output/` directory on startup
- The Gradio interface is configured with `share=False` by default

**Model Downloads**:
- YOLOv8 weights are downloaded automatically by Ultralytics on first run
- Models are cached in the `~/.cache/torch/hub/` directory
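The tracker parameters above (`dt`, `process_noise`, `measurement_noise`, state `[x, y, vx, vy]`) describe a standard constant-velocity Kalman model. A minimal sketch of what those matrices look like, assuming this model; the matrix names and the `predict` helper are illustrative, not the actual tracker.py code:

```python
import numpy as np

fps = 30.0
dt = 1.0 / fps                        # time step, as documented

# State vector: [x, y, vx, vy] (constant-velocity model)
F = np.array([[1.0, 0.0, dt,  0.0],   # x += vx * dt
              [0.0, 1.0, 0.0, dt ],   # y += vy * dt
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0, 0.0],   # only (x, y) is measured
              [0.0, 1.0, 0.0, 0.0]])
Q = np.eye(4) * 0.1                   # process_noise
R = np.eye(2) * 10.0                  # measurement_noise

def predict(x, P):
    """One Kalman predict step: propagate state and covariance."""
    return F @ x, F @ P @ F.T + Q
```

During occlusion the tracker can repeat this predict step (without a measurement update) for up to `max_missing_frames` frames before resetting.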
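The speed formula and the rough km/h label can be mirrored directly from the documentation. This sketch states the formula exactly as documented; the function names are hypothetical, and the correct interpretation of `vx`/`vy` (per-frame vs. per-second) is determined by tracker.py, not by this sketch:

```python
import math

def speed_from_velocity(vx, vy, dt):
    # Documented formula: speed = sqrt(vx^2 + vy^2) / dt
    return math.hypot(vx, vy) / dt

def rough_kmh_label(speed_px_per_s):
    # Documented rough approximation (speed * 0.01) used for the
    # on-screen "km/h" label; a real conversion would require
    # camera calibration.
    return speed_px_per_s * 0.01
```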