
Commit 2b95829 (parent 708c925)

Commit message: drive link

Signed-off-by: Pranav Doma <pranavreddy2327@gmail.com>

2 files changed: 446 additions & 392 deletions overall (this file: 20 additions & 51 deletions)
@@ -1,63 +1,32 @@
-## AutoDrive end-to-end longitudinal + lateral cues
+## AutoDrive 2.0 - temporal end-to-end distance and curvature estimation

-AutoDrive is a compact end-to-end network for **driver-assistance style** outputs from a front camera: it predicts **distance to the closest in-path object (CIPO)**, **road curvature** (related to steering demand), and a **binary CIPO presence** signal. It is designed to sit alongside classical pipelines (for example AutoSpeed + homography-based distance) and to consume the same style of **wide front view** inputs after a fixed geometric crop.
+For autonomous cruise and driver-assistance applications, it is important to estimate both the road-following demand and the lead-object distance from camera input. AutoDrive is a compact temporal model that predicts three outputs from consecutive front-view frames: **distance-to-CIPO**, **road curvature**, and **CIPO presence probability**.

-The network shares its backbone architecture with the **AutoSpeed** detector (same width/depth variant) and adds a small **temporal head** that fuses **previous** and **current** frame features. Training typically **warm-starts the backbone** from a trained AutoSpeed checkpoint, then learns the head (and optionally the full stack) on labelled sequences.
+AutoDrive processes a pair of frames (`t-1`, `t`) and uses a shared-feature temporal fusion head to capture short-term motion cues that a single-frame model can miss. It is designed for a 2:1 input aspect ratio and is typically used with the same center-crop preprocessing used during training.
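As a minimal sketch of the two-frame idea described above: the previous and current frames can be stacked along the channel axis before being fed to a temporal model. The array shapes and the channel-stacking choice here are illustrative assumptions, not the repository's actual input format.

```python
import numpy as np

def make_temporal_input(frame_prev: np.ndarray, frame_curr: np.ndarray) -> np.ndarray:
    """Stack frames t-1 and t along the channel axis: two (H, W, 3) arrays -> (H, W, 6)."""
    assert frame_prev.shape == frame_curr.shape, "both frames must share one resolution"
    return np.concatenate([frame_prev, frame_curr], axis=-1)

# Example with the 2:1 training resolution (1024 x 512)
prev = np.zeros((512, 1024, 3), dtype=np.float32)
curr = np.ones((512, 1024, 3), dtype=np.float32)
pair = make_temporal_input(prev, curr)
print(pair.shape)  # (512, 1024, 6)
```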

-### What AutoDrive outputs (inference)
+### Demo Video
+
+[Watch the demo video](<ADD_DRIVE_DEMO_LINK_HERE>)

-- **Normalized distance** `d_norm` in `[0, 1]`
-  Mapped to meters as:
-  `d = 150 × (1 - d_norm)`
-  *(capped at 150 m)*
+## Get Started
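The earlier README text documents the distance decoding `d = 150 × (1 - d_norm)` with a 150 m cap; a minimal sketch of that mapping (the function and constant names are illustrative, not taken from the repository):

```python
MAX_DIST_M = 150.0  # distance cap documented in the README

def denormalize_distance(d_norm: float) -> float:
    """Map a normalized distance in [0, 1] back to meters: d = 150 * (1 - d_norm)."""
    d_norm = min(max(d_norm, 0.0), 1.0)  # clamp to the documented output range
    return MAX_DIST_M * (1.0 - d_norm)

print(denormalize_distance(1.0))  # 0.0   -> object at the camera
print(denormalize_distance(0.0))  # 150.0 -> at or beyond the 150 m cap
```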

-- **Curvature** in `1/m`
-  *(Scaled internally during training; see `CURV_SCALE` in `Models/data_utils/load_data_auto_drive.py`)*
+To quickly try AutoDrive on your own data, please follow the steps in the [tutorial](tutorial.ipynb).
+For best results, ensure your inference input follows the training preprocessing pipeline (50-degree center crop and 1024x512 resize).
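A minimal sketch of the resize-and-normalize part of that preprocessing, assuming the ImageNet mean/std normalization mentioned in the earlier README text. The 50-degree center crop depends on the camera's field of view and is not reproduced here; the nearest-neighbour resize is a stand-in for whatever resampler the repository actually uses.

```python
import numpy as np

# ImageNet statistics (standard values; the README states ImageNet mean/std normalization)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def resize_nearest(img: np.ndarray, out_h: int = 512, out_w: int = 1024) -> np.ndarray:
    """Nearest-neighbour resize to the 2:1 training resolution (illustrative only)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols[None, :]]

def normalize(img: np.ndarray) -> np.ndarray:
    """Scale uint8 RGB to [0, 1], then apply ImageNet mean/std."""
    x = img.astype(np.float32) / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD

frame = np.random.randint(0, 256, size=(720, 1440, 3), dtype=np.uint8)  # dummy camera frame
tensor = normalize(resize_nearest(frame))
print(tensor.shape)  # (512, 1024, 3)
```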

-- **CIPO logit**
-  Converted to probability using the sigmoid function
-  A common validation threshold is **0.65** for class 1
+### Performance Results
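The earlier README text describes converting the CIPO logit to a probability with a sigmoid and thresholding at 0.65; that decoding can be sketched as follows (function names are illustrative):

```python
import math

CIPO_THRESHOLD = 0.65  # validation threshold noted in the earlier README text

def cipo_probability(logit: float) -> float:
    """Convert the raw CIPO logit to a probability with the sigmoid function."""
    return 1.0 / (1.0 + math.exp(-logit))

def cipo_detected(logit: float) -> bool:
    """Binary CIPO presence decision at the documented threshold."""
    return cipo_probability(logit) > CIPO_THRESHOLD

print(cipo_detected(2.0))  # True  (sigmoid(2.0) ~ 0.88)
print(cipo_detected(0.0))  # False (sigmoid(0.0) = 0.50)
```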

-Input resolution after the standard ZOD / training preprocessing is **1024 × 512** (2:1), RGB, with **ImageNet** mean/std normalisation.
+AutoDrive is trained as a multi-task regression/classification model on sequence data with labels for curvature, CIPO distance, and CIPO presence.
+Please add your official release metrics here once final evaluation is complete.

-### Demo / explainer
+## Model variants

-A packaged demo video link can be added here when the public release is ready. Until then, use the [tutorial](tutorial.ipynb) and the inference scripts under `Models/inference/`.
+AutoDrive currently uses one primary variant in this repository:

-## Get started
+**AutoDrive 2.0 model weights - 2:1 aspect ratio, 1024px by 512px input image**
+- [Link to Download PyTorch Model Weights *.pt](<https://drive.google.com/drive/u/1/folders/182h_9eBHroMCOfQHJiXVgrNq7zHx7Qws?dmr=1&ec=wgc-drive-hero-goto>)

-Follow the steps in **[tutorial.ipynb](tutorial.ipynb)**. Use the same **2:1** aspect ratio as training (1024 × 512) for best results.
+### Notes

-### Where the code lives
-
-| Topic | Location |
-|--------|-----------|
-| Network definition | `Models/model_components/autodrive/autodrive_network.py` |
-| Data + crop / labels | `Models/data_utils/load_data_auto_drive.py` |
-| Training entry | `Models/training/train_auto_drive.py` |
-| Example inference (video) | `Models/inference/autodrive_curvature_video.py` |
-| Benchmark vs classical | `Models/inference/classical_vs_autodrive_benchmark.py` |
-| Side-by-side comparison video | `Models/inference/comparison_video.py` |
-
-### Performance and datasets
-
-Training and evaluation are tied to your **ZOD** (or compatible) label layout: curvature, `distance_to_in_path_object`, and `cipo_detected` per frame. Report numbers in your release notes when you freeze a checkpoint; this README intentionally stays version-agnostic.
-
-## Model weights (release placeholder)
-
-**Planned public artifact:** `AutoDrive.pt` (or `AutoDrive.pth`), single-file weights aligned with the release tag.
-
-Until the official link is published:
-
-- Use checkpoints produced by this repo, for example
-  `{zod_root}/training/autodrive/run002/checkpoints/AutoDrive_best.pth`
-- The training script saves a dict with a `"model"` key containing `state_dict`-compatible weights.
-
-When you publish the final file, update this section with:
-
-- **PyTorch**: link to `AutoDrive.pt`
-- **Optional ONNX / TensorRT**: add links here if you export them
-
-## Model variant
-
-There is a single **AutoDrive** architecture in-tree: **1024 × 512** RGB, **two-frame** input (previous + current), backbone compatible with **AutoSpeed** `.pt` for weight transfer.
+- Training entry point: `Models/training/train_auto_drive.py`
+- Core network: `Models/model_components/autodrive/autodrive_network.py`
+- Data preprocessing and scaling: `Models/data_utils/load_data_auto_drive.py`
+- AutoSpeed backbone warm-start is supported through `--autospeed-ckpt`
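Warm-starting the backbone from an AutoSpeed checkpoint amounts to copying only the backbone-prefixed entries of the saved state dict (the README notes the training script saves a dict with a `"model"` key). A torch-free sketch of that key filtering; the `backbone.` prefix and the example key names are assumptions, not the repository's actual naming:

```python
def select_backbone_weights(state_dict: dict, prefix: str = "backbone.") -> dict:
    """Keep only entries whose key starts with the backbone prefix,
    suitable for loading with strict=False so head weights stay untouched."""
    return {k: v for k, v in state_dict.items() if k.startswith(prefix)}

# Hypothetical AutoSpeed checkpoint contents (key names are illustrative)
ckpt_model = {
    "backbone.conv1.weight": "w0",
    "backbone.layer1.0.weight": "w1",
    "detect_head.cls.weight": "w2",  # detector-specific head, not transferred
}
subset = select_backbone_weights(ckpt_model)
print(sorted(subset))  # ['backbone.conv1.weight', 'backbone.layer1.0.weight']
```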
