| Metric | BADAS 2.0 | COSMOS-BADAS |
|---|---|---|
| Average Precision | 99.4% | 94.0% |
| Early Warning Recall | 91.3% | 48.3% |
| Architecture | V-JEPA2 (Attention) | Autoregressive |
| Smallest Model | 22M params | 2B params (cloud only) |
| Training Data | 2M real-world clips | 2M real-world clips (same) |
| Explainability | Native attention maps | None |
| Group | Metric | BADAS 2.0 | BADAS 2.0 Flash | BADAS 2.0 Flash Lite | BADAS 1.0 | BADAS Open | COSMOS-Reason2 Fine-tuned | Gemini 2.5 Pro Fine-tuned | Qwen3-VL-2B |
|---|---|---|---|---|---|---|---|---|---|
| Animal | AUC | 96.4% | 94.8% | 92.3% | 94.8% | 88.1% | 81.6% | 79.9% | 75.9% |
| | AP | 99.1% | 98.8% | 98.1% | 98.7% | 95.7% | 95.2% | 93.9% | 93.2% |
| Pedestrian | AUC | 99.8% | 99.6% | 99.1% | 99.1% | 84.4% | 93.6% | 77.4% | 67.4% |
| | AP | 99.9% | 99.7% | 99.4% | 99.4% | 87.9% | 95.6% | 77.5% | 75.5% |
| Intersection | AUC | 100.0% | 100.0% | 99.8% | 98.8% | 95.8% | 97.9% | 85.0% | 61.7% |
| | AP | 100.0% | 100.0% | 99.7% | 98.4% | 93.7% | 96.9% | 73.7% | 45.3% |
| Overtaking | AUC | 100.0% | 100.0% | 99.7% | 97.4% | 92.7% | 97.5% | 83.2% | 82.6% |
| | AP | 100.0% | 99.9% | 99.4% | 96.1% | 85.3% | 94.8% | 60.8% | 68.0% |
| Snow | AUC | 100.0% | 100.0% | 99.9% | 99.6% | 93.2% | 97.5% | 95.3% | 80.4% |
| | AP | 100.0% | 100.0% | 99.9% | 99.5% | 94.0% | 97.3% | 93.6% | 77.9% |
| Infrastructure | AUC | 100.0% | 99.4% | 98.4% | 86.9% | 84.0% | 97.4% | 90.8% | 58.4% |
| | AP | 100.0% | 99.6% | 98.7% | 91.5% | 88.0% | 98.1% | 92.2% | 62.5% |
| Motorcyclist | AUC | 99.8% | 99.8% | 100.0% | 99.8% | 96.6% | 98.9% | 80.1% | 72.5% |
| | AP | 99.9% | 99.9% | 100.0% | 99.9% | 96.9% | 99.1% | 76.6% | 76.1% |
| Cyclist | AUC | 100.0% | 98.7% | 99.4% | 98.6% | 93.1% | 94.0% | 82.9% | 64.8% |
| | AP | 100.0% | 99.1% | 99.5% | 98.7% | 94.2% | 95.3% | 81.2% | 59.8% |
| Rain | AUC | 100.0% | 100.0% | 100.0% | 97.5% | 96.6% | 99.6% | 82.2% | 81.5% |
| | AP | 100.0% | 100.0% | 100.0% | 98.0% | 95.9% | 99.4% | 64.5% | 66.2% |
| Fog | AUC | 100.0% | 99.8% | 99.8% | 100.0% | 99.4% | 98.2% | 81.6% | 82.5% |
| | AP | 100.0% | 99.5% | 99.5% | 100.0% | 98.7% | 95.9% | 60.5% | 76.0% |
| OVERALL | AUC | 99.3% | 98.9% | 98.1% | 94.9% | 82.3% | 92.6% | 83.3% | 67.8% |
| | AP | 99.4% | 99.0% | 98.4% | 96.0% | 84.5% | 94.1% | 79.5% | 67.2% |
| Model | AP @0.5s | AP @1.0s | AP @1.5s | mAP | FPR | Params |
|---|---|---|---|---|---|---|
| BADAS 2.0 | 94.3% | 95.7% | 92.1% | 94.0% | 4.6% | 300M |
| BADAS 2.0 Flash | 94.5% | 96.2% | 91.5% | 94.1% | 9.7% | 86M |
| BADAS 2.0 Flash Lite | 94.6% | 94.7% | 90.7% | 93.3% | 12.2% | 22M |
| BADAS 1.0 | 93.5% | 93.6% | 90.4% | 92.5% | 10.9% | 300M |
| COSMOS-BADAS | 90.4% | 88.9% | 87.5% | 88.9% | – | 2B |
| Model | DAD AUC | DAD AP | DoTA AUC | DoTA AP | DADA AUC | DADA AP |
|---|---|---|---|---|---|---|
| BADAS 2.0 | 99.3% | 92.2% | 99.1% | 99.9% | 99.1% | 99.6% |
| BADAS 2.0 Flash | 98.7% | 84.9% | 98.5% | 99.8% | 99.0% | 99.5% |
| BADAS 2.0 Flash Lite | 98.2% | 87.0% | 98.5% | 99.8% | 98.1% | 99.2% |
| BADAS 1.0 | 99.0% | 94.0% | 72.0% | 95.0% | 87.0% | 90.0% |
| COSMOS-BADAS | 94.4% | 60.2% | 98.3% | 99.8% | 95.9% | 97.8% |
| Qwen3-VL-2B | 75.4% | 14.1% | 70.9% | 95.1% | 80.5% | 88.6% |
Attention heatmaps reveal exactly what the model sees and focuses on during risk scenarios. Not a black box.
Predicts the right action to take and explains why in natural language: "Brake immediately – a dark vehicle is crossing the intersection from the left directly into the ego vehicle's path."
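Rendering such a heatmap amounts to upsampling the coarse per-patch attention grid to frame resolution and alpha-blending it over the frame. A toy sketch under those assumptions (not Nexar's visualization code):

```python
def upsample_nearest(att, out_h, out_w):
    """Nearest-neighbor upsample of a coarse attention grid to frame size."""
    in_h, in_w = len(att), len(att[0])
    return [[att[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

def overlay(frame, att, alpha=0.5):
    """Alpha-blend a [0, 1] attention map onto a grayscale frame."""
    h, w = len(frame), len(frame[0])
    heat = upsample_nearest(att, h, w)
    return [[(1 - alpha) * frame[r][c] + alpha * heat[r][c]
             for c in range(w)] for r in range(h)]

att = [[0.1, 0.9], [0.2, 0.3]]          # toy 2x2 patch-attention grid
frame = [[0.5] * 4 for _ in range(4)]   # toy 4x4 grayscale frame
out = overlay(frame, att)
```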
| Model | Params | AP | A100 (FP16) | Jetson Thor (TensorRT) |
|---|---|---|---|---|
| BADAS 2.0 | 300M | 99.4% | 34ms | 41ms |
| BADAS 2.0 Flash | 86M | 99.0% | 4.8ms | 12.5ms |
| BADAS 2.0 Flash Lite | 22M | 98.4% | 2.8ms | 5.9ms |
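The latency columns translate directly into single-stream throughput (assuming no batching, one inference per frame window):

```python
# A100 FP16 latencies from the table above, in milliseconds
latencies_ms = {"BADAS 2.0": 34, "Flash": 4.8, "Flash Lite": 2.8}

# Single-stream throughput: 1000 ms / latency = inferences per second
fps = {name: 1000 / ms for name, ms in latencies_ms.items()}
# e.g. Flash Lite sustains roughly 357 inferences/second on an A100
```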
Most collision anticipation models are trained on synthetic data or small academic datasets. BADAS 2.0 is trained exclusively on real-world edge device footage from Nexar's network – the largest ego-centric driving dataset ever assembled for this task.
BADAS 2.0 is trained on ~200,000 labeled videos (~2M windowed clips) – a 5x expansion over v1.0. The corpus is assembled through intelligent data mining: BADAS 1.0 runs as an active oracle over millions of unlabeled Nexar drives, surfacing high-risk clips for human review.
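The mining loop described above can be sketched as follows; `score_fn`, the threshold, and the review budget are our illustrative stand-ins, not values from the paper:

```python
def mine_hard_clips(score_fn, unlabeled_clips, threshold=0.8, budget=1000):
    """Run the previous-generation model as an active oracle over
    unlabeled drives and surface the highest-risk clips for human review."""
    scored = [(score_fn(clip), clip) for clip in unlabeled_clips]
    risky = [(s, c) for s, c in scored if s >= threshold]
    risky.sort(key=lambda pair: -pair[0])     # most confident first
    return [clip for _, clip in risky[:budget]]  # queue for annotators
```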
The result: 99.4% AP at 4.6% FPR – a 58% reduction in false alarms over v1.0 on the sliding-window benchmark, with gains across all subgroups including the hardest long-tail categories.
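The 58% figure follows directly from the FPR columns of the sliding-window table (10.9% for v1.0 vs 4.6% for v2.0):

```python
v1_fpr, v2_fpr = 0.109, 0.046            # FPR values from the table above
reduction = (v1_fpr - v2_fpr) / v1_fpr   # relative reduction in false alarms
print(f"{reduction:.0%}")                # → 58%
```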
Excels on rare, edge-case scenarios – animals, fog, snow, motorcyclists, infrastructure failures. 99.4% AP across all 10 long-tail categories where competitors collapse.
V-JEPA2 predicts latent-space representations of future frames. This optimizes for physical causality – what will happen – not visual similarity to training data.
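Conceptually, the objective scores prediction error in representation space rather than pixel space. A minimal sketch of that idea, where `encode` and `predict` are abstract stand-ins for the V-JEPA2 encoder and predictor:

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def latent_prediction_loss(encode, predict, frames_now, frames_future):
    """JEPA-style objective (conceptual): predict the *latent* of the
    future frames from the latent of the present ones, and measure the
    error in latent space – not visual similarity in pixel space."""
    z_now = encode(frames_now)
    z_future = encode(frames_future)   # target representation
    z_pred = predict(z_now)            # anticipated dynamics
    return mse(z_pred, z_future)
```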
Generalizes to scenes that have nothing to do with road driving – drones in flight, robot vacuums, a forklift at work. The model went beyond road rules: it learned physics.
BADAS 2.0 is not a driving model. It is a world model that happens to be deployed on roads.
Ground truth for training, validating, and benchmarking autonomous systems. Access the world's largest library of outcome-verified edge cases.
API access to collision prediction as a feature layer. Ship BADAS as a premium safety tier on Snapdragon Ride or proprietary ADAS platforms.
Real-time collision risk scoring for commercial vehicle operations. Move from reactive incident management to predictive intervention.
BADAS risk scores as underwriting inputs. Behavior-based pricing backed by production-grade collision anticipation AI.
BADAS stands for Beyond ADAS (Advanced Driver Assistance Systems). It is Nexar's collision anticipation system, now in its second generation. BADAS 2.0 fine-tunes V-JEPA2 on ~200,000 labeled edge device videos (~2M windowed clips) and achieves state-of-the-art accuracy across all public collision anticipation benchmarks.
BADAS 2.0 was evaluated on the Nexar Kaggle competition (1,344 clips, single window), a new 10-group long-tail benchmark (888 clips covering animal, pedestrian, cyclist, fog, rain, snow, intersection, infrastructure, passing/overtaking, and motorcyclist scenarios), and three public external benchmarks: DAD, DoTA, and DADA-2000 using ego-centric re-annotation and sliding-window evaluation.
The paper compares five BADAS variants (2.0, 1.0, 2.0 Flash, 2.0 Flash Lite, Open) against four VLM baselines: Gemini-BADAS (Gemini 2.5 Pro fine-tuned on BADAS data), COSMOS-BADAS (NVIDIA COSMOS-Reason2-2B fine-tuned), vanilla Gemini 2.5 Pro, and Qwen3-VL-2B. Even after fine-tuning on the same data, autoregressive VLMs remain significantly below the BADAS family on the long-tail benchmark.
BADAS 2.0 fine-tunes V-JEPA2 (ViT-L, 300M parameters) end-to-end on edge device video. A future-prediction branch estimates the scene 1 second ahead, giving the classifier access to both present and anticipated dynamics. The distilled variants – BADAS 2.0 Flash at 86M (4x compression) and BADAS 2.0 Flash Lite at 22M (14x compression) – use domain-specific SSL pre-training followed by knowledge distillation to achieve near-parity accuracy at 7–12x faster inference.
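The distillation step mentioned above is, in its standard form, a cross-entropy between the student and the teacher's temperature-softened outputs. A sketch of that generic objective (the temperature value is illustrative, not from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Standard knowledge-distillation objective: cross-entropy of the
    student against the teacher's softened distribution."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```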
BADAS is available via a public API that lets you upload video, run predictions, and export results – including prediction overlays and attention heatmaps. Try the web-based playground at the top of this page for direct browser access. Enterprise partners can request full API access for integration into AV programs, fleets, or ADAS platforms.
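A client for such an API might look like the sketch below. The endpoint path, base URL, and field names are hypothetical placeholders – consult the actual API documentation for the real contract:

```python
import json
from urllib import request

class BadasClient:
    """Hypothetical client shape; not the official SDK."""

    def __init__(self, api_key, base_url="https://api.example.com"):
        self.api_key = api_key
        self.base_url = base_url

    def _build(self, path, payload):
        """Assemble an authenticated JSON POST request."""
        return request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
        )

    def predict(self, video_url, overlays=True):
        # POST the clip reference; the response would carry per-window
        # risk scores plus optional overlays / attention heatmaps.
        req = self._build("/v1/predict",
                          {"video_url": video_url, "overlays": overlays})
        return request.urlopen(req)  # network call – not exercised here
```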