# JEPA Wiki — Index
## Where to start

- **New to JEPA?** Read overview first for the full story, then concepts/latent-prediction for the core principle, then concepts/jepa-vs-alternatives to see how JEPA differs from LLMs and diffusion models.
- **Looking for a specific paper?** Use the paper tables below — each page has an architecture diagram or video, key results, and cross-references.
- **Want concrete numbers?** Go to concepts/benchmarks-and-results.
## Concepts
### Core principles
- concepts/latent-prediction — The defining JEPA principle: predict in representation space, never in pixel space
- concepts/masking-strategies — Patch, multi-block, sequencer, object-level, dense predictive loss, deep self-supervision
- concepts/collapse-prevention — EMA, SIGReg, deep self-supervision, VICReg, distillation
### Architecture & training
- concepts/vision-transformers — Encoder scales (5M to 2B), tokenization, positional encoding, predictor designs
- concepts/training-recipes — Datasets, training schedules, hyperparameters, scaling laws, compute costs
### Comparisons
- concepts/jepa-vs-alternatives — JEPA vs LLMs, diffusion, DINO, MAE. The capacity waste argument, why diffusion can't plan, energy-based planning, representations as coarse-grained data.
### Applications
- concepts/world-models-and-planning — Action conditioning, CEM planning, robot results, planning speed
- concepts/object-centric-representations — Patches vs objects, causal inductive bias, efficiency gains
- concepts/benchmarks-and-results — Consolidated results across all tasks and models
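The CEM planning mentioned above can be illustrated with a toy sketch: cross-entropy-method optimization of a single scalar action. The cost function here is made up purely for illustration — in a real system it would score imagined latent rollouts produced by the JEPA predictor under candidate actions.

```python
import random
import statistics

def cost(action):
    # Hypothetical stand-in for the latent-space rollout cost a world
    # model would compute; the "best" action is 0.7 by construction.
    return (action - 0.7) ** 2

def cem_plan(n_iters=10, pop=64, elite=8):
    # Cross-entropy method: sample actions, keep the lowest-cost elites,
    # refit the sampling distribution to them, repeat.
    mu, sigma = 0.0, 1.0
    for _ in range(n_iters):
        samples = [random.gauss(mu, sigma) for _ in range(pop)]
        elites = sorted(samples, key=cost)[:elite]
        mu = statistics.mean(elites)
        sigma = statistics.stdev(elites) + 1e-6  # keep a floor on exploration
    return mu
```

Because planning happens against cheap latent predictions rather than rendered pixels, a loop like this can run many candidate rollouts per control step — the source of the planning-speed results covered on the world-models page.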
## Papers — JEPA Family (by date)
| ID | Short Name | Date | Modality | Page |
|---|---|---|---|---|
| 2301.08243 | I-JEPA | 2023-01 | image | papers/2301.08243 |
| 2307.12698 | MC-JEPA | 2023-07 | video | papers/2307.12698 |
| 2403.00504 | IWM | 2024-03 | image | papers/2403.00504 |
| 2404.08471 | V-JEPA | 2024-04 | video | papers/2404.08471 |
| 2404.16432 | Point-JEPA | 2024-04 | point-cloud | papers/2404.16432 |
| 2409.15803 | 3D-JEPA | 2024-09 | 3D | papers/2409.15803 |
| 2501.14622 | ACT-JEPA | 2025-01 | policy/action | papers/2501.14622 |
| 2506.09985 | V-JEPA 2 | 2025-06 | video | papers/2506.09985 |
| 2509.14252 | LLM-JEPA | 2025-09 | language | papers/2509.14252 |
| 2511.08544 | LeJEPA | 2025-11 | image | papers/2511.08544 |
| 2512.10942 | VL-JEPA | 2025-12 | vision-language | papers/2512.10942 |
| 2602.03604 | EB-JEPA | 2026-02 | multi | papers/2602.03604 |
| 2602.11389 | C-JEPA | 2026-02 | video/causal | papers/2602.11389 |
| 2603.14482 | V-JEPA 2.1 | 2026-03 | video/image | papers/2603.14482 |
| 2603.19312 | LeWorldModel | 2026-03 | video/pixels | papers/2603.19312 |
| 2603.22281 | ThinkJEPA | 2026-03 | video/VLM | papers/2603.22281 |
## Papers — Related Work
| ID | Short Name | Date | Modality | Page |
|---|---|---|---|---|
| 2104.14294 | DINO | 2021-04 | image | papers/2104.14294 |
| 2304.07193 | DINOv2 | 2023-04 | image | papers/2304.07193 |
| 2410.06940 | REPA | 2024-10 | image | papers/2410.06940 |
| 2508.10104 | DINOv3 | 2025-08 | image | papers/2508.10104 |
| 2512.16922 | NEPA | 2025-12 | image | papers/2512.16922 |
| 2603.06507 | Self-Flow | 2026-03 | image/video/audio | papers/2603.06507 |
| 2604.01193 | SSD | 2026-04 | language/code | papers/2604.01193 |
### Not yet ingested

| ID | Short Name | Date | Modality | Status |
|---|---|---|---|---|
| 2507.02915 | Audio-JEPA | 2025-07 | audio | not on HF Papers |
## Foundational
- papers/lecun-position-paper — "A Path Towards Autonomous Machine Intelligence" (LeCun, 2022). The 62-page blueprint: cognitive architecture, JEPA, H-JEPA, energy-based planning, Mode-1/Mode-2, four training criteria. What was validated and what remains open.
- papers/saining-xie-interview — 7-hour interview with Saining Xie (AMI Labs co-founder). Three stages of understanding JEPA, the case against LLMs, DiT/REPA origin story, AMI Labs mission, forward predictions.
## Sources
- Turing Post: 14 JEPA Milestones — the article that seeded this wiki
- LeCun: A Path Towards Autonomous Machine Intelligence — the foundational position paper