Auto-Seed VL2 (Apr 2026)

A seed is a tuple \( s = (v, w) \), where \( v \in \mathbb{R}^d \) is a visual prototype and \( w \in \mathbb{R}^d \) is a textual prototype, such that for any example \( (x, y) \) from a past task, \( \|f_I(x) - v\| \) and \( \|f_T(y) - w\| \) are small, and \( \mathrm{sim}(v, w) \) is high.
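To make the definition concrete, the sketch below encodes a seed as a pair of prototype vectors and checks the three conditions. The distance threshold `eps`, the similarity threshold `tau`, and the feature inputs are illustrative assumptions, not values or interfaces from the paper.

```python
# Minimal sketch of the seed definition above. Thresholds are hypothetical.
from dataclasses import dataclass

import numpy as np


@dataclass
class Seed:
    v: np.ndarray  # visual prototype, shape (d,)
    w: np.ndarray  # textual prototype, shape (d,)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # sim(v, w) as cosine similarity between the two prototypes
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def covers(seed: Seed, img_feat: np.ndarray, txt_feat: np.ndarray,
           eps: float = 0.5, tau: float = 0.7) -> bool:
    """Check the seed conditions for encoded features f_I(x), f_T(y):
    small visual distance, small textual distance, high sim(v, w)."""
    return (np.linalg.norm(img_feat - seed.v) < eps
            and np.linalg.norm(txt_feat - seed.w) < eps
            and cosine(seed.v, seed.w) > tau)
```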

[5] Zhang, Y., et al. (2024). VLM-CL: A benchmark for continual learning in vision-language models. NeurIPS Datasets Track.

[2] Shin, H., et al. (2017). Continual learning with deep generative replay. NIPS.

[4] Thengane, V., et al. (2023). Continual-CLIP: Fine-tuning CLIP for continual learning. CVPR Workshop.

Auto-Seed VL2 outperforms all baselines, including ER-VLM with 10× more memory, and beats generative replay by over 13 points on average. The BLEU-4 score on C→F is particularly striking, indicating that generated seeds capture caption semantics well.

6.2 Ablation Study

Removing components from Auto-Seed VL2 on C→R:

4.1 Overall Architecture

Auto-Seed VL2 maintains a set of auto-generated seeds \( \mathcal{S} \) that grows slowly over tasks, and operates in three phases per task: (1) seed replay, (2) online adaptation, and (3) seed update.
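The skeleton below is a minimal sketch of this per-task control flow under stated assumptions: the model interface (`train_on_features`, `train_on_example`, `generate_seeds`) and the number of new seeds per task are hypothetical placeholders, since the exact procedures are not specified in this excerpt.

```python
# Sketch of the per-task loop: (1) seed replay, (2) online adaptation,
# (3) seed update. All method names on `model` are illustrative placeholders.
from typing import Iterable, List, Tuple

import numpy as np


def run_task(model, task_data: Iterable[Tuple[np.ndarray, str]],
             seeds: List["Seed"]) -> None:
    # (1) Seed replay: rehearse stored prototype pairs before seeing new data,
    #     so past tasks are revisited without storing raw examples.
    for seed in seeds:
        model.train_on_features(seed.v, seed.w)

    # (2) Online adaptation: learn the current task from its (image, text) pairs.
    for image, text in task_data:
        model.train_on_example(image, text)

    # (3) Seed update: distill the task just learned into a few new prototype
    #     pairs, so the seed set S grows slowly over tasks.
    seeds.extend(model.generate_seeds(num_seeds=4))
```

In practice, seed replay and online adaptation would likely be interleaved within each training step rather than run as two separate passes; the separation above simply mirrors the three-phase description.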

[7] Khattak, M. U., et al. (2023). MaPLe: Multi-modal prompt learning. CVPR.

[6] von Oswald, J., et al. (2020). Continual learning with hypernetworks. ICLR.