This note surveys state-of-the-art methodologies in prompt tuning and highlights the distinctions between the methods proposed in recent research.

This note compares the methodologies of five papers: L2P, CODA-P, CPP, TaIDPT, and MLCIL.

The first paper operates in a single-task setting, while the others address class-incremental learning (CIL).

CIL questions for prompt tuning: How do we balance the two core goals of continual learning, plasticity and stability? How are prompts generated, and how is the objective function defined? What is the input to the model, and how is it optimized?

Summaries

Methodologies

Prompt Pool and Selection

Prompts - As in Prompt Tuning (PT) in NLP, where learnable tokens are prepended to the input of a frozen transformer to adapt it to downstream tasks, prompts here are learnable embedding vectors.
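The mechanism can be sketched in plain Python; all shapes and names below are illustrative, not taken from any of the papers:

```python
def prepend_prompts(prompts, token_embeddings):
    """Prepend learnable prompt vectors to the input token embeddings.
    Both arguments are lists of d-dimensional vectors; the frozen
    transformer then processes the extended sequence [p_1..p_m, x_1..x_n],
    and only the prompt vectors receive gradient updates."""
    return prompts + token_embeddings

prompts = [[0.1, 0.2], [0.3, 0.4]]               # m = 2 learnable prompts, d = 2
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # n = 3 input token embeddings
extended = prepend_prompts(prompts, tokens)      # sequence of length m + n = 5
```

During training, the backbone stays frozen and only `prompts` are optimized.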

Key - Value based (L2P)
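L2P attaches a learnable key to each prompt in the pool and selects the prompts whose keys best match a query feature produced by the frozen backbone. A minimal sketch of that selection step, with hypothetical dimensions and a plain cosine-similarity ranking:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_prompts(query, keys, top_n):
    """Rank prompt keys by similarity to the query feature and
    return the indices of the top-N matching prompts in the pool."""
    scores = [(cosine(query, k), i) for i, k in enumerate(keys)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:top_n]]

keys = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]  # one learnable key per prompt
query = [0.9, 0.1]                           # feature from the frozen backbone
print(select_prompts(query, keys, 2))        # → [0, 2]
```

The selected indices pick which prompts from the pool are prepended to the input; the keys themselves are trained so that similar inputs select similar prompts.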

Prompt Components (CODA-P)

Contrastive Prototypical Loss (CPP)
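CPP trains prompts with a contrastive loss computed against class prototypes. A generic sketch of such a loss — a temperature-scaled softmax over cosine similarities to the prototypes, not CPP's exact formulation:

```python
import math

def prototypical_contrastive_loss(embedding, prototypes, target, temperature=0.1):
    """Pull the embedding toward its class prototype and push it away
    from the other prototypes via a softmax over cosine similarities."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    logits = [cosine(embedding, p) / temperature for p in prototypes]
    m = max(logits)                               # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[target] / sum(exps))
```

The loss is small when the embedding aligns with its own class prototype and large when it aligns with another class's prototype.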

Dual Prompting (TaIDPT)

Incremental Context Prompt (MLCIL)

Methods specific to catastrophic forgetting

Expansion and Orthogonality (CODA-P)
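CODA-P expands the set of prompt components as new tasks arrive and regularizes new components to be orthogonal to existing ones. A minimal sketch of an orthogonality penalty (illustrative, not CODA-P's exact objective):

```python
def orthogonality_penalty(components):
    """Penalize overlap between prompt components: sum of squared dot
    products over every distinct pair of vectors. The penalty is zero
    iff all components are mutually orthogonal."""
    penalty = 0.0
    for i in range(len(components)):
        for j in range(i + 1, len(components)):
            dot = sum(a * b for a, b in zip(components[i], components[j]))
            penalty += dot * dot
    return penalty

old = [[1.0, 0.0]]   # component frozen after an earlier task
new = [[0.0, 1.0]]   # newly expanded component for the current task
print(orthogonality_penalty(old + new))   # 0.0 — fully orthogonal
```

Adding this penalty to the training loss discourages new components from overwriting directions already used by earlier tasks, which is the stability side of the plasticity/stability trade-off.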

Selective Confidence Cluster Replay (MLCIL)

Datasets & Models

Datasets

Models