AI Blog

by Michele Laurelli

LoRA (Low-Rank Adaptation)

Technique
Definition

A parameter-efficient fine-tuning technique that adds trainable low-rank matrices to frozen pretrained weights.

LoRA keeps the pretrained weight W frozen and adds two small trainable matrices A and B, so the weight update is ΔW = BA with rank r much smaller than the weight's dimensions. This reduces the number of trainable parameters to roughly 0.1-1% of the full model while maintaining performance, and it enables efficient adapter-based fine-tuning: adapters can be stored, swapped, and merged cheaply.
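The idea can be sketched in a few lines of NumPy. This is a minimal illustration, not a training loop: the sizes d, rank r, and scaling factor alpha are hypothetical values chosen for the example, and the forward pass shows why a zero-initialized B leaves the pretrained model unchanged at the start of fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: a d x d weight, LoRA rank r, scaling alpha
d, r, alpha = 1024, 8, 16

W = rng.standard_normal((d, d))         # frozen pretrained weight (never trained)
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection, small init
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B receive gradients
    return (W + (alpha / r) * (B @ A)) @ x

x = rng.standard_normal(d)

# At initialization B = 0, so delta-W = BA = 0 and the adapted model
# reproduces the frozen pretrained model exactly
assert np.allclose(lora_forward(x), W @ x)

# Trainable-parameter ratio: 2*r*d adapter params vs d^2 full params
ratio = (A.size + B.size) / W.size
print(f"trainable fraction: {ratio:.2%}")
```

With d = 1024 and r = 8, the adapters hold 2rd = 16,384 parameters against d² = 1,048,576 in the full weight, about 1.6% — the ratio shrinks further as d grows, which is where the 0.1-1% figure comes from in large models.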

Examples

1. Fine-tuning LLMs with LoRA
2. Adapter modules
3. Multi-task learning

Michele Laurelli - AI Research & Engineering