>_TheQuery

Fine-tuning

MLOps

The process of further training a pretrained model on a smaller, task-specific dataset to adapt it for a particular use case.

Fine-tuning is a transfer learning technique in which a model that has been pretrained on a large general-purpose dataset is further trained on a smaller dataset specific to the target task. This approach leverages the knowledge already encoded in the pretrained model's weights, typically requiring far less data and compute than training from scratch.
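The idea can be sketched with a deliberately tiny example: a linear model "pretrained" on a large synthetic dataset, then fine-tuned for a few steps on a small task-specific dataset that follows a slightly different relationship. All names, sizes, and learning rates here are illustrative assumptions, not a real training recipe.

```python
import random

random.seed(0)

def make_data(n, slope, intercept, noise):
    # Synthetic dataset: y = slope * x + intercept + Gaussian noise.
    return [(x, slope * x + intercept + random.gauss(0, noise))
            for x in (random.uniform(-1, 1) for _ in range(n))]

def mse(w, b, data):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def train(w, b, data, lr, steps):
    # Plain gradient descent on mean squared error.
    n = len(data)
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / n
        gb = sum(2 * (w * x + b - y) for x, y in data) / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

# "Pretraining": plentiful general-purpose data (y ≈ 2x + 1).
general = make_data(1000, slope=2.0, intercept=1.0, noise=0.1)
w, b = train(0.0, 0.0, general, lr=0.1, steps=500)

# Fine-tuning: a small task-specific dataset with a shifted relationship
# (y ≈ 2.5x + 0.5). Start from the pretrained weights and take fewer,
# gentler steps instead of training from scratch.
task = make_data(20, slope=2.5, intercept=0.5, noise=0.1)
w_ft, b_ft = train(w, b, task, lr=0.05, steps=100)
```

After fine-tuning, `mse(w_ft, b_ft, task)` is lower than `mse(w, b, task)`: the pretrained weights provide a good starting point, and the small task dataset only needs to nudge them.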

There are several approaches to fine-tuning. Full fine-tuning updates all model parameters, which can be expensive for large models. Parameter-efficient fine-tuning methods such as LoRA (Low-Rank Adaptation), prefix tuning, and adapter layers update only a small fraction of parameters while achieving competitive performance. Instruction fine-tuning trains models on input-output pairs formatted as instructions, improving their ability to follow directions.
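As a rough illustration of the LoRA idea, the frozen weight matrix W is left untouched and a low-rank update B·A is learned alongside it, so only the small factors are trained. The layer sizes, rank, and scaling value below are illustrative assumptions, not values from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4              # hypothetical layer sizes and LoRA rank
alpha = 8.0                              # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))       # pretrained weight, kept frozen
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor, small init
B = np.zeros((d_out, r))                 # trainable factor, zero init so the
                                         # update contributes nothing at first

def forward(x):
    # Frozen path plus scaled low-rank update: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

# Only A and B would receive gradients during fine-tuning:
full_params = W.size                     # 64 * 64 = 4096
lora_params = A.size + B.size            # 4 * 64 + 64 * 4 = 512
```

Because B starts at zero, the adapted layer initially computes exactly the pretrained layer's output, and training only has to learn the low-rank correction; here that correction has 512 parameters versus 4096 for full fine-tuning of this layer.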

Fine-tuning is central to deploying large language models and other foundation models for specific applications. It allows organizations to customize general-purpose models for domain-specific tasks like medical question answering, legal document analysis, or code generation, achieving strong performance with relatively modest datasets and computational budgets.

Last updated: February 20, 2026