Fine-tuning is the process of further training a pre-trained AI model on your own data so that it better understands your specific context, style, or objectives.
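For illustration, here is a minimal sketch of what fine-tuning typically looks like in code. It assumes the Hugging Face transformers and datasets libraries; the model name, data file, and field name are placeholders, not values specific to this platform.

```python
# Minimal fine-tuning sketch (assumptions: Hugging Face transformers/datasets,
# a placeholder checkpoint name, and a JSONL file with a "text" field).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "your-base-model"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One JSON object per line; the field name depends on your dataset format.
dataset = load_dataset("json", data_files="train.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False produces standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice you would also tune settings such as the learning rate, batch size, and evaluation strategy in TrainingArguments.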
You'll need:
Strictly follow the expected dataset structure for the model you're fine-tuning. For more details about dataset formats, visit: https://fptcloud.com/en/documents/model-fine-tuning/?doc=select-dataset-format (an illustrative sample record is sketched after this list).
Clean, diverse, and non-duplicated data.
A clear objective for fine-tuning (e.g., tech support, customer service, content writing, etc.).
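The exact record schema depends on the model and the dataset format linked above. As a purely illustrative example, a chat-style JSONL record might be written from Python like this (the field names are assumptions, not a required schema):

```python
# Hypothetical chat-style training record; check the dataset-format
# documentation linked above for the schema your model actually requires.
import json

sample = {
    "messages": [
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Security and choose 'Reset password'."},
    ]
}

# JSONL datasets store one complete example per line.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```

Keeping each record self-contained on its own line also makes cleaning and deduplication easier.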
It depends on your needs (a quick way to check a model's size class is sketched after this list):
Under 1B parameters: fast, cost-efficient, good for lightweight devices.
7B–13B parameters: a balance between quality and performance.
Over 30B parameters: ideal for demanding, high-quality applications.
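If you're unsure which size class a checkpoint falls into, one quick check is to count its parameters after loading. A sketch assuming the Hugging Face transformers library and a placeholder model name:

```python
# Count a loaded model's parameters to confirm its size class.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("your-base-model")  # placeholder
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.1f}B parameters")
```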
It depends on:
Model size.
Amount of training data.
Your hardware setup.
Typically, it ranges from a few hours to several days; a rough way to estimate this is sketched below.
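As a rule of thumb, wall-clock time scales with the number of tokens processed divided by your hardware's throughput. A back-of-the-envelope sketch; every number below is an illustrative assumption, not a benchmark:

```python
# Back-of-the-envelope training-time estimate. All values are
# illustrative assumptions, not measured throughput figures.
total_tokens = 50_000_000   # tokens in your training set (assumed)
epochs = 3                  # passes over the data (assumed)
tokens_per_second = 4_000   # assumed throughput of your GPU setup

seconds = total_tokens * epochs / tokens_per_second
print(f"~{seconds / 3600:.1f} hours")  # ~10.4 hours with these numbers
```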
It depends on the model size (the memory arithmetic behind these figures is sketched after this list):
Under 1B parameters: 1 GPU (24 GB VRAM) is sufficient.
7B models: 2–4 GPUs (40 GB VRAM each).
13B models: 4–8 GPUs recommended.
30B+ models: requires 8+ GPUs and a multi-node setup.
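These figures follow from a simple memory rule of thumb for full fine-tuning with the Adam optimizer: roughly 2 bytes per parameter for fp16/bf16 weights, 2 for gradients, and about 8 for optimizer states, before counting activations. A sketch using an assumed 7B-parameter model:

```python
# Rough VRAM estimate for full fine-tuning with Adam, ignoring activations.
# The 7e9 figure is an assumption standing in for a 7B model.
params = 7e9
bytes_per_param = 2 + 2 + 8  # fp16/bf16 weights + gradients + Adam states
print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # ~84 GB -> multiple 40 GB GPUs
```

Parameter-efficient methods such as LoRA need far less memory than this full fine-tuning estimate.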
For small to medium models (up to 13B), a single node with multiple GPUs is enough.
For larger models (30B+), a multi-node setup is recommended to provide enough aggregate memory and throughput.
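For reference, a minimal sketch of how such runs are commonly launched with PyTorch's torchrun; the script name, GPU counts, and rendezvous endpoint are assumptions:

```python
# How distributed training is typically launched with PyTorch's torchrun.
# The script name, GPU counts, and endpoint below are assumptions.
#
# Single node, 4 GPUs (models up to ~13B):
#   torchrun --nproc_per_node=4 train.py
#
# Two nodes, 8 GPUs each (30B+ models):
#   torchrun --nnodes=2 --nproc_per_node=8 \
#       --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 train.py
import torch

print(f"GPUs visible on this node: {torch.cuda.device_count()}")
```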