Fine-tune with LoRA
Customize your language models efficiently with Low-Rank Adaptation. Select your model, dataset, and parameters to start training.
Low-Rank Adaptation for Efficient Fine-tuning
Configure Your Training
Select your model, dataset, and training parameters to begin fine-tuning.
14 billion parameters
7 billion parameters
2 billion parameters
52K instruction-following samples
15K instruction-following samples
Upload your own data
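The configuration options above boil down to one idea: LoRA freezes the pretrained weight matrix W and trains only a small low-rank update, so the adapted layer computes W + (alpha / r) · B·A. A minimal NumPy sketch, with hypothetical dimensions and rank (not tied to any of the models listed):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 8, 16   # assumed shapes; r and alpha are the tunable LoRA knobs

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: adapter starts as a no-op

def lora_forward(x):
    # Base path plus scaled low-rank update; in training, gradients
    # would flow only through A and B, never through W.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((4, d_in))
# Because B is zero-initialized, the adapted layer initially matches the base layer.
assert np.allclose(lora_forward(x), x @ W.T)
```

Because only A and B receive gradients, the optimizer state and gradient buffers cover a tiny fraction of the model, which is what makes fine-tuning feasible on a single GPU.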
Monitor Your Training
Loss
0.42
Learning Rate
0.001
GPU Utilization
87%
VRAM Usage
12.4/16 GB
Elapsed Time
24m 32s
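The VRAM figure above stays modest because so few parameters are trainable. A back-of-the-envelope sketch for a generic 7B-parameter transformer (layer count, hidden size, and the choice of adapting the four attention projections are assumptions, not the exact architecture of the models listed): rank-8 adapters add about 8.4 million trainable parameters, roughly 0.12% of the model.

```python
# Rough count of trainable LoRA parameters for an assumed 7B transformer.
n_layers, d_model, r = 32, 4096, 8
adapted_mats = 4 * n_layers                   # q, k, v, o projections per layer
lora_params = adapted_mats * (d_model * r + r * d_model)  # one A and one B per matrix
total_params = 7_000_000_000

fraction = lora_params / total_params
print(f"trainable LoRA params: {lora_params:,} ({fraction:.4%} of 7B)")
```

Optimizer states and activations still consume memory, so this is an illustration of scale, not a full VRAM budget.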