Hardware Requirements

  • The project uses the Qwen2.5-7B-Instruct model by default and fine-tunes it with LoRA during the SFT (Supervised Fine-Tuning) phase, which requires approximately 16GB of VRAM.

  • You may also use other models and methods supported by LLaMA Factory.
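The default setup described above (Qwen2.5-7B-Instruct fine-tuned with LoRA for SFT) corresponds to a LLaMA Factory training config. A minimal sketch follows; the key names mirror LLaMA Factory's example configs, but the dataset name, output path, and hyperparameter values here are illustrative placeholders, not the project's actual settings:

```python
import json

# Sketch of a LoRA SFT config for the default model.
# "identity" and the output_dir are placeholders — adjust to your setup.
config = {
    "model_name_or_path": "Qwen/Qwen2.5-7B-Instruct",
    "stage": "sft",                     # supervised fine-tuning phase
    "do_train": True,
    "finetuning_type": "lora",          # LoRA keeps VRAM use around 16GB
    "lora_rank": 8,
    "template": "qwen",
    "dataset": "identity",              # placeholder dataset name
    "cutoff_len": 2048,
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1e-4,
    "num_train_epochs": 3.0,
    "bf16": True,
    "output_dir": "saves/qwen2.5-7b/lora/sft",
}

with open("qwen_lora_sft.json", "w") as f:
    json.dump(config, f, indent=2)

# Assuming LLaMA Factory is installed, training would then be launched with:
#   llamafactory-cli train qwen_lora_sft.json
```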

| Method                          | Bits | 7B    | 13B   | 30B   | 70B    | xB     |
|---------------------------------|------|-------|-------|-------|--------|--------|
| Full (bf16 or fp16)             | 32   | 120GB | 240GB | 600GB | 1200GB | 18xGB  |
| Full (pure_bf16)                | 16   | 60GB  | 120GB | 300GB | 600GB  | 8xGB   |
| Freeze/LoRA/GaLore/APOLLO/BAdam | 16   | 16GB  | 32GB  | 64GB  | 160GB  | 2xGB   |
| QLoRA                           | 8    | 10GB  | 20GB  | 40GB  | 80GB   | xGB    |
| QLoRA                           | 4    | 6GB   | 12GB  | 24GB  | 48GB   | x/2GB  |
| QLoRA                           | 2    | 4GB   | 8GB   | 16GB  | 24GB   | x/4GB  |

The xB column gives the approximate VRAM needed to fine-tune a model with x billion parameters; all figures are estimates.
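The last column of each row (18xGB, 8xGB, 2xGB, xGB, x/2GB, x/4GB) encodes a simple linear scaling rule per method. A sketch that turns those rules into a rough estimator (function and method names here are hypothetical, chosen for illustration):

```python
def estimate_vram_gb(method: str, billions: float) -> float:
    """Rough VRAM estimate (GB) for fine-tuning a model with
    `billions` billion parameters, using the per-method scaling
    factors from the requirements table."""
    scale = {
        "full_fp16": 18,       # Full (bf16 or fp16), 32-bit training states
        "full_pure_bf16": 8,   # Full (pure_bf16)
        "lora": 2,             # Freeze/LoRA/GaLore/APOLLO/BAdam, 16-bit
        "qlora_8bit": 1,
        "qlora_4bit": 0.5,
        "qlora_2bit": 0.25,
    }
    return scale[method] * billions

print(estimate_vram_gb("lora", 7))  # ~14GB from the rule; the table lists 16GB for 7B
```

Note that the rule is asymptotic: for small models the table's measured values run slightly higher than the formula (e.g. 16GB rather than 14GB for LoRA on a 7B model), so treat the output as a lower bound.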
