We provide diverse examples of fine-tuning LLMs.
```
examples/
├── lora_single_gpu/
│   ├── pretrain.sh: Do continuous pre-training using LoRA
│   ├── sft.sh: Do supervised fine-tuning using LoRA
│   ├── reward.sh: Do reward modeling using LoRA
│   ├── ppo.sh: Do PPO training using LoRA
│   ├── dpo.sh: Do DPO training using LoRA
│   ├── orpo.sh: Do ORPO training using LoRA
│   ├── sft_mllm.sh: Do supervised fine-tuning on multimodal data using LoRA
│   ├── prepare.sh: Save the tokenized dataset
│   └── predict.sh: Do batch prediction and compute BLEU and ROUGE scores after LoRA tuning
├── qlora_single_gpu/
│   ├── bitsandbytes.sh: Fine-tune 4/8-bit BNB models using QLoRA
│   ├── gptq.sh: Fine-tune 4/8-bit GPTQ models using QLoRA
│   ├── awq.sh: Fine-tune 4-bit AWQ models using QLoRA
│   └── aqlm.sh: Fine-tune 2-bit AQLM models using QLoRA
├── lora_multi_gpu/
│   ├── single_node.sh: Fine-tune model with Accelerate on a single node using LoRA
│   ├── multi_node.sh: Fine-tune model with Accelerate on multiple nodes using LoRA
│   └── ds_zero3.sh: Fine-tune model with DeepSpeed ZeRO-3 using LoRA (weight sharding)
├── full_multi_gpu/
│   ├── single_node.sh: Do full-parameter fine-tuning with DeepSpeed on a single node
│   ├── multi_node.sh: Do full-parameter fine-tuning with DeepSpeed on multiple nodes
│   └── predict.sh: Do parallel batch prediction and compute BLEU and ROUGE scores after full-parameter tuning
├── merge_lora/
│   ├── merge.sh: Merge LoRA weights into the pre-trained model
│   └── quantize.sh: Quantize the fine-tuned model with AutoGPTQ
├── inference/
│   ├── cli_demo.sh: Chat with the fine-tuned model in the CLI with LoRA adapters
│   ├── api_demo.sh: Chat with the fine-tuned model in an OpenAI-style API with LoRA adapters
│   ├── web_demo.sh: Chat with the fine-tuned model in the Web browser with LoRA adapters
│   └── evaluate.sh: Evaluate the model on the MMLU/CMMLU/C-Eval benchmarks with LoRA adapters
└── extras/
    ├── galore/
    │   └── sft.sh: Fine-tune model with GaLore
    ├── badam/
    │   └── sft.sh: Fine-tune model with BAdam
    ├── loraplus/
    │   └── sft.sh: Fine-tune model using LoRA+
    ├── mod/
    │   └── sft.sh: Fine-tune model using Mixture-of-Depths
    ├── llama_pro/
    │   ├── expand.sh: Expand layers in the model
    │   └── sft.sh: Fine-tune the expanded model
    └── fsdp_qlora/
        └── sft.sh: Fine-tune quantized model with FSDP+QLoRA
```
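
Each entry is a standalone shell script and can be launched directly from the repository root, for example:

```bash
bash examples/lora_single_gpu/sft.sh
```

For orientation, the sketch below shows roughly what such a launch script looks like. It is a minimal, hypothetical example rather than the repository's actual script: the training entry point (`src/train_bash.py`), the model, the dataset, and all flag values are assumptions, so consult the real `examples/lora_single_gpu/sft.sh` for the exact arguments.

```bash
#!/bin/bash
# Minimal sketch of a single-GPU LoRA SFT launch script.
# The entry point, model, dataset, and hyperparameters below are
# illustrative assumptions; see examples/lora_single_gpu/sft.sh for
# the authoritative arguments.
#   --stage sft             selects the supervised fine-tuning stage
#   --finetuning_type lora  trains LoRA adapters instead of all weights
#   --lora_target           names the attention projections to adapt

CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir saves/llama2-7b/lora/sft \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --fp16
```

The same pattern typically applies throughout: each script pins its GPUs via `CUDA_VISIBLE_DEVICES` and passes a stage-specific set of flags, so adapting a recipe usually means editing a few argument values rather than writing new code.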