update examples

This commit is contained in:
hiyouga 2024-04-02 20:41:49 +08:00
parent 11a6c1bad6
commit 31ffbde24d
5 changed files with 6 additions and 29 deletions

@@ -23,8 +23,8 @@ examples/
 │ ├── single_node.sh
 │ └── multi_node.sh
 ├── merge_lora/
-│ ├── merge.sh
-│ └── quantize.sh
+│ ├── merge.sh: Merge LoRA weights
+│ └── quantize.sh: Quantize with AutoGPTQ
 ├── inference/
 │ ├── cli_demo.sh
 │ ├── api_demo.sh

@@ -1,5 +0,0 @@
-```bash
-pip install "transformers>=4.39.1"
-pip install "accelerate>=0.28.0"
-pip install "bitsandbytes>=0.43.0"
-```

@@ -1,5 +1,9 @@
 #!/bin/bash
+pip install "transformers>=4.39.1"
+pip install "accelerate>=0.28.0"
+pip install "bitsandbytes>=0.43.0"
 CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
     --config_file ../accelerate/fsdp_config.yaml \
     ../../src/train_bash.py \

@@ -1,9 +0,0 @@
-Usage:
-- `pretrain.sh`: do pre-train (optional)
-- `sft.sh`: do supervised fine-tuning
-- `reward.sh`: do reward modeling (must after sft.sh)
-- `ppo.sh`: do PPO training (must after sft.sh and reward.sh)
-- `dpo.sh`: do DPO training (must after sft.sh)
-- `orpo.sh`: do ORPO training
-- `predict.sh`: do predict (must after sft.sh and dpo.sh)
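The "must after" constraints in the deleted README can be sketched as a small guard script. This is a sketch for illustration only: the `run_stage` helper and the `saves/` marker directories are hypothetical, not part of the repo, and the `echo` stands in for actually launching each script.

```shell
#!/bin/bash
# Sketch of the stage-ordering constraints listed above.
# saves/<stage> is a hypothetical marker that a stage has finished.
run_stage() {
  local stage="$1"; shift
  for dep in "$@"; do
    if [ ! -d "saves/$dep" ]; then
      echo "skip $stage.sh: run $dep.sh first" >&2
      return 1
    fi
  done
  echo "would run: bash $stage.sh"   # placeholder for the real launch
  mkdir -p "saves/$stage"            # mark the stage as completed
}

run_stage pretrain            # optional
run_stage sft
run_stage reward sft          # must after sft.sh
run_stage ppo sft reward      # must after sft.sh and reward.sh
run_stage dpo sft             # must after sft.sh
run_stage orpo
run_stage predict sft dpo     # must after sft.sh and dpo.sh
```

Running the stages in the listed order satisfies every dependency, so each call prints its `would run` line; invoking, say, `run_stage ppo sft reward` before `reward` would instead be skipped with a message.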

@@ -1,13 +0,0 @@
-> [!WARNING]
-> Merging LoRA weights into a quantized model is not supported.
-
-> [!TIP]
-> Use `--model_name_or_path path_to_model` solely to use the exported model or model fine-tuned in full/freeze mode.
->
-> Use `CUDA_VISIBLE_DEVICES=0`, `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model with AutoGPTQ after merging the LoRA weights.
-
-Usage:
-- `merge.sh`: merge the lora weights
-- `quantize.sh`: quantize the model with AutoGPTQ (must after merge.sh, optional)