update examples
parent 11a6c1bad6
commit 31ffbde24d
@@ -23,8 +23,8 @@ examples/
 │ ├── single_node.sh
 │ └── multi_node.sh
 ├── merge_lora/
-│ ├── merge.sh
-│ └── quantize.sh
+│ ├── merge.sh: Merge LoRA weights
+│ └── quantize.sh: Quantize with AutoGPTQ
 ├── inference/
 │ ├── cli_demo.sh
 │ ├── api_demo.sh
@@ -1,5 +0,0 @@
-```bash
-pip install "transformers>=4.39.1"
-pip install "accelerate>=0.28.0"
-pip install "bitsandbytes>=0.43.0"
-```
@@ -1,5 +1,9 @@
 #!/bin/bash
 
+pip install "transformers>=4.39.1"
+pip install "accelerate>=0.28.0"
+pip install "bitsandbytes>=0.43.0"
+
 CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
     --config_file ../accelerate/fsdp_config.yaml \
     ../../src/train_bash.py \
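The hunk above moves the minimum-version `pip install` pins into the launch script itself. If you want the script to fail fast instead of reinstalling on every run, a version check can be sketched in plain shell; the `version_ge` helper below is hypothetical (not part of the repository) and relies on GNU `sort -V` for dotted-version ordering:

```shell
#!/bin/bash
# Sketch only: check installed versions against the pins before launching.
# version_ge A B  — true when version A >= version B in dotted-version order.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the minimums pinned in the script above.
version_ge "4.39.1" "4.39.1" && echo "transformers pin satisfied"
version_ge "0.28.0" "0.27.2" && echo "accelerate pin satisfied"
```

In a real script the left-hand versions would come from something like `pip show transformers`, which is why the helper only compares strings rather than querying the environment itself.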
@@ -1,9 +0,0 @@
-Usage:
-
-- `pretrain.sh`: do pre-train (optional)
-- `sft.sh`: do supervised fine-tuning
-- `reward.sh`: do reward modeling (must after sft.sh)
-- `ppo.sh`: do PPO training (must after sft.sh and reward.sh)
-- `dpo.sh`: do DPO training (must after sft.sh)
-- `orpo.sh`: do ORPO training
-- `predict.sh`: do predict (must after sft.sh and dpo.sh)
@@ -1,13 +0,0 @@
-> [!WARNING]
-> Merging LoRA weights into a quantized model is not supported.
-
-> [!TIP]
-> Use `--model_name_or_path path_to_model` solely to use the exported model or model fine-tuned in full/freeze mode.
->
-> Use `CUDA_VISIBLE_DEVICES=0`, `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model with AutoGPTQ after merging the LoRA weights.
-
-Usage:
-
-- `merge.sh`: merge the lora weights
-- `quantize.sh`: quantize the model with AutoGPTQ (must after merge.sh, optional)
-
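The tip in the removed README combines three pieces: a single visible GPU, a 4-bit export, and a calibration dataset. Put together, the invocation would look roughly like the fragment below; this is a sketch assembled only from the flags quoted above, with `path_to_model` left as the placeholder the README itself uses, and it should be checked against the repository's current argument list before use:

```shell
# Sketch only: quantize with AutoGPTQ while exporting, per the tip above.
# path_to_model is a placeholder; the flags are the ones quoted in the README.
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --model_name_or_path path_to_model \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/c4_demo.json
```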