LLaMA-Factory/examples/inference/llama3_vllm.yaml
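# Example config: chat inference with Meta-Llama-3-8B-Instruct using the vLLM backend.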

# Hugging Face model ID or local path of the model to load.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
# Chat template used to format prompts; `llama3` matches the Llama 3 instruct format.
template: llama3
# Run inference with the vLLM engine instead of the default Hugging Face backend.
infer_backend: vllm
# Disable CUDA graph capture in vLLM (eager mode); saves GPU memory at some throughput cost.
vllm_enforce_eager: true
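
# Usage sketch (assuming the standard LLaMA-Factory CLI entry point; adjust the path if the file lives elsewhere):
#   llamafactory-cli chat examples/inference/llama3_vllm.yaml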