LLaMA-Factory/data
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| belle_multiturn | add dpo mix dataset | 2024-04-20 01:31:38 +08:00 |
| example_dataset | add dpo mix dataset | 2024-04-20 01:31:38 +08:00 |
| hh_rlhf_en | add dpo mix dataset | 2024-04-20 01:31:38 +08:00 |
| mllm_demo_data | release v0.7.0 | 2024-04-26 23:18:00 +08:00 |
| ultra_chat | add dpo mix dataset | 2024-04-20 01:31:38 +08:00 |
| README.md | Update README.md | 2024-05-02 02:13:46 +08:00 |
| README_zh.md | Update README_zh.md | 2024-05-02 02:14:55 +08:00 |
| alpaca_data_en_52k.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| alpaca_data_zh_51k.json | fix #3247 | 2024-04-12 17:41:33 +08:00 |
| alpaca_gpt4_data_en.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| alpaca_gpt4_data_zh.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| c4_demo.json | support autogptq in llama board #246 | 2023-12-16 16:31:30 +08:00 |
| comparison_gpt4_data_en.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| comparison_gpt4_data_zh.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| dataset_info.json | update readme | 2024-04-26 23:39:19 +08:00 |
| glaive_toolcall_10k.json | fix dataset | 2024-01-18 12:59:30 +08:00 |
| identity.json | update data | 2024-03-02 19:37:18 +08:00 |
| lima.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| mllm_demo.json | release v0.7.0 | 2024-04-26 23:18:00 +08:00 |
| oaast_rm.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| oaast_rm_zh.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| oaast_sft.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| oaast_sft_zh.json | restore from git lfs | 2023-08-01 16:33:25 +08:00 |
| orca_rlhf.json | add orca_dpo_pairs dataset | 2024-03-20 20:09:06 +08:00 |
| wiki_demo.txt | update dataset | 2023-11-17 23:19:12 +08:00 |

README.md

If you are using a custom dataset, please add your dataset description to dataset_info.json according to the following format. We also provide several examples in the next section.

"dataset_name": {
  "hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url and file_name)",
  "ms_hub_url": "the name of the dataset repository on the ModelScope hub. (if specified, ignore script_url and file_name)",
  "script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name)",
  "file_name": "the name of the dataset file in this directory. (required if above are not specified)",
  "file_sha1": "the SHA-1 hash value of the dataset file. (optional, does not affect training)",
  "subset": "the name of the subset. (optional, default: None)",
  "folder": "the name of the folder of the dataset repository on the Hugging Face hub. (optional, default: None)",
  "ranking": "whether the dataset is a preference dataset or not. (default: false)",
  "formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})",
  "columns (optional)": {
    "prompt": "the column name in the dataset containing the prompts. (default: instruction)",
    "query": "the column name in the dataset containing the queries. (default: input)",
    "response": "the column name in the dataset containing the responses. (default: output)",
    "history": "the column name in the dataset containing the histories. (default: None)",
    "messages": "the column name in the dataset containing the messages. (default: conversations)",
    "system": "the column name in the dataset containing the system prompts. (default: None)",
    "tools": "the column name in the dataset containing the tool description. (default: None)",
    "images": "the column name in the dataset containing the image inputs. (default: None)"
  },
  "tags (optional, used for the sharegpt format)": {
    "role_tag": "the key in the message represents the identity. (default: from)",
    "content_tag": "the key in the message represents the content. (default: value)",
    "user_tag": "the value of the role_tag represents the user. (default: human)",
    "assistant_tag": "the value of the role_tag represents the assistant. (default: gpt)",
    "observation_tag": "the value of the role_tag represents the tool results. (default: observation)",
    "function_tag": "the value of the role_tag represents the function call. (default: function_call)",
    "system_tag": "the value of the role_tag represents the system prompt. (default: system, can override system column)"
  }
}

After that, you can load the custom dataset by specifying `--dataset dataset_name`.
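
As a quick sanity check, a script along these lines can verify that a custom entry provides a data source and a valid formatting value (this is only a sketch; `check_dataset_info` and `my_dataset` are hypothetical names, not part of LLaMA-Factory):

```python
import json

# Hypothetical sanity check: verify that a custom entry in dataset_info.json
# names a data source and uses a known formatting value.
SOURCE_KEYS = ("hf_hub_url", "ms_hub_url", "script_url", "file_name")

def check_dataset_info(path: str, dataset_name: str) -> None:
    with open(path, "r", encoding="utf-8") as f:
        info = json.load(f)
    entry = info[dataset_name]
    # At least one data source must be specified.
    if not any(key in entry for key in SOURCE_KEYS):
        raise ValueError(f"{dataset_name}: specify one of {SOURCE_KEYS}")
    # "formatting" defaults to "alpaca" and only accepts two values.
    if entry.get("formatting", "alpaca") not in ("alpaca", "sharegpt"):
        raise ValueError(f"{dataset_name}: unknown formatting value")
    print(f"{dataset_name}: entry looks valid")

check_dataset_info("data/dataset_info.json", "my_dataset")
```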


Currently we support datasets in the alpaca and sharegpt formats. A dataset in the alpaca format should follow the format below:

```json
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "output": "model response (required)",
    "system": "system prompt (optional)",
    "history": [
      ["user instruction in the first round (optional)", "model response in the first round (optional)"],
      ["user instruction in the second round (optional)", "model response in the second round (optional)"]
    ]
  }
]
```

Regarding the above dataset, the description in dataset_info.json should be:

"dataset_name": {
  "file_name": "data.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "system": "system",
    "history": "history"
  }
}

The query column will be concatenated with the prompt column and used as the user prompt, i.e. the user prompt becomes `prompt\nquery`. The response column contains the model response.

The system column will be used as the system prompt. The history column is a list of string pairs representing prompt-response turns from earlier rounds. Note that in supervised fine-tuning, the responses in the history are also used for training.
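
To make these rules concrete, here is a rough sketch (for illustration only, not LLaMA-Factory's actual preprocessing code) of how an alpaca-format example could be flattened into prompt-response turns:

```python
# Illustration only: flatten an alpaca-format example into prompt-response
# turns, following the rules above (history pairs first, then the current
# turn built as "instruction\ninput").
def alpaca_to_turns(example: dict) -> list[tuple[str, str]]:
    turns = [tuple(pair) for pair in example.get("history", [])]
    user_prompt = example["instruction"]
    if example.get("input"):
        user_prompt = user_prompt + "\n" + example["input"]
    turns.append((user_prompt, example["output"]))
    return turns

example = {
    "instruction": "Translate the text to French.",
    "input": "Good morning.",
    "output": "Bonjour.",
    "history": [["Say hello.", "Hello!"]],
}
print(alpaca_to_turns(example))
# [('Say hello.', 'Hello!'), ('Translate the text to French.\nGood morning.', 'Bonjour.')]
```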

For pre-training datasets, only the prompt column is used for training. For example:

```json
[
  {"text": "document"},
  {"text": "document"}
]
```

Regarding the above dataset, the description in dataset_info.json should be:

"dataset_name": {
  "file_name": "data.json",
  "columns": {
    "prompt": "text"
  }
}
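
If your corpus starts out as a plain list of documents, a small script along these lines (a minimal sketch; the input list and output file name are placeholders) produces the expected layout:

```python
import json

# Minimal sketch: wrap raw documents into the {"text": ...} records expected
# for a pre-training dataset.
documents = ["first document ...", "second document ..."]

with open("my_pretrain_data.json", "w", encoding="utf-8") as f:
    json.dump([{"text": doc} for doc in documents], f, ensure_ascii=False, indent=2)
```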

For preference datasets, the response column should be a string list of length 2, with the preferred answer appearing first. For example:

```json
[
  {
    "instruction": "user instruction",
    "input": "user input",
    "output": [
      "chosen answer",
      "rejected answer"
    ]
  }
]
```

Regarding the above dataset, the description in dataset_info.json should be:

"dataset_name": {
  "file_name": "data.json",
  "ranking": true,
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
  }
}
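
For illustration only (not LLaMA-Factory's internal code), with `"ranking": true` the two-element output list is read as a (chosen, rejected) pair:

```python
# Illustration only: with "ranking": true, the two-element "output" list is
# interpreted as (chosen, rejected), the preferred answer coming first.
def split_preference(example: dict) -> tuple[str, str]:
    chosen, rejected = example["output"]
    return chosen, rejected

example = {
    "instruction": "user instruction",
    "input": "user input",
    "output": ["chosen answer", "rejected answer"],
}
print(split_preference(example))  # ('chosen answer', 'rejected answer')
```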

A dataset in the sharegpt format should follow the format below:

```json
[
  {
    "conversations": [
      {
        "from": "human",
        "value": "user instruction"
      },
      {
        "from": "gpt",
        "value": "model response"
      }
    ],
    "system": "system prompt (optional)",
    "tools": "tool description (optional)"
  }
]
```

Regarding the above dataset, the description in dataset_info.json should be:

"dataset_name": {
  "file_name": "data.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "conversations",
    "system": "system",
    "tools": "tools"
  },
  "tags": {
    "role_tag": "from",
    "content_tag": "value",
    "user_tag": "human",
    "assistant_tag": "gpt"
  }
}

where the messages column should be a list of messages in which the user and the assistant alternate, starting with the user (user/assistant/user/assistant).
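
A quick way to check this ordering is a helper along these lines (hypothetical, assuming the default tags and ignoring the tool-call roles for simplicity):

```python
# Hypothetical check: verify that a "conversations" list alternates between
# the user tag and the assistant tag, starting with the user. The tool-call
# roles (observation / function_call) are ignored here for simplicity.
def check_turn_order(conversations: list[dict],
                     user_tag: str = "human",
                     assistant_tag: str = "gpt") -> bool:
    expected = (user_tag, assistant_tag)
    return all(msg["from"] == expected[i % 2] for i, msg in enumerate(conversations))

sample = [
    {"from": "human", "value": "user instruction"},
    {"from": "gpt", "value": "model response"},
]
print(check_turn_order(sample))  # True
```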

We also support datasets in the OpenAI format:

```json
[
  {
    "messages": [
      {
        "role": "system",
        "content": "system prompt (optional)"
      },
      {
        "role": "user",
        "content": "user instruction"
      },
      {
        "role": "assistant",
        "content": "model response"
      }
    ]
  }
]
```

Regarding the above dataset, the description in dataset_info.json should be:

"dataset_name": {
  "file_name": "data.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "messages"
  },
  "tags": {
    "role_tag": "role",
    "content_tag": "content",
    "user_tag": "user",
    "assistant_tag": "assistant",
    "system_tag": "system"
  }
}
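
Conceptually, the tags above simply relabel the OpenAI-style roles. The sketch below (a hypothetical converter, not something you need to run) shows the correspondence by rewriting such messages into the sharegpt layout:

```python
# Hypothetical converter showing how the OpenAI layout maps onto sharegpt:
# the system message becomes the system prompt, and the user/assistant roles
# are relabeled to the default "human"/"gpt" tags.
def openai_to_sharegpt(messages: list[dict]) -> tuple[str, list[dict]]:
    system = ""
    conversations = []
    for msg in messages:
        if msg["role"] == "system":
            system = msg["content"]
        else:
            tag = "human" if msg["role"] == "user" else "gpt"
            conversations.append({"from": tag, "value": msg["content"]})
    return system, conversations

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "user instruction"},
    {"role": "assistant", "content": "model response"},
]
print(openai_to_sharegpt(messages))
```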

Pre-training datasets and preference datasets are not yet compatible with the sharegpt format.