Slide 41
# for llm-jp-eval
max_seq_length: 2048
dataset_artifact: "wandb-japan/llm-leaderboard/jaster:v3" # if you use artifacts, please fill here (if not, fill null)
dataset_dir: "/jaster/1.1.0/evaluation/test"
target_dataset: "all" # {all, jamp, janli, jcommonsenseqa, jemhopqa, jnli, jsem, jsick, jsquad, jsts, niilc, chabsa}
log_dir: "./logs"
torch_dtype: "bf16" # {fp16, bf16, fp32}
custom_prompt_template: " [INST] {instruction}\n{input}[/INST]"
custom_fewshots_template: null
# Please include {input} and {output} as variables
# example of fewshots template
# "\n### 入力:\n{input}\n### 回答:\n{output}"
metainfo:
  basemodel_name: "mistralai/Mistral-7B-Instruct-v0.2"
  model_type: "open llm" # {open llm, commercial api}
  instruction_tuning_method: "None" # {"None", "Full", "LoRA", ...}
  instruction_tuning_data: ["None"] # {"None", "jaster", "dolly_ja", "oasst_ja", ...}
  num_few_shots: 0
  llm-jp-eval-version: "1.1.0"
config.yaml settings (llm-jp-eval)
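As a sketch of how the `custom_prompt_template` and few-shot template above are expanded, assuming the `{instruction}`, `{input}`, and `{output}` placeholders are filled with standard Python `str.format` semantics (the instruction and example strings below are made up for illustration):

```python
# Templates taken from the config above.
prompt_template = " [INST] {instruction}\n{input}[/INST]"
fewshot_template = "\n### 入力:\n{input}\n### 回答:\n{output}"

# Hypothetical task instance (not from the source).
prompt = prompt_template.format(
    instruction="次の文を分類してください。",  # "Classify the following sentence."
    input="今日は晴れです。",                  # "It is sunny today."
)

# A few-shot example is rendered with both {input} and {output} filled in,
# which is why the config's comment requires both variables.
fewshot = fewshot_template.format(input="2+2は？", output="4")

print(prompt)
print(fewshot)
```

With `num_few_shots: 0` as configured, only the prompt template is used; the few-shot template would only come into play with a nonzero shot count.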