From a46024035bf987a0d88ecc505a9e6cda5b2c2208 Mon Sep 17 00:00:00 2001
From: Junyang Lin
Date: Mon, 25 Sep 2023 14:46:15 +0800
Subject: [PATCH] Update README.md typo

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c4d8b3a..83d00fc 100644
--- a/README.md
+++ b/README.md
@@ -283,7 +283,7 @@ The above speed and memory profiling are conducted using [this script](https://q
 Now we provide the official training script, `finetune.py`, for users to finetune the pretrained model for downstream applications in a simple fashion. Additionally, we provide shell scripts to launch finetuning with no worries. This script supports the training with [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/). The shell scripts that we provide use DeepSpeed (Note: this may have conflicts with the latest version of pydantic) and Peft. You can install them by:
 
 ```bash
-pip install peft deespeed
+pip install peft deepspeed
 ```
 
 To prepare your training data, you need to put all the samples into a list and save it to a json file. Each sample is a dictionary consisting of an id and a list for conversation. Below is a simple example list with 1 sample:
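
The README context in the patch above describes the expected training-data format: all samples go into a list saved as one JSON file, where each sample is a dictionary with an id and a list for the conversation. A minimal sketch of preparing such a file follows; note that the concrete key names (`id`, `conversations`, `from`, `value`) and the filename are assumptions for illustration, not confirmed by this patch.

```python
import json

# Hypothetical sample layout: the README only states that each sample is a
# dictionary consisting of an id and a list for the conversation; the exact
# key names used below are assumptions.
samples = [
    {
        "id": "identity_0",
        "conversations": [
            {"from": "user", "value": "Hello"},
            {"from": "assistant", "value": "Hi, how can I help you?"},
        ],
    }
]

# Save the whole list as a single JSON file for the training script to read.
with open("train_data.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)
```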