diff --git a/README.md b/README.md
index c4d8b3a..83d00fc 100644
--- a/README.md
+++ b/README.md
@@ -283,7 +283,7 @@ The above speed and memory profiling are conducted using [this script](https://q
 Now we provide the official training script, `finetune.py`, for users to finetune the pretrained model for downstream applications in a simple fashion. Additionally, we provide shell scripts to launch finetuning with no worries. This script supports training with [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/). The shell scripts that we provide use DeepSpeed (Note: this may have conflicts with the latest version of pydantic) and PEFT. You can install them by:
 
 ```bash
-pip install peft deespeed
+pip install peft deepspeed
 ```
 
 To prepare your training data, you need to put all the samples into a list and save it to a JSON file. Each sample is a dictionary consisting of an id and a list for conversation. Below is a simple example list with 1 sample:
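
The context above describes the training-data layout (a JSON file holding a list of samples, each with an id and a conversation list) but the example itself is outside this hunk. A minimal sketch of producing such a file, assuming hypothetical field names (`id`, `conversations`, `from`, `value`) and the filename `train.json` — check the actual README example for the exact schema expected by `finetune.py`:

```python
import json

# Build the list of samples. The field names below are assumptions
# for illustration, not confirmed by this diff.
samples = [
    {
        "id": "identity_0",
        "conversations": [
            {"from": "user", "value": "Hello"},
            {"from": "assistant", "value": "Hi, how can I help you?"},
        ],
    }
]

# Save all samples as one JSON list, as the README describes.
with open("train.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False, indent=2)
```

The whole dataset is a single JSON array, so adding more samples just means appending more dictionaries to `samples` before dumping.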