From e6f2a7af6d2c59889a865add93812729b05a8b22 Mon Sep 17 00:00:00 2001
From: "lukeming.lkm"
Date: Wed, 11 Oct 2023 19:09:26 +0800
Subject: [PATCH] update readme

---
 README.md    | 45 ++++++++++++++++++++++++++++++++++++---------
 README_CN.md | 45 ++++++++++++++++++++++++++++++++++++---------
 README_JA.md | 45 ++++++++++++++++++++++++++++++++++++---------
 3 files changed, 108 insertions(+), 27 deletions(-)

diff --git a/README.md b/README.md
index fc5fe1c..2132fc6 100644
--- a/README.md
+++ b/README.md
@@ -195,6 +195,28 @@ print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))

+In the event of a network issue while attempting to download model checkpoints and code from HuggingFace, an alternative approach is to first fetch the checkpoint from ModelScope and then load it from the local directory, as outlined below:
+
+```python
+from modelscope import snapshot_download
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Download the model checkpoint to a local directory, model_dir
+# model_dir = snapshot_download('qwen/Qwen-7B', revision='v1.1.4')
+# model_dir = snapshot_download('qwen/Qwen-7B-Chat', revision='v1.1.4')
+# model_dir = snapshot_download('qwen/Qwen-14B', revision='v1.0.4')
+model_dir = snapshot_download('qwen/Qwen-14B-Chat', revision='v1.0.4')
+
+# Load the checkpoint from the local directory
+# trust_remote_code is still set to True because the model code is loaded from the local directory rather than from transformers
+tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(
+    model_dir,
+    device_map="auto",
+    trust_remote_code=True
+).eval()
+```
+
 #### 🤖 ModelScope

 ModelScope is an open-source platform for Model-as-a-Service (MaaS), which provides flexible and cost-effective model services to AI developers. Similarly, you can run the models with ModelScope as shown below:

@@ -449,32 +471,37 @@ merged_model.save_pretrained(new_model_directory, max_shard_size="2048MB", safe_
 Note: For multi-GPU training, you need to specify the proper hyperparameters for distributed training based on your machine. Besides, we advise you to specify your maximum sequence length with the argument `--model_max_length`, based on your consideration of data, memory footprint, and training speed.

 ### Profiling of Memory and Speed
-We profile the GPU memory and training speed of both LoRA (LoRA (emb) refers to training the embedding and output layer, while LoRA has no trainable embedding and output layer) and Q-LoRA in the setup of single-GPU training. In this test, we experiment on a single A100-SXM4-80G GPU, and we use CUDA 11.8 and Pytorch 2.0. We uniformly use a batch size of 1 and gradient accumulation of 8. We profile the memory (GB) and speed (s/iter) of inputs of different lengths, namely 256, 512, 1024, and 2048. The statistics are listed below:
+We profile the GPU memory and training speed of both LoRA (LoRA (emb) refers to training the embedding and output layers, while LoRA has no trainable embedding or output layer) and Q-LoRA in the setup of single-GPU training. In this test, we experiment on a single A100-SXM4-80G GPU with CUDA 11.8 and PyTorch 2.0, and FlashAttention 2 is applied. We uniformly use a batch size of 1 and gradient accumulation of 8. We profile the memory (GB) and speed (s/iter) of inputs of different lengths, namely 256, 512, 1024, 2048, 4096, and 8192. We also report the statistics of full-parameter finetuning with Qwen-7B on 2 A100 GPUs; due to GPU memory limitations, we only report the statistics of 256, 512, and 1024 tokens for that setup. The statistics are listed below (a measurement sketch follows the table):

-| Model Size | Method     | 256             | 512             | 1024            | 2048             |
-|------------|------------|-----------------|-----------------|-----------------|------------------|
-| 7B         | LoRA       | 19.9G / 1.6s/it | 20.2G / 1.6s/it | 21.5G / 2.9s/it | 23.7G / 5.5s/it  |
-| 7B         | LoRA (emb) | 33.5G / 1.6s/it | 34.0G / 1.7s/it | 35.0G / 3.0s/it | 35.0G / 5.7s/it  |
-| 7B         | Q-LoRA     | 11.5G / 3.0s/it | 12.2G / 3.6s/it | 12.7G / 4.8s/it | 13.9G / 7.3s/it  |
-| 14B        | LoRA       | 34.5G / 2.0s/it | 35.0G / 2.5s/it | 35.2G / 4.9s/it | 37.3G / 8.9s/it  |
-| 14B        | LoRA (emb) | 51.0G / 2.1s/it | 51.0G / 2.7s/it | 51.5G / 5.0s/it | 53.9G / 9.2s/it  |
-| 14B        | Q-LoRA     | 18.3G / 5.4s/it | 18.4G / 6.4s/it | 18.5G / 8.5s/it | 19.9G / 12.4s/it |
+| Model Size | Method         | 256              | 512              | 1024             | 2048             | 4096             | 8192             |
+|------------|----------------|------------------|------------------|------------------|------------------|------------------|------------------|
+| 7B         | LoRA           | 20.1G / 1.2s/it  | 20.4G / 1.5s/it  | 21.5G / 2.8s/it  | 23.8G / 5.2s/it  | 29.7G / 10.1s/it | 36.6G / 21.3s/it |
+| 7B         | LoRA (emb)     | 33.7G / 1.4s/it  | 34.1G / 1.6s/it  | 35.2G / 2.9s/it  | 35.1G / 5.3s/it  | 39.2G / 10.3s/it | 48.5G / 21.7s/it |
+| 7B         | Q-LoRA         | 11.5G / 3.0s/it  | 11.5G / 3.0s/it  | 12.3G / 3.5s/it  | 13.9G / 7.0s/it  | 16.9G / 11.6s/it | 23.5G / 22.3s/it |
+| 7B         | Full-parameter | 139.2G / 4.0s/it | 148.0G / 4.0s/it | 162.0G / 4.5s/it | -                | -                | -                |
+| 14B        | LoRA           | 34.6G / 1.6s/it  | 35.1G / 2.4s/it  | 35.3G / 4.4s/it  | 37.4G / 8.4s/it  | 42.5G / 17.0s/it | 55.2G / 36.0s/it |
+| 14B        | LoRA (emb)     | 51.2G / 1.7s/it  | 51.1G / 2.6s/it  | 51.5G / 4.6s/it  | 54.1G / 8.6s/it  | 56.8G / 17.2s/it | 67.7G / 36.3s/it |
+| 14B        | Q-LoRA         | 18.7G / 5.3s/it  | 18.4G / 6.3s/it  | 18.9G / 8.2s/it  | 19.9G / 11.8s/it | 23.0G / 20.1s/it | 27.9G / 38.3s/it |
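For reference, per-iteration memory and speed figures like those in the table above can be collected with a generic PyTorch measurement loop along the lines of the sketch below. This is a minimal, hedged sketch rather than the script behind these measurements; `model`, `optimizer`, and `data_loader` are placeholders for an already-prepared Qwen model, its optimizer, and a loader that yields tokenized batches (with labels) of the target sequence length.

```python
import time
import torch

def profile_iterations(model, optimizer, data_loader, grad_accum=8, steps=10, warmup=2):
    """Measure peak GPU memory (GB) and seconds per optimizer step (batch size 1 x gradient accumulation)."""
    model.train()
    torch.cuda.reset_peak_memory_stats()
    step_times = []
    batches = iter(data_loader)  # assumed to yield enough batches of the target length
    for step in range(steps):
        start = time.perf_counter()
        for _ in range(grad_accum):
            batch = next(batches)
            loss = model(**batch).loss      # batches are assumed to include labels
            (loss / grad_accum).backward()
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
        torch.cuda.synchronize()            # wait for the step to finish before timing it
        if step >= warmup:                  # skip the first iterations to exclude warm-up overhead
            step_times.append(time.perf_counter() - start)
    peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
    print(f"peak memory: {peak_gb:.1f} GB, speed: {sum(step_times) / len(step_times):.1f} s/it")
```

Numbers collected this way are only comparable when everything else (attention implementation, precision, optimizer state) is held fixed, which is why the paragraph above pins CUDA, PyTorch, and FlashAttention versions.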
diff --git a/README_CN.md b/README_CN.md
index de3a656..67201bd 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -186,6 +186,28 @@ print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))

+If the code above cannot pull the model and code from HuggingFace for whatever reason, you can first download them from ModelScope to a local directory and then load the model from there:
+
+```python
+from modelscope import snapshot_download
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Download the model checkpoint to a local directory, model_dir
+# model_dir = snapshot_download('qwen/Qwen-7B', revision='v1.1.4')
+# model_dir = snapshot_download('qwen/Qwen-7B-Chat', revision='v1.1.4')
+# model_dir = snapshot_download('qwen/Qwen-14B', revision='v1.0.4')
+model_dir = snapshot_download('qwen/Qwen-14B-Chat', revision='v1.0.4')
+
+# Load the checkpoint from the local directory
+# trust_remote_code is still set to True because the model code is loaded from the local directory rather than from transformers
+tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(
+    model_dir,
+    device_map="auto",
+    trust_remote_code=True
+).eval()
+```
+
 #### 🤖 ModelScope

 ModelScope (魔搭) is an open-source Model-as-a-Service (MaaS) sharing platform that provides AI developers with flexible, easy-to-use, and low-cost one-stop model services. Using ModelScope is just as simple, as shown below:

@@ -435,32 +457,37 @@ merged_model.save_pretrained(new_model_directory, max_shard_size="2048MB", safe_
 Note: For distributed training, you need to specify the proper distributed-training hyperparameters according to your needs and your machine. In addition, set your maximum sequence length with `--model_max_length` according to your data, GPU memory, and expected training speed.

 ### Memory Usage and Training Speed
-Below we record the GPU memory usage and training speed of the 7B and 14B models when handling inputs of different lengths on a single GPU with LoRA (LoRA (emb) means the embedding and output layers are trained, while plain LoRA does not optimize these parameters) and Q-LoRA. This evaluation runs on a single A100-SXM4-80G GPU with CUDA 11.8 and PyTorch 2.0. We uniformly use a batch size of 1 and gradient accumulation of 8, and record the memory usage (GB) and training speed (s/iter) at input lengths of 256, 512, 1024, and 2048. The exact numbers are listed below:
+Below we record the GPU memory usage and training speed of the 7B and 14B models when handling inputs of different lengths on a single GPU with LoRA (LoRA (emb) means the embedding and output layers are trained, while plain LoRA does not optimize these parameters) and Q-LoRA. This evaluation runs on a single A100-SXM4-80G GPU with CUDA 11.8 and PyTorch 2.0, and FlashAttention 2 is used. We uniformly use a batch size of 1 and gradient accumulation of 8, and record the memory usage (GB) and training speed (s/iter) at input lengths of 256, 512, 1024, 2048, 4096, and 8192. We also profiled full-parameter finetuning of Qwen-7B on 2 A100 GPUs; limited by GPU memory, we only tested 256, 512, and 1024 tokens for that setup. The exact numbers are listed below:

-| Model Size | Method     | 256             | 512             | 1024            | 2048             |
-|------------|------------|-----------------|-----------------|-----------------|------------------|
-| 7B         | LoRA       | 19.9G / 1.6s/it | 20.2G / 1.6s/it | 21.5G / 2.9s/it | 23.7G / 5.5s/it  |
-| 7B         | LoRA (emb) | 33.5G / 1.6s/it | 34.0G / 1.7s/it | 35.0G / 3.0s/it | 35.0G / 5.7s/it  |
-| 7B         | Q-LoRA     | 11.5G / 3.0s/it | 12.2G / 3.6s/it | 12.7G / 4.8s/it | 13.9G / 7.3s/it  |
-| 14B        | LoRA       | 34.5G / 2.0s/it | 35.0G / 2.5s/it | 35.2G / 4.9s/it | 37.3G / 8.9s/it  |
-| 14B        | LoRA (emb) | 51.0G / 2.1s/it | 51.0G / 2.7s/it | 51.5G / 5.0s/it | 53.9G / 9.2s/it  |
-| 14B        | Q-LoRA     | 18.3G / 5.4s/it | 18.4G / 6.4s/it | 18.5G / 8.5s/it | 19.9G / 12.4s/it |
+| Model Size | Method         | 256              | 512              | 1024             | 2048             | 4096             | 8192             |
+|------------|----------------|------------------|------------------|------------------|------------------|------------------|------------------|
+| 7B         | LoRA           | 20.1G / 1.2s/it  | 20.4G / 1.5s/it  | 21.5G / 2.8s/it  | 23.8G / 5.2s/it  | 29.7G / 10.1s/it | 36.6G / 21.3s/it |
+| 7B         | LoRA (emb)     | 33.7G / 1.4s/it  | 34.1G / 1.6s/it  | 35.2G / 2.9s/it  | 35.1G / 5.3s/it  | 39.2G / 10.3s/it | 48.5G / 21.7s/it |
+| 7B         | Q-LoRA         | 11.5G / 3.0s/it  | 11.5G / 3.0s/it  | 12.3G / 3.5s/it  | 13.9G / 7.0s/it  | 16.9G / 11.6s/it | 23.5G / 22.3s/it |
+| 7B         | Full-parameter | 139.2G / 4.0s/it | 148.0G / 4.0s/it | 162.0G / 4.5s/it | -                | -                | -                |
+| 14B        | LoRA           | 34.6G / 1.6s/it  | 35.1G / 2.4s/it  | 35.3G / 4.4s/it  | 37.4G / 8.4s/it  | 42.5G / 17.0s/it | 55.2G / 36.0s/it |
+| 14B        | LoRA (emb)     | 51.2G / 1.7s/it  | 51.1G / 2.6s/it  | 51.5G / 4.6s/it  | 54.1G / 8.6s/it  | 56.8G / 17.2s/it | 67.7G / 36.3s/it |
+| 14B        | Q-LoRA         | 18.7G / 5.3s/it  | 18.4G / 6.3s/it  | 18.9G / 8.2s/it  | 19.9G / 11.8s/it | 23.0G / 20.1s/it | 27.9G / 38.3s/it |
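The gap between the LoRA and Q-LoRA rows in the tables above comes mainly from how the frozen base model is loaded: for Q-LoRA the base weights are typically quantized to 4 bits. The sketch below shows that general pattern with Hugging Face transformers and peft; it is not the repository's finetune.py, and the rank, target module names, and compute dtype are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_dir = "Qwen/Qwen-7B"  # or a local ModelScope directory, as above

use_qlora = True
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
) if use_qlora else None

# 4-bit base weights for Q-LoRA, full-precision base weights for plain LoRA
base = AutoModelForCausalLM.from_pretrained(
    model_dir,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
if use_qlora:
    base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn", "c_proj", "w1", "w2"],  # assumed module names for Qwen blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter (and optionally embedding/output) weights train
```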
diff --git a/README_JA.md b/README_JA.md
index 46953b2..6c25759 100644
--- a/README_JA.md
+++ b/README_JA.md
@@ -191,6 +191,28 @@ print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))

+If a network problem occurs while downloading model checkpoints and code from HuggingFace, you can download the checkpoint from ModelScope instead and load it from the local directory, as shown below:
+
+```python
+from modelscope import snapshot_download
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# Download the model checkpoint to a local directory, model_dir
+# model_dir = snapshot_download('qwen/Qwen-7B', revision='v1.1.4')
+# model_dir = snapshot_download('qwen/Qwen-7B-Chat', revision='v1.1.4')
+# model_dir = snapshot_download('qwen/Qwen-14B', revision='v1.0.4')
+model_dir = snapshot_download('qwen/Qwen-14B-Chat', revision='v1.0.4')
+
+# Load the checkpoint from the local directory
+# trust_remote_code is still set to True because the model code is loaded from the local directory rather than from transformers
+tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(
+    model_dir,
+    device_map="auto",
+    trust_remote_code=True
+).eval()
+```
+
 #### 🤖 ModelScope

 ModelScope is an open-source platform for MaaS (Model-as-a-Service) that provides AI developers with flexible and cost-effective model services. Similarly, you can run the models with ModelScope as shown below:

@@ -443,32 +465,37 @@ merged_model.save_pretrained(new_model_directory, max_shard_size="2048MB", safe_
 Note: For multi-GPU training, you need to specify the proper hyperparameters for distributed training according to your machine. We also recommend specifying the maximum sequence length with the `--model_max_length` argument, taking data, memory footprint, and training speed into account.

 ### Profiling of Memory and Speed
-In the single-GPU training setup, we profile the GPU memory and training speed of LoRA (LoRA (emb) trains the embedding and output layers, while LoRA does not) and Q-LoRA. This test uses a single A100-SXM4-80G GPU with CUDA 11.8 and PyTorch 2.0. We profile the memory (GB) and speed (s/iter) for inputs of lengths 256, 512, 1024, and 2048. The statistics are shown below:
+In the single-GPU training setup, we profile the GPU memory and training speed of LoRA (LoRA (emb) trains the embedding and output layers, while LoRA does not) and Q-LoRA. This test uses a single A100-SXM4-80G GPU with CUDA 11.8 and PyTorch 2.0, and FlashAttention 2 is used. We profile the memory (GB) and speed (s/iter) for inputs of lengths 256, 512, 1024, 2048, 4096, and 8192. We also report the statistics of full-parameter finetuning with Qwen-7B on 2 A100 GPUs; due to GPU memory limits, only 256, 512, and 1024 tokens are reported for that setup. The statistics are shown below:

-| Model Size | Method     | 256             | 512             | 1024            | 2048             |
-|------------|------------|-----------------|-----------------|-----------------|------------------|
-| 7B         | LoRA       | 19.9G / 1.6s/it | 20.2G / 1.6s/it | 21.5G / 2.9s/it | 23.7G / 5.5s/it  |
-| 7B         | LoRA (emb) | 33.5G / 1.6s/it | 34.0G / 1.7s/it | 35.0G / 3.0s/it | 35.0G / 5.7s/it  |
-| 7B         | Q-LoRA     | 11.5G / 3.0s/it | 12.2G / 3.6s/it | 12.7G / 4.8s/it | 13.9G / 7.3s/it  |
-| 14B        | LoRA       | 34.5G / 2.0s/it | 35.0G / 2.5s/it | 35.2G / 4.9s/it | 37.3G / 8.9s/it  |
-| 14B        | LoRA (emb) | 51.0G / 2.1s/it | 51.0G / 2.7s/it | 51.5G / 5.0s/it | 53.9G / 9.2s/it  |
-| 14B        | Q-LoRA     | 18.3G / 5.4s/it | 18.4G / 6.4s/it | 18.5G / 8.5s/it | 19.9G / 12.4s/it |
+| Model Size | Method         | 256              | 512              | 1024             | 2048             | 4096             | 8192             |
+|------------|----------------|------------------|------------------|------------------|------------------|------------------|------------------|
+| 7B         | LoRA           | 20.1G / 1.2s/it  | 20.4G / 1.5s/it  | 21.5G / 2.8s/it  | 23.8G / 5.2s/it  | 29.7G / 10.1s/it | 36.6G / 21.3s/it |
+| 7B         | LoRA (emb)     | 33.7G / 1.4s/it  | 34.1G / 1.6s/it  | 35.2G / 2.9s/it  | 35.1G / 5.3s/it  | 39.2G / 10.3s/it | 48.5G / 21.7s/it |
+| 7B         | Q-LoRA         | 11.5G / 3.0s/it  | 11.5G / 3.0s/it  | 12.3G / 3.5s/it  | 13.9G / 7.0s/it  | 16.9G / 11.6s/it | 23.5G / 22.3s/it |
+| 7B         | Full-parameter | 139.2G / 4.0s/it | 148.0G / 4.0s/it | 162.0G / 4.5s/it | -                | -                | -                |
+| 14B        | LoRA           | 34.6G / 1.6s/it  | 35.1G / 2.4s/it  | 35.3G / 4.4s/it  | 37.4G / 8.4s/it  | 42.5G / 17.0s/it | 55.2G / 36.0s/it |
+| 14B        | LoRA (emb)     | 51.2G / 1.7s/it  | 51.1G / 2.6s/it  | 51.5G / 4.6s/it  | 54.1G / 8.6s/it  | 56.8G / 17.2s/it | 67.7G / 36.3s/it |
+| 14B        | Q-LoRA         | 18.7G / 5.3s/it  | 18.4G / 6.3s/it  | 18.9G / 8.2s/it  | 19.9G / 11.8s/it | 23.0G / 20.1s/it | 27.9G / 38.3s/it |
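The snippets added in all three READMEs stop at loading the local checkpoint with `.eval()`. As a supplementary illustration, a short usage sketch for a chat checkpoint downloaded through ModelScope is given below; `model.chat` is the multi-turn interface exposed by the Qwen chat models' remote code, and the prompts are placeholders.

```python
from modelscope import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same local checkpoint as in the README snippets above
model_dir = snapshot_download('qwen/Qwen-14B-Chat', revision='v1.0.4')
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="auto",
    trust_remote_code=True
).eval()

# Multi-turn chat via the interface provided by the model's remote code (chat checkpoints only)
response, history = model.chat(tokenizer, "Give me a short introduction to large language models.", history=None)
print(response)
response, history = model.chat(tokenizer, "Summarize that in one sentence.", history=history)
print(response)
```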