From 6acf8f126f7a122ca12aa967a2e09ef1f299979f Mon Sep 17 00:00:00 2001
From: JustinLin610
Date: Sat, 5 Aug 2023 23:59:59 +0800
Subject: [PATCH] fix typo

---
 README.md    | 4 ++--
 README_CN.md | 2 --
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 2d42a77..e68a037 100644
--- a/README.md
+++ b/README.md
@@ -186,7 +186,7 @@ print(f'Response: {response}')
 
 ## Quantization
 
-We provide examples to show how to load models in `NF4` and `Int8`. For starters, make sure you have implemented `bitsandbytes`. Note that the requirements for `bitsandbytes` is:
+We provide examples to show how to load models in `NF4` and `Int8`. For starters, make sure you have installed `bitsandbytes`. Note that the requirements for `bitsandbytes` are:
 
 ```
 **Requirements** Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0.
@@ -197,7 +197,7 @@ Windows users should find another option, which might be [bitsandbytes-windows-w
 Then you only need to add your quantization configuration to `AutoModelForCausalLM.from_pretrained`. See the example below:
 
 ```python
-from transformers import BitsAndBytesConfig
+from transformers import AutoModelForCausalLM, BitsAndBytesConfig
 
 # quantization configuration for NF4 (4 bits)
 quantization_config = BitsAndBytesConfig(
diff --git a/README_CN.md b/README_CN.md
index 01b4083..51d0c4e 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -188,14 +188,12 @@ print(f'Response: {response}')
 
 If you wish to use lower-precision quantized models, such as 4-bit and 8-bit models, we provide simple examples showing how to use them quickly. Before you start, make sure you have installed `bitsandbytes`. Note that the installation requirements for `bitsandbytes` are:
 
-
 ```
 **Requirements** Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0.
 ```
 
 Windows users should install a dedicated build of `bitsandbytes`; options include [bitsandbytes-windows-webui](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels).
 
-
 You only need to add your quantization configuration to `AutoModelForCausalLM.from_pretrained` to use the quantized model, as shown below:
 
 ```python
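
For context, below is a minimal end-to-end sketch of the NF4 usage the patched README describes, combining the corrected import with a 4-bit `BitsAndBytesConfig`. The checkpoint name, compute dtype, and the `device_map`/`trust_remote_code` arguments are assumptions for illustration, not part of the patch:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# quantization configuration for NF4 (4 bits)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed dtype; float16 also works
)

# Load the model with the quantization config applied at load time.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",  # hypothetical checkpoint, for illustration only
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)
```

For the `Int8` path the README also mentions, a config such as `BitsAndBytesConfig(load_in_8bit=True)` would be passed the same way.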