commit 6acf8f126f (branch: main)
parent f63c1d1026
JustinLin610 · 1 year ago

@@ -186,7 +186,7 @@ print(f'Response: {response}')
## Quantization
-We provide examples to show how to load models in `NF4` and `Int8`. For starters, make sure you have implemented `bitsandbytes`. Note that the requirements for `bitsandbytes` is:
+We provide examples to show how to load models in `NF4` and `Int8`. For starters, make sure you have installed `bitsandbytes`. Note that the requirements for `bitsandbytes` are:
```
**Requirements** Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0.
```
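In practice, installation is usually just `pip install bitsandbytes` on a supported Linux setup; Windows users need the alternative wheel discussed in the next hunk.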
@@ -197,7 +197,7 @@ Windows users should find another option, which might be [bitsandbytes-windows-w
Then you only need to add your quantization configuration to `AutoModelForCausalLM.from_pretrained`. See the example below:
```python
-from transformers import BitsAndBytesConfig
+from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# quantization configuration for NF4 (4 bits)
quantization_config = BitsAndBytesConfig(
```

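For context, a complete NF4 load might look like the following sketch. The `BitsAndBytesConfig` arguments and the model id are assumptions based on the standard transformers/bitsandbytes API, not this commit's exact code:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# quantization configuration for NF4 (4 bits)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits on load
    bnb_4bit_quant_type="nf4",              # use the NF4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # run compute in bf16
)

# the config is passed directly to from_pretrained, which applies it at load time
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",  # illustrative model id (assumption)
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)
```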
@@ -188,14 +188,12 @@ print(f'Response: {response}')
If you want to use quantized models at lower precision, such as 4-bit and 8-bit models, we provide simple examples to show how to use them quickly. Before getting started, make sure you have installed `bitsandbytes`. Note that the installation requirements for `bitsandbytes` are:
```
**Requirements** Python >=3.8. Linux distribution (Ubuntu, MacOS, etc.) + CUDA > 10.0.
```
Windows users need to install a specific version of `bitsandbytes`; options include [bitsandbytes-windows-webui](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels).
You only need to add your quantization configuration to `AutoModelForCausalLM.from_pretrained` to use the quantized model, as shown below:
```python

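# A rough sketch of the 8-bit (Int8) variant mentioned above, assuming the
# standard transformers BitsAndBytesConfig API rather than this commit's
# exact code; the model id is illustrative only.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# quantization configuration for Int8 (8 bits)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

# from_pretrained applies the quantization while the weights are loaded
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",  # illustrative model id (assumption)
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)
```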