Update README.md

main
Yang An 2 years ago committed by GitHub
parent 0e3cccda48
commit ea5532847c

@@ -59,7 +59,7 @@ Below, we provide simple examples to show how to use Qwen-7B with 🤖 ModelScop
 Before running the code, make sure you have setup the environment and installed the required packages. Make sure the pytorch version is higher than `1.12`, and then install the dependent libraries.
 ```bash
-pip install transformers==4.31.0 accelerate tiktoken einops
+pip install -r requirements.txt
 ```
 If your device supports fp16 or bf16, we recommend installing [flash-attention](https://github.com/Dao-AILab/flash-attention) for higher efficiency and lower memory usage. (**flash-attention is optional and the project can run normally without installing it**)
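The hunk above requires a PyTorch version higher than `1.12` before installing the dependencies. As a minimal sketch of such a version gate, assuming plain dotted version strings (`meets_minimum` is a hypothetical helper introduced here, not part of the Qwen repo):

```python
# Hedged sketch (not from the Qwen repo): check whether an installed
# PyTorch version string satisfies the README's minimum of 1.12.
# `meets_minimum` is a hypothetical name introduced for illustration.

def meets_minimum(version: str, minimum: str = "1.12") -> bool:
    """Compare dotted version strings numerically, e.g. '2.0.1' vs '1.12'."""
    def parse(v: str) -> list:
        # Drop local-version suffixes like '+cu117', keep numeric components.
        return [int(p) for p in v.split("+")[0].split(".") if p.isdigit()]
    return parse(version) >= parse(minimum)

# In practice the version would come from PyTorch itself:
#   import torch
#   assert meets_minimum(torch.__version__), "PyTorch >= 1.12 required"
```

Comparing the parsed integer lists avoids the classic string-comparison pitfall where `"1.9" > "1.12"`.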
@@ -81,9 +81,8 @@ To use Qwen-7B-Chat for the inference, all you need to do is to input a few line
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from transformers.generation import GenerationConfig
-# Note: our tokenizer rejects attacks and so that you cannot input special tokens like <|endoftext|> or it will throw an error.
-# To remove the strategy, you can add `allowed_special`, which accepts the string "all" or a `set` of special tokens.
-# For example: tokens = tokenizer(text, allowed_special="all")
+# Note: For tokenizer usage, please refer to examples/tokenizer_showcase.ipynb.
+# The default behavior now has injection attack prevention off.
 tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
 # We recommend checking the support of BF16 first. Run the command below:
 # import torch
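The comments in the hunk above recommend checking BF16 support before picking a precision. A minimal sketch of that decision, assuming the support flags are gathered separately (`pick_dtype` is a hypothetical helper for illustration, not part of the Qwen codebase):

```python
# Hedged sketch: choose a compute precision the way the README comments
# suggest -- prefer bf16 when supported, then fp16, else fall back to fp32.
# `pick_dtype` is a hypothetical helper introduced here for illustration.

def pick_dtype(bf16_supported: bool, fp16_supported: bool) -> str:
    if bf16_supported:
        return "bf16"   # wider dynamic range, preferred when available
    if fp16_supported:
        return "fp16"
    return "fp32"       # safe default on CPU-only or older GPUs

# On a real machine the flags would come from PyTorch, e.g.:
#   import torch
#   bf16_ok = torch.cuda.is_available() and torch.cuda.is_bf16_supported()
```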
@@ -276,5 +275,3 @@ Researchers and developers are free to use the codes and model weights of both Q
 If you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
