From ea5532847c90c73dfacb967b7bbb1a9ece0cc8bf Mon Sep 17 00:00:00 2001
From: Yang An
Date: Sat, 5 Aug 2023 21:59:47 +0800
Subject: [PATCH] Update README.md

---
 README.md | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 85f2b5e..75fc6d2 100644
--- a/README.md
+++ b/README.md
@@ -59,7 +59,7 @@ Below, we provide simple examples to show how to use Qwen-7B with 🤖 ModelScop
 Before running the code, make sure you have setup the environment and installed the required packages. Make sure the pytorch version is higher than `1.12`, and then install the dependent libraries.
 
 ```bash
-pip install transformers==4.31.0 accelerate tiktoken einops
+pip install -r requirements.txt
 ```
 
 If your device supports fp16 or bf16, we recommend installing [flash-attention](https://github.com/Dao-AILab/flash-attention) for higher efficiency and lower memory usage. (**flash-attention is optional and the project can run normally without installing it**)
@@ -81,9 +81,8 @@ To use Qwen-7B-Chat for the inference, all you need to do is to input a few line
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from transformers.generation import GenerationConfig
 
-# Note: our tokenizer rejects attacks and so that you cannot input special tokens like <|endoftext|> or it will throw an error.
-# To remove the strategy, you can add `allowed_special`, which accepts the string "all" or a `set` of special tokens.
-# For example: tokens = tokenizer(text, allowed_special="all")
+# Note: For tokenizer usage, please refer to examples/tokenizer_showcase.ipynb.
+# The default behavior now has injection attack prevention off.
 tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
 # We recommend checking the support of BF16 first. Run the command below:
 # import torch
@@ -276,5 +275,3 @@ Researchers and developers are free to use the codes and model weights of both Q
 
 If you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
 
-
-
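For context on the comment lines this patch edits, below is a minimal sketch (not part of the patch) of loading the Qwen-7B-Chat tokenizer and passing the `allowed_special` argument that the removed comment documented. It assumes the dependencies from `requirements.txt` are installed, the `Qwen/Qwen-7B-Chat` checkpoint is reachable, and that `allowed_special` still accepts the string "all" or a set of special tokens, as the original README stated; see `examples/tokenizer_showcase.ipynb` for the authoritative usage.

```python
# Sketch only, not part of the patch: illustrates the tokenizer behavior the edited
# README comments refer to. Assumes the Qwen/Qwen-7B-Chat checkpoint is available and
# that `allowed_special` works as the removed comment described.
from transformers import AutoTokenizer

# trust_remote_code=True is required because Qwen ships a custom tokenizer implementation.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

text = "Hello, Qwen! <|endoftext|>"

# With the patch's new default (injection attack prevention off), text containing
# special tokens such as <|endoftext|> should tokenize without raising an error.
tokens = tokenizer(text)

# The removed comment documented `allowed_special`, which accepts "all" or a set of
# special tokens, for explicitly permitting them when the stricter behavior is enabled.
tokens_all = tokenizer(text, allowed_special="all")

print(tokens["input_ids"])
print(tokens_all["input_ids"])
```

With the new default described in the added comment, the explicit `allowed_special` call should no longer be required just to pass text that contains special tokens, though it remains available for opting back into the stricter behavior.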