diff --git a/README.md b/README.md
index a3e85f1..8eb79a1 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,7 @@ Now you can start with ModelScope or Transformers.
 
 #### 🤗 Transformers
 
-To use Qwen-7B-chat for the inference, all you need to do is to input a few lines of codes as demonstrated below:
+To use Qwen-7B-Chat for the inference, all you need to do is to input a few lines of codes as demonstrated below:
 
 ```python
 >>> from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -111,7 +111,7 @@ To use Qwen-7B-chat for the inference, all you need to do is to input a few line
 ```
 
-Running Qwen-7B is also simple.
+Running Qwen-7B pretrained base model is also simple.