From f8b79b4cfa7f1c315f1103379956897b45e79b1f Mon Sep 17 00:00:00 2001
From: Yang An
Date: Thu, 3 Aug 2023 17:50:07 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index a3e85f1..8eb79a1 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,7 @@ Now you can start with ModelScope or Transformers.
 
 #### 🤗 Transformers
 
-To use Qwen-7B-chat for the inference, all you need to do is to input a few lines of codes as demonstrated below:
+To use Qwen-7B-Chat for the inference, all you need to do is to input a few lines of codes as demonstrated below:
 
 ```python
 >>> from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -111,7 +111,7 @@ To use Qwen-7B-chat for the inference, all you need to do is to input a few line
 ```
 
-Running Qwen-7B is also simple.
+Running Qwen-7B pretrained base model is also simple.
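The hunk above only shows the opening import line of the README's Python example; the rest of the snippet falls outside the diff context. As a rough illustration of the Transformers usage the edited paragraph refers to, here is a minimal sketch (not part of the patch), assuming the `Qwen/Qwen-7B-Chat` checkpoint on the Hugging Face Hub and its remote-code `chat()` helper:

```python
# Minimal inference sketch for Qwen-7B-Chat with Hugging Face Transformers.
# Assumes the "Qwen/Qwen-7B-Chat" checkpoint and its trust_remote_code chat()
# helper; the exact snippet in the README being patched may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="auto",
    trust_remote_code=True,
).eval()

# chat() returns the model's reply and the updated dialogue history,
# which can be passed back in for multi-turn conversations.
response, history = model.chat(tokenizer, "Hello, please introduce yourself.", history=None)
print(response)
```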