From fb52dd330837638afd65fb332c523d83c451966f Mon Sep 17 00:00:00 2001
From: Junyang Lin
Date: Wed, 13 Sep 2023 16:53:34 +0800
Subject: [PATCH] Update README.md
---
README.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/README.md b/README.md
index 757ab30..7f1ec7e 100644
--- a/README.md
+++ b/README.md
@@ -15,6 +15,10 @@
+__Will be back soon...__
+
+---
+
We open-source **Qwen-7B** and **Qwen-7B-Chat** on both **🤖 ModelScope** and **🤗 Hugging Face** (click the logos above to visit the repos with code and checkpoints). This repo includes a brief introduction to Qwen-7B, usage guidance, and a technical memo [link](tech_memo.md) that provides more information.
Qwen-7B is the 7B-parameter version of Qwen (abbr. Tongyi Qianwen), the large language model series proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. The features of the Qwen-7B series include: