Merge branch 'main' of github.com:QwenLM/Qwen-7B

yangapku 1 year ago
commit fb3180d8f0

@@ -15,6 +15,10 @@
</p>
<br><br>
__Will be back soon...__
---
We open-source **Qwen-7B** and **Qwen-7B-Chat** on both **🤖 ModelScope** and **🤗 Hugging Face** (click the logos on top to go to the repos with code and checkpoints). This repo includes a brief introduction to Qwen-7B, usage guidance, and a technical memo ([link](tech_memo.md)) that provides more information.
Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. The features of the Qwen-7B series include:
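For reference, a minimal sketch of loading the released chat checkpoint from Hugging Face (assuming the `Qwen/Qwen-7B-Chat` repo id and the `chat()` helper shipped with the model's remote code; `trust_remote_code=True` is needed for Qwen's custom model classes):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub.
# trust_remote_code=True pulls in Qwen's custom modeling code.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    device_map="auto",
    trust_remote_code=True,
).eval()

# chat() is a helper defined in the remote modeling code; it keeps
# multi-turn history as a list of (query, response) pairs.
response, history = model.chat(tokenizer, "Hello, who are you?", history=None)
print(response)
```

The base Qwen-7B checkpoint should load the same way and can be used with the standard `generate()` API instead of `chat()`.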

@@ -44,4 +44,5 @@ torchrun $DISTRIBUTED_ARGS finetune.py \
--model_max_length 2048 \
--lazy_preprocess True \
--use_lora \
--gradient_checkpointing \
--deepspeed finetune/ds_config_zero2.json

@@ -32,4 +32,5 @@ python finetune.py \
--report_to "none" \
--model_max_length 2048 \
--lazy_preprocess True \
--gradient_checkpointing \
--use_lora

@@ -46,4 +46,5 @@ torchrun $DISTRIBUTED_ARGS finetune.py \
--lazy_preprocess True \
--use_lora \
--q_lora \
--gradient_checkpointing \
--deepspeed finetune/ds_config_zero2.json

@@ -32,5 +32,6 @@ python finetune.py \
--report_to "none" \
--model_max_length 2048 \
--lazy_preprocess True \
--gradient_checkpointing \
--use_lora \
--q_lora
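The line added in each of these hunks, `--gradient_checkpointing`, trades compute for memory: activations are recomputed during the backward pass instead of being stored. In transformers-based training code this roughly corresponds to the sketch below (a minimal illustration only; the exact wiring inside finetune.py is an assumption here, not shown by this diff):

```python
from transformers import AutoModelForCausalLM

# Minimal illustration of enabling gradient checkpointing with the
# standard transformers API (not the actual finetune.py code).
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B",            # hypothetical checkpoint id for this sketch
    trust_remote_code=True,
)

# Recompute activations in the backward pass instead of caching them,
# cutting activation memory at the cost of extra forward compute.
model.gradient_checkpointing_enable()

# When the base weights are frozen (LoRA/Q-LoRA), the embedding outputs
# must require grads so the checkpointed blocks still have a gradient path.
model.enable_input_require_grads()
```

With `--use_lora` or `--q_lora` the base weights stay frozen, so this flag mainly reduces activation memory; the DeepSpeed ZeRO-2 config referenced above additionally shards optimizer states and gradients across ranks.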