From d40742b004e9ffecd3f60fb5f9597f7f0dd3c4ad Mon Sep 17 00:00:00 2001
From: Ren Xuancheng
Date: Thu, 1 Feb 2024 20:41:31 +0800
Subject: [PATCH 1/2] Update README.md

Added a note due to the recent peft update.
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 72bd59d..4a381c8 100644
--- a/README.md
+++ b/README.md
@@ -693,6 +693,8 @@ model = AutoPeftModelForCausalLM.from_pretrained(
 ).eval()
 ```
 
+> NOTE: With `peft>=0.8.0`, loading the model also attempts to load the tokenizer, but the tokenizer is initialized without `trust_remote_code=True`, which raises `ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.` To work around this, downgrade to `peft<0.8.0` or move the tokenizer files to another directory.
+
 If you want to merge the adapters and save the finetuned model as a standalone model (you can only do this with LoRA, and you CANNOT merge the parameters from Q-LoRA), you can run the following codes:
 
 ```python

From a66e21d56f6e18dbf9647af64af81734e928a716 Mon Sep 17 00:00:00 2001
From: Ren Xuancheng
Date: Thu, 1 Feb 2024 20:46:17 +0800
Subject: [PATCH 2/2] Update README_CN.md

---
 README_CN.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README_CN.md b/README_CN.md
index a814e53..cfade96 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -683,6 +683,9 @@ model = AutoPeftModelForCausalLM.from_pretrained(
 ).eval()
 ```
 
+> NOTE: With `peft>=0.8.0`, loading the model also attempts to load the tokenizer, but peft does not set `trust_remote_code=True` internally, which raises `ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.` To avoid this, downgrade to `peft<0.8.0` or move the tokenizer files to another directory.
+
+
 If you find this one-step approach unreliable, or if it gets in the way of downstream applications, you can instead merge and save the model first (LoRA supports merging; Q-LoRA does not) and then load the new model in the usual way, for example:
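
For context, a minimal sketch of the downgrade workaround described in the notes above. The adapter directory name `output_qwen` is illustrative, not taken from the patch; the premise is that with `peft<0.8.0` the model load does not implicitly load the tokenizer, so loading the tokenizer explicitly with `trust_remote_code=True` avoids the `QWenTokenizer` error:

```python
# Sketch of the workaround, assuming: pip install "peft<0.8.0"
# "output_qwen" is an illustrative adapter checkpoint directory name.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# With peft<0.8.0, from_pretrained does not try to load the tokenizer itself,
# so the custom QWenTokenizer class is never instantiated without
# trust_remote_code, and the ValueError does not occur.
model = AutoPeftModelForCausalLM.from_pretrained(
    "output_qwen",            # path to the finetuned adapter checkpoint
    device_map="auto",
    trust_remote_code=True,
).eval()

# Load the tokenizer explicitly so trust_remote_code is honored.
tokenizer = AutoTokenizer.from_pretrained("output_qwen", trust_remote_code=True)
```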