@ -693,6 +693,8 @@ model = AutoPeftModelForCausalLM.from_pretrained(
).eval()
```
> NOTE: If `peft>=0.8.0`, it will attempt to load the tokenizer as well; however, it is initialized without `trust_remote_code=True`, leading to `ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.` Currently, you can downgrade to `peft<0.8.0` or move the tokenizer files elsewhere to work around this issue.
If you want to merge the adapters and save the finetuned model as a standalone model (you can only do this with LoRA, and you CANNOT merge the parameters from Q-LoRA), you can run the following code:
@ -683,6 +683,9 @@ model = AutoPeftModelForCausalLM.from_pretrained(
).eval()
```
> NOTE: If `peft>=0.8.0`, loading the model will also attempt to load the tokenizer, but peft does not set `trust_remote_code=True` internally, leading to `ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.` To work around this issue, you can downgrade to `peft<0.8.0` or move the tokenizer files to another directory.