| Author | Commit | Message | Date |
|---|---|---|---|
| feihu.hf | ea86f6136a | add run gptq | 1 year ago |
| 苏阳 | 65c73034c3 | Fix Dockerfile for <https://github.com/QwenLM/Qwen/issues/783> | 1 year ago |
| Ren Xuancheng | 0a5b3b6b9a | Merge pull request #840 from JianxinMa/main (update readme) | 1 year ago |
| 兼欣 | 508acdeb88 | add openai version requirement (openai<1.0) | 1 year ago |
| Yang An | 5aa84bdfd3 | Merge pull request #804 from QwenLM/update_readme_for_vllm_gptq (update readme for vllm-gptq) | 1 year ago |
| feihu.hf | b7eb73d6ec | update readme for vllm-gptq | 1 year ago |
| yangapku | 07602f7166 | update wechat | 1 year ago |
| Yang An | 524a7ff278 | Merge pull request #742 from JianxinMa/main (update agent benchmarks and add qwen-72b results) | 1 year ago |
| 兼欣 | cadc4c7d1a | fix typo | 1 year ago |
| 兼欣 | 7eb9016908 | update agent benchmarks and add qwen-72b results | 1 year ago |
| Yang An | a0a557aad8 | Merge pull request #692 from QwenLM/jklj077-patch-1 (Remove redundant code in finetune.py) | 1 year ago |
| yangapku | c4fdd89d20 | update README | 1 year ago |
| Ren Xuancheng | a6c1ea82ba | Remove redundant code in finetune.py (fixes <https://github.com/QwenLM/Qwen/issues/660>) | 1 year ago |
| Yang An | 8ec779a83e | Merge pull request #690 from QwenLM/jklj077-patch-1 (fix typo in finetune.py) | 1 year ago |
| Ren Xuancheng | 8c7174e8c0 | fix typo in finetune.py (fixes <https://github.com/QwenLM/Qwen/issues/687>) | 1 year ago |
| yangapku | 08d5c79293 | update README_CN.md | 1 year ago |
| yangapku | b1d80a9385 | add 72B and 1.8B Qwen models, add Ascend 910 and Hygon DCU support, add docker support | 1 year ago |
| yangapku | e8e15962d8 | add 72B and 1.8B Qwen models, add Ascend 910 and Hygon DCU support, add docker support | 1 year ago |
| yangapku | 981c89b2a9 | update finetune.py | 1 year ago |
| Yang An | 91675cbcd4 | Merge pull request #632 from JianxinMa/main (openai_api.py: compatible with both pydantic v1 and v2) | 1 year ago |
| 兼欣 | 2408eea7d6 | openai_api.py: compatible with both pydantic v1 and v2 | 1 year ago |
| yangapku | dcfc400881 | update tokenization_note.md | 1 year ago |
| yangapku | 42bd2fa694 | update wechat | 1 year ago |
| Yang An | 99cacff46a | Merge pull request #583 from QwenLM/update_readme_1106 (add modelscope links for int8 models) | 1 year ago |
| lukeming.lkm | 845dc08474 | add modelscope links for int8 models | 1 year ago |
| yangapku | c00209f932 | update evaluate scripts | 1 year ago |
| yangapku | 83368388aa | update openai_api.py | 1 year ago |
| Junyang Lin | daf00604d9 | Update finetune.py | 1 year ago |
| Junyang Lin | 10c3ceee39 | Merge pull request #507 from songt96/feature/songt (print peft trainable params) | 1 year ago |
| Junyang Lin | e6ecfb2294 | Update README_FR.md | 1 year ago |
| Junyang Lin | 6cf80d2c11 | Update README_JA.md | 1 year ago |
| Junyang Lin | 8effcb2a76 | Update README_CN.md | 1 year ago |
| Junyang Lin | d082c2c926 | Update README.md | 1 year ago |
| songt | e46d65084a | print peft trainable params | 1 year ago |
| Junyang Lin | 3e63f107fa | Update finetune.py | 1 year ago |
| Junyang Lin | 8c5beaba8d | Merge pull request #498 from QwenLM/1019_update_readme (update readme) | 1 year ago |
| JustinLin610 | c908968cea | update readme | 1 year ago |
| Junyang Lin | 07de511766 | Merge pull request #476 from dlutsniper/main (Update finetune.py for lower CPU memory) | 1 year ago |
| 梦典 | f3d7c69be2 | Update finetune.py (only LoRA fine-tuning needs low CPU memory usage) | 1 year ago |
| Junyang Lin | 3885918ab1 | Merge pull request #485 from QwenLM/1017_readme (1017 readme) | 1 year ago |
| JustinLin610 | 2a58d35ebd | Merge remote-tracking branch 'origin/main' into 1017_readme | 1 year ago |
| JustinLin610 | 899bc5bb98 | update news | 1 year ago |
| Junyang Lin | 2e0612cec3 | Merge pull request #484 from QwenLM/1017_readme (add french readme) | 1 year ago |
| JustinLin610 | e6d8deb975 | add french readme | 1 year ago |
| yangapku | 93963f8d1f | add result of int8 models | 1 year ago |
| Junyang Lin | e3a7c5ecc7 | Merge pull request #477 from QwenLM/1016_readme (update readme) | 1 year ago |
| JustinLin610 | 235aa8f71e | update readme | 1 year ago |
| 梦典 | 67b3f949b6 | Update finetune.py for lower CPU memory (1. change the default to `device_map="auto"`; 2. add `low_cpu_mem_usage=True` when loading the model) | 1 year ago |
| yangapku | 78352b5a79 | update readme about batch inference | 1 year ago |
| Yang An | 9d1d0be363 | Update requirements_web_demo.txt | 1 year ago |
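
Commit 67b3f949b6 above describes reducing host RAM use during LoRA fine-tuning by loading the model with `device_map="auto"` and `low_cpu_mem_usage=True`. A minimal sketch of those loading arguments, assuming the Hugging Face `transformers` `AutoModelForCausalLM.from_pretrained` API and `"Qwen/Qwen-7B"` as a hypothetical model id:

```python
# Sketch of the loading arguments from commit 67b3f949b6 (not the exact
# finetune.py change). Model id and trust_remote_code are assumptions.
load_kwargs = dict(
    device_map="auto",       # let accelerate spread weights over available devices
    low_cpu_mem_usage=True,  # stream weights in instead of building a full copy in RAM
    trust_remote_code=True,  # Qwen checkpoints ship custom modeling code
)

# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", **load_kwargs)
```

The actual model load is left commented out because it downloads multi-gigabyte weights; the dictionary shows only the two parameters the commit introduces.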