Commit history for finetune.py (QwenLM/Qwen), newest first:

| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| 苏阳 | 73b34d4a9d | Fix bug of low_cpu_mem_usage in finetune.py | 1 year ago |
| 苏阳 | 23a01b0696 | Add Docker image for CUDA 12.1 | 1 year ago |
| Ren Xuancheng | a6c1ea82ba | Remove redundant code in finetune.py; fixes https://github.com/QwenLM/Qwen/issues/660 | 1 year ago |
| Ren Xuancheng | 8c7174e8c0 | Fix typo in finetune.py; fixes https://github.com/QwenLM/Qwen/issues/687 | 1 year ago |
| yangapku | e8e15962d8 | Add 72B and 1.8B Qwen models; add Ascend 910 and Hygon DCU support; add Docker support | 1 year ago |
| yangapku | 981c89b2a9 | Update finetune.py | 1 year ago |
| Junyang Lin | daf00604d9 | Update finetune.py | 1 year ago |
| songt | e46d65084a | Print PEFT trainable params | 1 year ago |
| Junyang Lin | 3e63f107fa | Update finetune.py | 1 year ago |
| 梦典 | f3d7c69be2 | Update finetune.py: only LoRA finetuning needs low_cpu_mem_usage | 1 year ago |
| 梦典 | 67b3f949b6 | Update finetune.py for lower CPU memory: 1. change default device_map to "auto"; 2. pass low_cpu_mem_usage=True when loading the model | 1 year ago |
| Junyang Lin | 00dceebbaa | Update finetune.py | 1 year ago |
| Junyang Lin | f7681f9e77 | Update finetune.py | 1 year ago |
| Junyang Lin | 75cc160f74 | Update finetune.py | 1 year ago |
| JustinLin610 | b5fad3d561 | Fix single-GPU QLoRA, and add profiling | 1 year ago |
| JustinLin610 | af22d5e0ce | Add finetuning | 1 year ago |
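The low-CPU-memory commits above (67b3f949b6, f3d7c69be2, 73b34d4a9d) together describe one pattern: pass `device_map="auto"` and `low_cpu_mem_usage=True` to `from_pretrained()`, but only on the LoRA path, since lazy weight loading is not compatible with sharded initialization such as DeepSpeed ZeRO-3 used for full finetuning. A minimal sketch of that kwargs-gating logic — the function name `model_load_kwargs` is illustrative, not the actual finetune.py code:

```python
def model_load_kwargs(use_lora: bool) -> dict:
    """Build Hugging Face from_pretrained() keyword arguments.

    low_cpu_mem_usage=True loads weights lazily instead of first
    materializing a full copy in CPU RAM, which is what the commit
    "Update finetune.py For lower cpu memory" enables. It is only
    applied on the (Q)LoRA path, because lazy loading conflicts with
    sharded (e.g. DeepSpeed ZeRO-3) initialization used for full
    finetuning -- the bug the later fix commits address.
    """
    kwargs = {"trust_remote_code": True}
    if use_lora:
        kwargs["device_map"] = "auto"        # let accelerate place shards
        kwargs["low_cpu_mem_usage"] = True   # skip the extra CPU-RAM copy
    return kwargs


# Usage (actual loading omitted -- it requires downloading the checkpoint):
# model = AutoModelForCausalLM.from_pretrained(
#     "Qwen/Qwen-7B", **model_load_kwargs(use_lora=True))
```

The point of returning a dict rather than hard-coding the call is that the same loader works for both training modes, with the memory-saving flags switched on only where they are safe.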