Merge pull request #335 from QwenLM/qwen_cpp

qwen.cpp link
Iurnem 1 year ago committed by GitHub
commit 4b124bfcba

@@ -467,6 +467,8 @@ model = load_model_on_gpus('Qwen/Qwen-7B-Chat', num_gpus=2)
Then you can run the 7B chat model on 2 GPUs using the above scripts.
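For example, a minimal sketch of the multi-GPU path (assuming `load_model_on_gpus` is imported from this repository's `utils.py` and that the usual `AutoTokenizer`/`model.chat` interface applies):

```python
from transformers import AutoTokenizer
from utils import load_model_on_gpus  # helper shipped with this repository (assumed import path)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)

# Shard the 7B chat model across 2 GPUs instead of loading it on a single device.
model = load_model_on_gpus("Qwen/Qwen-7B-Chat", num_gpus=2)

# Chat as usual; the sharded model is used the same way as the single-GPU one.
response, history = model.chat(tokenizer, "Hi, please introduce yourself.", history=None)
print(response)
```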
<br><br>
We also provide a pure C++ implementation of Qwen-LM and tiktoken; see [qwen.cpp](https://github.com/QwenLM/qwen.cpp) for details.
## Tool Usage
Qwen-Chat has been optimized for tool usage and function calling capabilities. Users can develop agents, LangChain applications, and even augment Qwen with a Python Code Interpreter.
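As a rough illustration of the ReAct-style prompting that tool usage typically builds on, here is a sketch continuing from the `model`/`tokenizer` loaded above. The prompt template, the `web_search` tool, and the question are illustrative assumptions, not taken from this repository's examples:

```python
# Hypothetical tool description; in practice this is generated from your tool registry.
TOOL_DESC = (
    'web_search: Call this tool to search the web. '
    'Parameters: [{"name": "query", "type": "string"}]'
)

# Illustrative ReAct-style prompt asking the model to decide on a tool call.
prompt = f"""Answer the following questions as best you can. You have access to the following tools:

{TOOL_DESC}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [web_search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Question: Who won the 2022 FIFA World Cup?"""

# In a real agent loop you would stop generation at "Observation:", run the tool,
# append its output to the prompt, and call the model again until a Final Answer appears.
response, _ = model.chat(tokenizer, prompt, history=None)
print(response)
```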

@@ -456,6 +456,8 @@ model = load_model_on_gpus('Qwen/Qwen-7B-Chat', num_gpus=2)
Then you can run inference on 2 GPUs.
<br><br>
We also provide a C++ implementation of Qwen-LM and tiktoken; see [qwen.cpp](https://github.com/QwenLM/qwen.cpp) for details.
## Tool Usage
Qwen-Chat has been optimized for tool usage and function calling capabilities. Users can develop Qwen-based agents, LangChain applications, and even a Code Interpreter.
