Chinese LLaMA & Alpaca Large Language Models + Local CPU/GPU Training and Deployment (Chinese LLaMA & Alpaca LLMs)
Stars: 18.8k · Forks: 1.9k · Issues: 1 · Contributors: 9 · Watchers: 184
Topics: llm, plm, pre-trained-language-models, alpaca, llama, nlp, quantization, large-language-models, lora, alpaca-2, llama-2
Language: Python
License: Apache License 2.0 (Apache-2.0)
Project Description
This project provides the Chinese LLaMA model and the Chinese Alpaca model, which is fine-tuned on instruction data. These models are based on the original LLaMA: they expand the vocabulary with Chinese tokens and are further pre-trained on Chinese data, enhancing the model's understanding of Chinese semantics. The project also fine-tunes on Chinese instruction data, significantly improving the model's ability to understand and follow instructions.
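The motivation for vocabulary expansion can be illustrated with a toy sketch (not the project's actual code): a tokenizer whose vocabulary lacks Chinese coverage must fall back to raw UTF-8 bytes, spending three byte tokens per Chinese character, while an expanded vocabulary covers the same text with far fewer tokens. The `tokenize` helper and both vocabularies below are hypothetical, for illustration only.

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization with a UTF-8 byte fallback,
    mimicking (very loosely) how SentencePiece-style tokenizers handle
    out-of-vocabulary characters."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary match starting at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # No match: fall back to raw UTF-8 byte tokens.
            tokens.extend(f"<0x{b:02X}>" for b in text[i].encode("utf-8"))
            i += 1
    return tokens

base_vocab = {"hello", " world"}                   # no Chinese coverage
expanded_vocab = base_vocab | {"你", "好", "你好"}  # Chinese tokens added

print(tokenize("你好", base_vocab))      # 6 byte-fallback tokens
print(tokenize("你好", expanded_vocab))  # 1 token: ['你好']
```

Fewer tokens per Chinese character means longer effective context and cheaper training and inference on Chinese text, which is why the project retrains LLaMA with an expanded Chinese vocabulary before instruction fine-tuning.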