Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
21.2k Stars · 2.6k Forks · 624 Issues · 10.0k Contributors · 305 Watchers
Topics: nlp, pre-trained-model, unilm, minilm, layoutlm, layoutxlm, beit, document-ai, trocr, beit-3, foundation-models, xlm-e, deepnet, llm, multimodal, mllm, kosmos, kosmos-1, textdiffuser, bitnet
Python
{"name":"MIT License","spdxId":"MIT"}
Project Description
UniLM is a large-scale self-supervised pre-training project spanning tasks, languages, and modalities. Because its models are pre-trained with self-supervised objectives, they can be transferred to a wide range of downstream tasks and languages. The design goal of UniLM is to provide a unified pre-trained model that can handle various natural language processing tasks, such as machine translation, text summarization, and question answering.
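As a minimal sketch of this transfer workflow (assuming the Hugging Face transformers library and the microsoft/MiniLM-L12-H384-uncased checkpoint, one of the UniLM-family releases hosted on the Hub), loading a pre-trained model and producing representations for downstream fine-tuning might look like:

```python
# Minimal sketch: transferring a pre-trained UniLM-family model to a downstream
# task. Assumes the Hugging Face `transformers` library is installed and that the
# MiniLM checkpoint released from this repository is available on the Hub.
from transformers import AutoModel, AutoTokenizer

checkpoint = "microsoft/MiniLM-L12-H384-uncased"  # one of the UniLM-family releases
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Encode a sentence and obtain contextual representations that a task-specific
# head (classification, QA, summarization, ...) could be fine-tuned on.
inputs = tokenizer("UniLM models transfer across tasks and languages.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```

Other models from the repository (e.g. LayoutLM, BEiT, TrOCR) follow the same pattern with their task-appropriate processors and inputs.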