DeepSeek
DeepSeek V3.1
DeepSeek-V3.1 is a hybrid model that supports both thinking and non-thinking modes. It is fine-tuned on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint using a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded the dataset by collecting additional long documents and substantially extending both training phases: the 32K extension phase was increased tenfold to 630 billion tokens, and the 128K extension phase was extended 3.3 times to 209 billion tokens.
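As a rough sketch of how the two modes might be selected, assuming an OpenAI-compatible chat-completions API where each mode is exposed as a separate model ID (the IDs `deepseek-chat` and `deepseek-reasoner` below are assumptions for illustration, not confirmed by this page), a request payload could be built like this:

```python
import json


def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload, switching modes via the model ID.

    Assumption: the gateway exposes DeepSeek-V3.1's non-thinking mode as
    "deepseek-chat" and its thinking mode as "deepseek-reasoner"; check
    your provider's model list for the actual identifiers.
    """
    model = "deepseek-reasoner" if thinking else "deepseek-chat"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


# Non-thinking mode: a plain chat request.
print(json.dumps(build_request("Hello", thinking=False)))
```

The same payload shape is then POSTed to the provider's chat-completions endpoint; only the model ID changes between modes.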
DeepSeek R1 0528
DeepSeek R1 0528 is the latest open-source model released by the DeepSeek team, with strong reasoning capabilities; it achieves performance comparable to OpenAI's o1 model on mathematics, coding, and reasoning tasks.
DeepSeek V3 0324
DeepSeek V3, a 685B-parameter mixture-of-experts model, is the latest iteration of the flagship chat model family from the DeepSeek team.