DeepSeek V3.1 Open Source Bulletin: Hugging Face Launches Weight with 128K Context


DeepSeek has published the V3.1-Base model weights to Hugging Face, where they can be downloaded and used directly. Many media outlets called this an "open source" release; however, the model card is currently empty and no license is marked, so the exact usage boundaries await official clarification. The online service has been upgraded to V3.1, with the context length reportedly expanded to 128K; the Web, App, and Mini Program versions are live, and the API remains call-compatible.
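Since the API is described as call-compatible, an existing OpenAI-style client should keep working. As a minimal sketch under that assumption, the request body can be built as follows; the endpoint URL and model name are placeholders to verify against DeepSeek's official API documentation before use:

```python
# Hypothetical endpoint and model name for illustration only; confirm the
# real values in DeepSeek's official API documentation before calling.
API_URL = "https://api.deepseek.com/chat/completions"  # assumed
MODEL_NAME = "deepseek-chat"  # assumed

def build_chat_payload(user_message: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completion request body; because the
    API is described as call-compatible, existing client code that posts
    this shape should continue to work after the V3.1 upgrade."""
    return {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

payload = build_chat_payload("Summarize the key obligations in this contract.")
```

The payload can then be POSTed with any HTTP client, authenticated with an API key from the official console.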


1. Key information

1. Release form: Hugging Face hosts the V3.1-Base weights in safetensors format; the page lists BF16 / F8_E4M3 / F32 tensor types, with several quantized variants available.

2. Parameter scale: the page displays "Model size: 685B params".

3. Context length: multiple reports and the official announcement state that the online model has been upgraded to a 128K context, with the API call method unchanged.

4. Architectural background: the V3 series uses a MoE (Mixture-of-Experts) design combined with DeepSeek's in-house MLA and related techniques; V3.1 builds on this base with engineering and context-length improvements (per public materials and media summaries).

5. Caveat: the Hugging Face model card currently lacks a description and a license field; check the license and terms before downloading or commercializing.
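Given the missing license field, a conservative compliance check can default to "not permitted" until the model card is updated. A minimal sketch (the allow-list is illustrative, not a legal determination):

```python
from typing import Optional

# Illustrative allow-list only; always read the actual license text and terms.
PERMISSIVE_LICENSES = {"apache-2.0", "mit", "bsd-3-clause"}

def commercial_use_allowed(license_id: Optional[str]) -> bool:
    """Conservative gate: if no license field is declared (as on the
    V3.1-Base page at the time of writing), assume commercial use is
    not permitted until the official model card is updated."""
    if not license_id:
        return False
    return license_id.strip().lower() in PERMISSIVE_LICENSES
```

With the license field currently absent, `commercial_use_allowed(None)` returns `False`, which is the safe default this bulletin recommends.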


2. Open-source addresses (official entry points)

1. Hugging Face · DeepSeek-V3.1-Base:

https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base

2. DeepSeek official website (product and API):

https://www.deepseek.com/

3. DeepSeek-V3 GitHub (architecture and paper background):

https://github.com/deepseek-ai/DeepSeek-V3


3. Availability and deployment suggestions

1. Download and format: prefer safetensors; choose the BF16 or FP8 (F8_E4M3) quantized variant to match your hardware.

2. Inference resources: a 685B-total-parameter MoE model requires substantial GPU memory or distributed inference; with limited resources, prefer quantized weights or cloud inference.

3. Context strategy: the 128K context suits long documents and large codebases; combine prompt engineering with retrieval-augmented generation (RAG) to avoid injecting irrelevant context.

4. Evaluation and staged rollout: run A/B tests on small-sample benchmarks (code, search, long-document summarization), set latency and cost thresholds, then expand to production.
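To make the resource question in item 2 concrete, here is a back-of-envelope, weights-only memory estimate; it is a sketch that ignores KV cache, activations, and runtime overhead, so real deployments need more:

```python
import math

def weight_memory_gb(total_params_billions: float, bytes_per_param: float) -> float:
    """Weights-only footprint in GB (1 GB = 1e9 bytes); excludes KV cache,
    activations, and framework overhead."""
    return total_params_billions * bytes_per_param  # 1e9 params * bytes / 1e9

# 685B total parameters, per the Hugging Face page:
bf16_gb = weight_memory_gb(685, 2.0)  # BF16: 1370.0 GB
fp8_gb = weight_memory_gb(685, 1.0)   # FP8 (F8_E4M3): 685.0 GB

# Minimum number of 80 GB accelerators just to hold the BF16 weights:
min_gpus_bf16 = math.ceil(bf16_gb / 80)  # 18
```

Even in FP8, the weights alone fill several high-end accelerators, which is why the text recommends quantized weights or cloud inference when resources are limited.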


4. Typical application scenarios

1. Long-document understanding and compliance summarization: load contracts, annual reports, or technical white papers into context in one pass, with segmented reasoning.

2. Code agents: reading, writing, and refactoring large codebases, combined with tool calls and test frameworks.

3. Enterprise search and knowledge assistants: combined with vector retrieval/RAG, the longer context supports cross-corpus summarization and evidence-chain answers.
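The retrieval step in scenario 3 can be sketched minimally as follows; a production system would use embeddings and a vector index, but crude keyword overlap shows the shape of selecting which chunks to inject into the 128K context:

```python
def overlap_score(query: str, chunk: str) -> float:
    """Crude keyword-overlap relevance score; real systems would use
    vector embeddings and an approximate-nearest-neighbor index."""
    q_words = set(query.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / max(len(q_words), 1)

def retrieve(query: str, chunks: list, top_k: int = 2) -> list:
    """Return the top_k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: overlap_score(query, c), reverse=True)[:top_k]

contract_chunks = [
    "The warranty period is twelve months from the date of delivery.",
    "Payment is due within thirty days of receiving the invoice.",
    "The vendor provides support during the warranty period at no charge.",
]
top = retrieve("what is the warranty period", contract_chunks)
```

Only the retrieved chunks, rather than the whole corpus, are then placed in the prompt, which is the "reduce invalid context injection" point made in the deployment section.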


5. Risks and boundaries

1. Unclear license: no license field is currently declared; treat commercial use as not permitted by default until the official model card and license are published.

2. Compute and cost: MoE-scale models still have significant memory and bandwidth requirements; evaluate TCO and throughput before deciding the scale of deployment.

3. Data compliance: long contexts easily carry sensitive data; apply desensitization, classification, and access control, and configure logging and retention-erasure policies.
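The desensitization step above can be sketched with simple pattern-based redaction; the patterns here are illustrative placeholders, and production redaction needs a vetted PII toolkit plus the classification and access-control policies the text mentions:

```python
import re

# Illustrative patterns only; real PII coverage (IDs, addresses, names)
# requires a dedicated, vetted redaction library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-\s]?\d{3,4}[-\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings before a document enters a long context."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

safe_text = redact("Contact alice@example.com or call 555-123-4567.")
```

Running redaction before context injection keeps raw identifiers out of prompts, logs, and any provider-side retention.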


6. FAQ

1. Is V3.1 really "open source"?

The weights can currently be downloaded from Hugging Face, i.e. it is released as "open weights"; however, the model card is empty and no license is marked, so the precise open-source/commercial boundary is subject to the official license.

2. Where can I experience the online version and call the API?

DeepSeek's official website provides the Web, App, and Mini Program versions as well as the API; the official announcement states the service has been upgraded to V3.1 and the API remains compatible.

3. What are the main differences from V3?

Public information highlights a longer context (up to 128K) and engineering optimizations with speed improvements; the underlying architecture still follows the V3 line, and detailed training and evaluation data await the official model card.

4. How to try it if there are not enough resources?

Prefer quantized weights and cloud inference; for offline deployment, run a small-sample evaluation first before deciding whether to invest in distributed inference and high-end GPUs.
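The "evaluate first, then invest" gate described here and in the deployment section can be sketched as a simple threshold check; the threshold values are placeholders to tune per workload:

```python
def promote_to_production(pass_rate: float, p95_latency_s: float,
                          cost_per_1k_tokens: float) -> bool:
    """Promote a candidate model only if it clears all thresholds.
    The numbers below are illustrative placeholders, not recommendations."""
    MIN_PASS_RATE = 0.85
    MAX_P95_LATENCY_S = 8.0
    MAX_COST_PER_1K = 0.05
    return (pass_rate >= MIN_PASS_RATE
            and p95_latency_s <= MAX_P95_LATENCY_S
            and cost_per_1k_tokens <= MAX_COST_PER_1K)

# Small-sample benchmark: 4 of 5 tasks passed.
outcomes = [True, True, True, False, True]
rate = sum(outcomes) / len(outcomes)  # 0.8
decision = promote_to_production(rate, p95_latency_s=5.0, cost_per_1k_tokens=0.03)
```

Here the candidate fails the quality threshold (0.8 < 0.85) despite acceptable latency and cost, so the investment in distributed inference would be deferred.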
