
Detailed tutorial on local deployment of DeepSeek model

With the rapid development of artificial intelligence technology, demand for local deployment of large models is growing. DeepSeek, a powerful open-source large language model, supports flexible local deployment, allowing users to run it efficiently on their own hardware while keeping their data private. The following is a detailed walkthrough of the DeepSeek local deployment process.

1. Environment preparation

(I) Hardware requirements

  • Minimum configuration: CPU (supports AVX2 instruction set) + 16GB memory + 30GB storage.
  • Recommended configuration: NVIDIA GPU (RTX 3090 or higher) + 32GB memory + 50GB storage.

(II) Software dependencies

  • Operating system: Windows, macOS, or Linux.
  • Docker: If you use the Open Web UI, you need to install Docker.

2. Install Ollama

Ollama is an open-source tool that makes it easy to run and deploy large language models locally. Here are the steps to install it:

  1. Visit the Ollama official website: Go to Ollama's official website and click the "Download" button.
  2. Download the installation package: Select the installation package for your operating system. After the download completes, double-click the installer and follow the prompts to finish the installation.
  3. Verify the installation: Once installed, enter the following command in the terminal to check the Ollama version:

    ollama --version

    If a version number is output (e.g. ollama version is 0.5.6), the installation was successful.
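
Optionally, you can also confirm the CLI is working by listing the models installed locally; on a fresh install the list will be empty:

ollama list  # lists locally downloaded models; empty until a model is pulled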

3. Download and deploy the DeepSeek model

Ollama supports multiple DeepSeek model versions, and users can choose the appropriate model according to the hardware configuration. Here are the deployment steps:

Select a model version:

  • Entry level: Version 1.5B, suitable for preliminary testing.
  • Mid-range: 7B or 8B version, suitable for most consumer-grade GPUs.
  • High performance: 14B, 32B or 70B version, suitable for high-end GPUs.

Download the model:

Open a terminal and enter the following command to download and run the DeepSeek model. For example, the command to download the 7B version is:

ollama run deepseek-r1:7b

If you need to download another version, you can refer to the following command:

ollama run deepseek-r1:8b   # 8B version
ollama run deepseek-r1:14b  # 14B version
ollama run deepseek-r1:32b  # 32B version
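
Note that ollama run drops into an interactive chat session once the model is ready (type /bye to exit). For a quick non-interactive test, a prompt can also be passed directly on the command line, for example with the 7B model downloaded above:

ollama run deepseek-r1:7b "Summarize the benefits of local model deployment in one sentence."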

Start the Ollama service:

Run the following command in the terminal to start the Ollama service:

ollama serve

After the service starts, you can access http://localhost:11434 to interact with the model.
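
For example (a minimal sketch, assuming the 7B model pulled above), a completion can be requested from the service's HTTP API with curl:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'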

4. Use Open Web UI (optional)

To interact with the DeepSeek model more intuitively, you can use the Open Web UI. Here are the steps to install and use it:

  1. Install Docker: Make sure Docker is installed on your machine.
  2. Run Open Web UI: Run the following command in the terminal to install and start the Open Web UI:

    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui \
      --restart always \
      ghcr.io/open-webui/open-webui:main
    

After the installation is complete, access http://localhost:3000 and select the deepseek-r1:latest model to start using it.
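
If the page does not load, two standard Docker commands help with troubleshooting, one to confirm the container is running and one to inspect its logs:

docker ps               # the open-webui container should appear with status "Up"
docker logs open-webui  # check the startup logs for errors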

5. Performance optimization and resource management

  • Resource allocation: Select the appropriate model version according to the hardware configuration. Smaller models (such as 1.5B to 14B) perform well on standard hardware, while larger models (such as 32B and 70B) require more powerful GPU support.
  • Memory management: Make sure the system has enough memory and storage space to avoid running out of resources at runtime (a quick check is sketched below).
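
For that quick check, Ollama can report which models are currently loaded and how much memory they occupy:

ollama ps  # shows running models, their size, and whether they run on CPU or GPU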

6. Frequently Asked Questions and Solutions

  • Model download timeout: If the model download times out, try rerunning the download command.
  • Service startup failure: Make sure Ollama is installed and started correctly. If the service fails to start, try restarting it.
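
As a quick health check (a minimal sketch; the restart command assumes a Linux install managed by systemd), you can probe the service's root endpoint and restart it if needed:

curl http://localhost:11434    # prints "Ollama is running" when the service is healthy
sudo systemctl restart ollama  # restarts the service on systemd-based Linux installs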

7. Summary

Through the above steps, you can successfully deploy the DeepSeek model locally and interact with it through Ollama or the Open Web UI. Local deployment not only protects data privacy, but also lets you flexibly adjust the model to your requirements across different scenarios. If you encounter any problems during deployment, leave a message in the comment area and we will work through them together.

I hope this tutorial can help you deploy the DeepSeek model smoothly and embark on a new journey of efficient development!
