In this article, we will walk through several common errors you may encounter when deploying DeepSeek locally through Ollama and provide solutions. The issues covered include slow model download speeds, connection errors, a downloaded model not appearing in ChatBox, and how to change the model download location.
1. The model download speed is too slow. How can this be solved?
Many users have reported that model downloads through Ollama are very slow and sometimes fail to complete at all. Here is an effective solution:
Solution:
Cancel the download with Ctrl+C and restart it: Some users have reported that canceling the download with Ctrl+C and then starting it again makes the download noticeably faster. This is because, in some cases, reconnecting may bypass network congestion or be routed through a faster download path.
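For example, assuming you are pulling a DeepSeek model (the deepseek-r1:7b tag here is illustrative; substitute the tag you are actually downloading), the cancel-and-retry cycle looks like this:
ollama pull deepseek-r1:7b
# speed drops... press Ctrl+C to cancel
ollama pull deepseek-r1:7b
# re-running the pull reconnects and typically resumes from the already-downloaded layers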
In addition, you can try the following methods to increase the download speed:
- Switch network environments: Slow downloads are sometimes caused by network problems; try switching to a more stable network, or use a network accelerator.
- Use a proxy server: You can configure a proxy for Ollama to help bypass local network restrictions, as shown in the sketch after this list.
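As a minimal sketch, assuming a local HTTP proxy is already listening at 127.0.0.1:7890 (substitute your own proxy address), Ollama honors the standard HTTPS_PROXY environment variable for downloads:
# macOS/Linux: export the proxy, then restart the Ollama server in the same shell
export HTTPS_PROXY=http://127.0.0.1:7890
ollama serve
# Windows (PowerShell):
#   $env:HTTPS_PROXY = "http://127.0.0.1:7890"
#   ollama serve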
2. Error: Post "http://127.0.0.1:11434/api/show": dial tcp 127.0.0.1:11434: connectex: No connection could be made because the target machine actively refused it.
This error usually indicates that the local Ollama service has not started properly, or that there is a problem with the network configuration.
Solution:
Check that the Ollama service has started: Make sure the Ollama service is running locally.
Windows operating system:
Open a command prompt and enter the following command to check whether port 11434 is in use:
netstat -an | findstr 11434
If the port is occupied, you will see an output similar to the following:
TCP 0.0.0.0:11434 0.0.0.0:0 LISTENING
This means that port 11434 is already in use, normally by the Ollama service itself. If there is no output, nothing is listening on the port, which usually means the Ollama service has not started.
macOS/Linux operating system:
Open the terminal.
Enter the following command to check port occupancy:
lsof -i :11434
If the port is in use, information about the owning process will be displayed. If there is no output, nothing is listening and the Ollama service needs to be started, as shown below.
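If neither check shows a listener on port 11434, start the service manually and leave it running:
ollama serve
# alternatively, on desktop installs, launch the Ollama application;
# then re-run the netstat/lsof check above to confirm the port shows LISTENING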
Check the firewall settings: This error may occur because the firewall is blocking the port used by the Ollama service. You can try temporarily turning off the firewall, or allow port 11434 through in the firewall settings.
Check for port conflicts: Make sure that no other program is occupying port 11434 locally. You can avoid conflicts by changing Ollama's port configuration, as described next.
Change Ollama's port configuration
If you find that port 11434 is already occupied, you can avoid the conflict by changing Ollama's port. Here is how to do it:
Windows and macOS/Linux operating systems:
Find Ollama's configuration file, if your installation provides one; the configuration directory is located at:
- Windows: C:\Users\<YourUsername>\.ollama\
- macOS/Linux: ~/.ollama/
Open the configuration file and find the port-related setting, i.e. a field containing 11434 or similar. Change 11434 to an unoccupied port number, such as 11435 or another higher port number.
Before:
{ "port": 11434 }
After:
{ "port": 11435 }
This can also be solved with an environment variable: on Windows, right-click This PC > Properties > Advanced system settings > Environment Variables, and create a new variable:
Variable name: OLLAMA_HOST
Value: 0.0.0.0:11435
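On macOS/Linux, the same change can be made by exporting the variable in the shell before starting the service; the port 11435 here simply mirrors the example above:
export OLLAMA_HOST=0.0.0.0:11435
ollama serve
# clients must now connect to the new port, e.g. http://127.0.0.1:11435/api/show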
3. Downloaded model cannot be found in ChatBox
If you cannot find the downloaded model when using ChatBox with Ollama, it may be caused by a model path configuration problem.
Solution:
Check that the model file is in the correct directory: By default, Ollama stores models in a specific local directory. If you cannot find the model in ChatBox, first check that directory to confirm the model files actually exist.
Make sure Ollama's model path is configured correctly: You can view and configure the model file path in Ollama's settings. Make sure the path points to the location where the model was actually downloaded.
If you changed Ollama's host or port in the previous section, you also need to update the API address configured in ChatBox accordingly (for example, http://127.0.0.1:11435).
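To confirm which models Ollama has actually registered, independent of what ChatBox displays, list them from the command line (the model name in the comments is illustrative):
ollama list
# NAME              ID      SIZE      MODIFIED
# deepseek-r1:7b    ...     4.7 GB    ...
# if a model is missing from this list, the download never completed; re-pull it:
ollama pull deepseek-r1:7b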
4. How to view the location of locally downloaded models?
To find where your locally downloaded models are stored, you can use either of the following methods:
Method 1:
In Ollama's settings, a "Model Storage Path" option is usually provided. In this option, you can see where all downloaded models are stored.
Method 2:
If you are not sure of Ollama's storage path, you can use a file search tool to search for the downloaded model's file name or file type; model file names generally contain the model's name or version number. The typical default locations are shown below.
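As a reference point, these are the typical default storage locations for Ollama models; they will differ if OLLAMA_MODELS has been changed or a nonstandard install was used:
# Windows:  C:\Users\<YourUsername>\.ollama\models
# macOS:    ~/.ollama/models
# Linux:    /usr/share/ollama/.ollama/models  (when installed as a system service)
ls ~/.ollama/models
# the directory contains blobs/ (the model data) and manifests/ (model names and tags)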
5. How to change the download location of the model?
If you want Ollama to store downloaded models in a specific directory, this can be done by changing Ollama's configuration.
Solution:
Modify the Ollama configuration file: Open Ollama's configuration file (if your installation provides one) and find the field related to the model path. In this field, you can specify the download directory for models.
Use command-line parameters: Some versions may support specifying the model download directory through a command-line parameter when starting Ollama. Check Ollama's documentation to confirm whether your version supports this feature.
For example, if supported, such a command might look like the following (the --model-dir flag shown here is illustrative and not available in all versions; the environment-variable method below is more widely supported):
ollama --model-dir "D:/my_models"
This would download models into the D:/my_models directory.
This can also be solved with an environment variable: on Windows, right-click This PC > Properties > Advanced system settings > Environment Variables, and create a new variable:
Variable name: OLLAMA_MODELS
Value: the full path of the folder where models should be stored (for example, D:\my_models)
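On macOS/Linux, the equivalent is to set OLLAMA_MODELS in the shell before starting the service; this is a minimal sketch, and the path is only an example:
export OLLAMA_MODELS=/data/ollama/models   # example path; replace with your own
ollama serve
# models pulled from now on will be stored under the new directory;
# models downloaded earlier can be moved into it manually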
Conclusion
With the workarounds above, you should be able to resolve the common problems encountered when deploying DeepSeek locally with Ollama.