DeepSeek-R1 is a powerful language model that you can run on your local machine using Ollama, which simplifies downloading, running, and interacting with LLMs. This guide will walk you through setting up DeepSeek-R1 and making API calls to use it in your applications.
Why Run DeepSeek-R1 Locally?
Running DeepSeek-R1 on your own system provides several benefits:
✅ Privacy & Security – Your data stays on your device.
✅ Faster Responses – No round-trip latency to a remote API.
✅ Offline Access – Work without an internet connection.
✅ No API Costs – Avoid paying for cloud-based LLM services.
✅ Customization – Fine-tune and modify model settings as needed.
Setting Up DeepSeek-R1 Locally With Ollama
Step 1: Install Ollama
Download and install Ollama from its official website (builds are available for macOS, Linux, and Windows). Once installed, Ollama handles downloading and running models for you.
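To confirm the installation worked, check the version from a terminal:
ollama --version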
Step 2: Download and Run DeepSeek-R1
Open a terminal and run the following command to download and start the model:
ollama run deepseek-r1
If your hardware cannot handle the full 671B-parameter model, you can run a smaller distilled variant by specifying its size (e.g., 7B, 14B, or 32B):
ollama run deepseek-r1:7b
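After the download completes, you can list the models stored locally at any time:
ollama list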
Step 3: Run DeepSeek-R1 as a Background Service
To keep the model available for API requests, start the Ollama server (if you installed the desktop app, it may already be running in the background):
ollama serve
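The server listens on http://localhost:11434 by default. You can confirm it is reachable with a quick request; a running instance answers with a short status message:
curl http://localhost:11434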
Calling DeepSeek-R1 via API
Once DeepSeek-R1 is running, you can interact with it via:
- Command Line Interface (CLI)
- HTTP API
- JavaScript Code
1. Using CLI
You can test DeepSeek-R1 directly in the terminal:
ollama run deepseek-r1
Type a question, and the model will generate a response.
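You can also pass the prompt as an argument for a one-off answer instead of an interactive session:
ollama run deepseek-r1 "What is the capital of France?"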
2. Using HTTP API
To send API requests, use cURL:
curl http://localhost:11434/api/chat -d '{
"model": "deepseek-r1",
"messages": [{ "role": "user", "content": "What is the capital of France?" }],
"stream": false
}'
This returns a JSON response; the model's reply is in the message.content field.
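The (abridged) response looks roughly like this; timing and token-count fields are omitted, and the values are illustrative:
{
  "model": "deepseek-r1",
  "created_at": "2025-01-30T12:34:56.789Z",
  "message": {
    "role": "assistant",
    "content": "The capital of France is Paris."
  },
  "done": true
}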
3. Using JavaScript (Node.js)
To call the DeepSeek-R1 API from a Node.js application, use the fetch API (built into Node.js 18+; on older versions, install the node-fetch package):
// fetch is global in Node.js 18+; on older versions, install node-fetch@2 and uncomment:
// const fetch = require("node-fetch");

async function chatWithDeepSeekR1(prompt) {
  // Send a non-streaming chat request to the local Ollama server
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1",
      messages: [{ role: "user", content: prompt }],
      stream: false, // return the full reply in a single JSON object
    }),
  });

  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }

  // The assistant's reply is in message.content
  const data = await response.json();
  console.log("DeepSeek-R1 Response:", data.message.content);
}
chatWithDeepSeekR1("Explain Newton's second law of motion");
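For longer answers you may prefer streaming, so text appears as it is generated. Here is a minimal sketch, assuming Node.js 18+ (where fetch and web streams are available globally), that sets stream: true and reads the newline-delimited JSON chunks Ollama returns:
async function streamFromDeepSeekR1(prompt) {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1",
      messages: [{ role: "user", content: prompt }],
      stream: true, // Ollama sends one JSON object per line as the reply is generated
    }),
  });

  const decoder = new TextDecoder();
  let buffered = "";

  // response.body is a web ReadableStream in Node.js 18+, which supports async iteration
  for await (const chunk of response.body) {
    buffered += decoder.decode(chunk, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop(); // keep any incomplete line for the next chunk

    for (const line of lines) {
      if (!line.trim()) continue;
      const part = JSON.parse(line);
      if (part.message?.content) process.stdout.write(part.message.content);
      if (part.done) process.stdout.write("\n");
    }
  }
}

streamFromDeepSeekR1("Explain Newton's second law of motion");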
Conclusion
Running DeepSeek-R1 locally with Ollama gives you full control, faster responses, and cost savings. With simple commands, you can set up the model, interact via CLI, make API requests, or integrate it into JavaScript applications.
Now you’re ready to build and experiment with DeepSeek-R1 on your own machine! 🚀