Ollama Use Cases
Updated on 05 Jun 2025

This guide walks you through running inference with the deepseek-r1:1.5b model using Ollama and OpenWebUI on a supported NVIDIA H100 GPU.

  1. Step 1: Create a GPU Container using the Ollama-OpenWebUI template.
  2. Step 2: Create a root user for OpenWebUI.
  3. Step 3: Pull a model. In this tutorial we use the deepseek-r1:1.5b model as an example; feel free to choose a different model that better suits your needs.
  4. Step 4: After the model is pulled successfully, send a prompt to test it (see the sketch after this list).
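
If you want to verify Steps 3 and 4 from a terminal rather than through the OpenWebUI interface, the following is a minimal sketch that pulls the model and sends a test prompt directly against Ollama's REST API. It assumes Ollama is reachable on its default port 11434 inside the container; the prompt text, timeouts, and the exact request field names (which can vary slightly between Ollama versions) are illustrative.

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint; adjust to your container's address
MODEL = "deepseek-r1:1.5b"

# Step 3 equivalent: pull the model. The pull endpoint streams progress as JSON lines.
with requests.post(f"{OLLAMA_URL}/api/pull",
                   json={"model": MODEL},
                   stream=True, timeout=600) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(json.loads(line).get("status", ""))

# Step 4 equivalent: send a test prompt. With "stream": False the API returns one JSON object.
resp = requests.post(f"{OLLAMA_URL}/api/generate",
                     json={"model": MODEL,
                           "prompt": "Explain what a GPU container is in one sentence.",
                           "stream": False},
                     timeout=300)
resp.raise_for_status()
print(resp.json()["response"])
```

In OpenWebUI itself the same two steps happen through the model selector (to pull) and the chat box (to prompt); the script above only checks that the Ollama backend is serving the model correctly.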