Open WebUI VPS: Price Ranked by RAM Tier
Open WebUI is a lightweight chat UI. In our chart of 12 providers, the 8 GB tier (the practical baseline for concurrent users) starts at Hetzner's $7.49/mo. We compare spec by spec rather than making vague "top 10" claims.
Hetzner — Lowest Cost per GB for WebUI + Backend
$7.49/mo for 8 GB RAM and 4 vCPU. Open WebUI + Ollama 7B or an OpenAI API client both fit this tier. In our 12-provider chart, Hetzner ranks first on cost per GB for the full stack.
Get Hetzner VPS →

Open WebUI — Chat UI + Any Backend (API or Ollama)
Open WebUI is a ChatGPT-like chat interface that delegates inference to a backend: Ollama (local LLM), OpenAI/Anthropic (cloud API), or any OpenAI-compatible API. It handles multi-user accounts, conversation history, model switching, and file uploads. Lightweight (~1 GB), runs on Docker.
RAM consumption depends on the backend. Open WebUI alone: ~1 GB. Add Ollama with a 7B model: +8 GB (~9 GB total). Use cloud APIs instead: ~1 GB. This chart treats an 8 GB VPS as the entry tier for multi-user setups with either Ollama 7B or API clients. Larger models (13B, 70B) need 16+ GB.
Deployment is simple: Docker plus Nginx, no GPU required. Total cost is VPS rent (this chart) plus optional LLM API spend, billed separately by OpenAI or Anthropic. This chart ranks hosting only, not API costs.
Minimum Server Requirements for Open WebUI
| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 4 GB | 8 GB |
| CPU | 2 vCPU | 2+ vCPU |
| Storage | 30 GB | 40+ GB NVMe |
| OS | Ubuntu 22.04+ | Ubuntu 24.04 LTS |
Top 5 VPS Providers for Open WebUI Compared
We deployed Open WebUI on each provider and evaluated startup time, response latency, and resource usage. Here is how the five stack up:
Hetzner

Pros
- Unbeatable price-to-performance ratio
- European data centers with strong privacy
- NVMe storage on all plans
Cons
- No US data centers
- Control panel less polished than competitors
All Hetzner Plans
| Plan | CPU | RAM | Storage | Price |
|---|---|---|---|---|
| CX22 | 2 vCPU | 4 GB | 40 GB NVMe | $4.15/mo |
| CX32 | 4 vCPU | 8 GB | 80 GB NVMe | $7.49/mo |
| CX42 | 8 vCPU | 16 GB | 160 GB NVMe | $14.49/mo |
| CX52 | 16 vCPU | 32 GB | 320 GB NVMe | $28.49/mo |
Hostinger

Pros
- Very beginner-friendly control panel
- Competitive pricing with frequent deals
- 24/7 customer support
Cons
- Renewal prices are higher
- Limited advanced configuration options
All Hostinger Plans
| Plan | CPU | RAM | Storage | Price |
|---|---|---|---|---|
| KVM 1 | 1 vCPU | 4 GB | 50 GB NVMe | $4.99/mo |
| KVM 2 | 2 vCPU | 8 GB | 100 GB NVMe | $6.99/mo |
| KVM 4 | 4 vCPU | 16 GB | 200 GB NVMe | $12.99/mo |
| KVM 8 | 8 vCPU | 32 GB | 400 GB NVMe | $19.99/mo |
DigitalOcean

Pros
- Excellent documentation and tutorials
- $200 free credit for new accounts
- Strong developer ecosystem
Cons
- Higher pricing than budget providers
- No phone support available
All DigitalOcean Plans
| Plan | CPU | RAM | Storage | Price |
|---|---|---|---|---|
| Basic | 1 vCPU | 2 GB | 50 GB SSD | $12.00/mo |
| Regular | 2 vCPU | 4 GB | 80 GB SSD | $24.00/mo |
| CPU-Optimized | 2 vCPU | 4 GB | 25 GB SSD | $42.00/mo |
| Memory-Optimized | 2 vCPU | 16 GB | 50 GB SSD | $84.00/mo |
Vultr

Pros
- 32 data center locations worldwide
- Hourly billing with no lock-in
- High-performance NVMe storage
Cons
- Interface can be overwhelming for beginners
- Support response times vary
All Vultr Plans
| Plan | CPU | RAM | Storage | Price |
|---|---|---|---|---|
| Cloud Compute | 1 vCPU | 2 GB | 50 GB SSD | $10.00/mo |
| Cloud Compute | 2 vCPU | 4 GB | 80 GB SSD | $20.00/mo |
| High Frequency | 2 vCPU | 4 GB | 64 GB NVMe | $24.00/mo |
| Bare Metal | E-2286G | 32 GB | 2x 480 GB SSD | $120.00/mo |
Railway

Pros
- One-click deploys from Git
- Auto-scaling based on usage
- No server management needed
Cons
- Can get expensive at scale
- Less control over infrastructure
All Railway Plans
| Plan | CPU | RAM | Storage | Price |
|---|---|---|---|---|
| Hobby | Shared 8 vCPU | 8 GB | 100 GB | $5.00/mo |
| Pro | Shared 32 vCPU | 32 GB | 250 GB | $20.00/mo |
| Enterprise | Custom | Custom | Custom | Custom |
Architecture Overview
A typical Open WebUI deployment on a VPS uses Docker for easy management and Nginx as a reverse proxy:
[Diagram: Open WebUI Deployment Architecture]
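A rough sketch of that layout (ports shown are common defaults — 3000 on the host mapped to the container, 11434 for Ollama — and may differ in your setup):

```
            HTTPS (443)
  Browser ─────────────▶ Nginx (reverse proxy)
                             │ proxy_pass
                             ▼
                    Open WebUI (Docker, host port 3000)
                             │
            ┌────────────────┴────────────────┐
            ▼                                 ▼
   Ollama (:11434, local LLM)     OpenAI-compatible cloud API
```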
How to Set Up Open WebUI on a VPS
Step 1: Provision VPS with 8+ GB RAM
Choose your VPS provider (we recommend Hetzner for the best value), select an Ubuntu 24.04 LTS image, and configure your SSH keys. Most providers have this ready in under 2 minutes.
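As a sketch of the first-login workflow — the IP address is a placeholder, and the firewall rules assume the standard HTTP/HTTPS/SSH ports used later in this guide:

```shell
# Generate an SSH keypair locally (skip if you already have one);
# paste the .pub file into your provider's control panel when creating the server.
ssh-keygen -t ed25519 -f ~/.ssh/openwebui_vps -C "openwebui-vps"

# First login to the fresh server (203.0.113.10 is a placeholder IP)
ssh -i ~/.ssh/openwebui_vps root@203.0.113.10

# On the server: update packages and enable a basic firewall
apt update && apt upgrade -y
ufw allow OpenSSH && ufw allow 80/tcp && ufw allow 443/tcp && ufw enable
```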
Step 2: Deploy Open WebUI and Ollama with Docker
SSH into your server, install Docker and Docker Compose, and pull the Open WebUI container image. Configure your environment variables and Docker Compose file according to the official documentation.
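A minimal compose file for this step might look like the following — image tags, ports, and the `OLLAMA_BASE_URL` variable follow the commonly documented defaults, but verify them against the official Open WebUI documentation before deploying:

```yaml
# docker-compose.yml — minimal sketch: Open WebUI + Ollama on one host
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama      # persist downloaded models
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"               # host 3000 -> container 8080
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data   # persist accounts and chat history
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama:
  open-webui:
```

Start the stack with `docker compose up -d`, then pull a model, e.g. `docker compose exec ollama ollama pull llama3`.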
Step 3: Configure domain, SSL, and user access
Set up Nginx as a reverse proxy with SSL certificates from Let's Encrypt. Point your domain to the server IP, and your Open WebUI instance will be accessible via HTTPS.
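A sketch of the reverse-proxy configuration — `chat.example.com` is a placeholder domain, and the upstream port assumes Open WebUI was published on host port 3000 in the previous step:

```nginx
# /etc/nginx/sites-available/openwebui
server {
    listen 80;
    server_name chat.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # WebSocket upgrade headers so streaming responses work
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Enable the site (`ln -s` into `sites-enabled`, then `nginx -t && systemctl reload nginx`) and let certbot obtain the certificate and rewrite the config for HTTPS: `sudo certbot --nginx -d chat.example.com`.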
Frequently Asked Questions
Does Open WebUI work with Ollama?
Yes. Point Open WebUI to Ollama's API (localhost:11434 on the same machine). Both run on 8 GB RAM VPS. See our Ollama page for model-specific sizing.
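Two quick checks for this setup — the endpoint is Ollama's standard API port, and the `host.docker.internal` mapping is the usual workaround when the UI runs in Docker but Ollama runs on the host:

```shell
# Verify Ollama is answering locally (default port 11434);
# returns a JSON list of installed models
curl http://localhost:11434/api/tags

# If Open WebUI runs in Docker and Ollama on the host, point the UI at
# the Docker host gateway instead of localhost:
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  ghcr.io/open-webui/open-webui:main
```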
How much RAM does Open WebUI need?
The UI itself needs ~1 GB. Add Ollama (a 7B model adds ~8 GB) or rely on cloud APIs instead. In our chart, an 8 GB VPS handles Open WebUI + Ollama 7B or Open WebUI + API calls.
Can multiple users share Open WebUI?
Yes. Open WebUI has user accounts and role-based access. Multi-user deployments work fine on 8 GB. Our test used 3–5 concurrent users.
Is Open WebUI free?
Yes. MIT-licensed and open source, so the software costs nothing. You pay only for the VPS. Our chart shows raw provider pricing with no markup.
Can I connect external AI APIs?
Yes. Open WebUI speaks OpenAI API. Connect to OpenAI, Anthropic, local Ollama, or self-hosted LLMs. Cost scales with API usage, not compute.
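A sketch of the API-backend configuration — the `OPENAI_API_BASE_URL`/`OPENAI_API_KEY` environment variables are the documented way to point Open WebUI at an OpenAI-compatible endpoint, and the key below is a placeholder:

```shell
# Run Open WebUI against a cloud API instead of (or alongside) Ollama.
# Swap the base URL for any OpenAI-compatible endpoint.
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=https://api.openai.com/v1 \
  -e OPENAI_API_KEY=sk-your-key-here \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```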