developer-tools

Is Open WebUI Down? Real-Time Status & Outage Checker

Open WebUI is an open-source, self-hosted web interface for interacting with large language models, with over 50,000 GitHub stars. Created by Timothy J. Baek and maintained by a large community, it provides a ChatGPT-like chat experience on top of locally running models via Ollama or any OpenAI-compatible API. Written in Python (FastAPI backend) and Svelte (frontend), it supports model management, document RAG (retrieval-augmented generation), image generation, tools and plugins, and full multi-user authentication. It is the go-to frontend for homelab AI enthusiasts, developers running private LLM stacks, and teams that need an internal ChatGPT without sending data to third-party services.

Because Open WebUI sits between your users and your LLM backend, a failure in any layer — the web server, the Ollama connection, the database, or the embedding service — results in complete loss of chat functionality for every user on the instance. Monitoring the full stack proactively is the only way to catch silent failures before users report them.

Quick Status Check

#!/bin/bash
# Open WebUI health check
# Usage: bash check-open-webui.sh [host] [port]

HOST="${1:-localhost}"
PORT="${2:-3000}"
OLLAMA_HOST="${OLLAMA_HOST:-localhost}"
OLLAMA_PORT="${OLLAMA_PORT:-11434}"
BASE_URL="http://${HOST}:${PORT}"
OLLAMA_URL="http://${OLLAMA_HOST}:${OLLAMA_PORT}"

echo "=== Open WebUI Health Check ==="
echo "Target: ${BASE_URL}"
echo "Ollama: ${OLLAMA_URL}"
echo ""

# 1. Check Open WebUI health endpoint
echo "[1/5] Checking Open WebUI health..."
HEALTH=$(curl -sf --max-time 5 "${BASE_URL}/health" 2>/dev/null)
if echo "${HEALTH}" | grep -q '"status":true'; then
  echo "  OK  /health returned status:true"
else
  echo "  FAIL  /health unreachable or returned unexpected response: ${HEALTH}"
fi

# 2. Check API version endpoint
echo "[2/5] Checking API version..."
VERSION=$(curl -sf --max-time 5 "${BASE_URL}/api/version" 2>/dev/null)
if [ -n "${VERSION}" ]; then
  echo "  OK  Version: $(echo "${VERSION}" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')"
else
  echo "  FAIL  /api/version unreachable"
fi

# 3. Check Ollama backend reachability
echo "[3/5] Checking Ollama backend..."
OLLAMA_RESP=$(curl -sf --max-time 5 "${OLLAMA_URL}/" 2>/dev/null)
if [ -n "${OLLAMA_RESP}" ]; then
  echo "  OK  Ollama is reachable at ${OLLAMA_URL}"
else
  echo "  FAIL  Ollama not reachable at ${OLLAMA_URL} — chat will return errors"
fi

# 4. Check Docker container or process
echo "[4/5] Checking running process/container..."
if docker ps --format '{{.Names}}' 2>/dev/null | grep -qi "open-webui\|openwebui"; then
  echo "  OK  Docker container is running"
elif pgrep -f "open_webui|openwebui" > /dev/null 2>&1; then
  echo "  OK  Process is running"
else
  echo "  WARN  No Open WebUI Docker container or process detected"
fi

# 5. Check port is listening
echo "[5/5] Checking port ${PORT}..."
if nc -z -w3 "${HOST}" "${PORT}" 2>/dev/null; then
  echo "  OK  Port ${PORT} is open"
else
  echo "  FAIL  Port ${PORT} not reachable"
fi

echo ""
echo "=== Check complete ==="

Python Health Check

#!/usr/bin/env python3
"""
Open WebUI health check
Verifies web server, Ollama backend, model availability, and database connectivity.
"""

import sys
import json
import time
import urllib.request
import urllib.error

BASE_URL = "http://localhost:3000"
OLLAMA_URL = "http://localhost:11434"
TIMEOUT = 8


def fetch(url, label):
    try:
        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req, timeout=TIMEOUT) as resp:
            body = resp.read().decode()
            return json.loads(body) if body.strip() else {}
    except urllib.error.HTTPError as e:
        body = e.read().decode() if e.fp else ""
        return {"_error": f"HTTP {e.code}", "_body": body}
    except Exception as e:
        return {"_error": str(e)}


def check(label, result, pass_fn, warn_msg=None):
    # Consult pass_fn first so a check can treat certain error responses
    # (e.g. an auth-required HTTP 401 on a protected endpoint) as a pass.
    if pass_fn(result):
        status = "OK"
        detail = warn_msg or ""
    elif "_error" in result:
        status = "FAIL"
        detail = result["_error"]
    else:
        status = "FAIL"
        detail = warn_msg or "unexpected response"
    symbol = "OK  " if status == "OK" else "FAIL"
    print(f"  [{symbol}] {label}" + (f": {detail}" if detail else ""))
    return status == "OK"


results = []
print("=== Open WebUI Health Check ===")
print(f"Target: {BASE_URL}\n")

# 1. Health endpoint
print("[1/6] Web server health...")
r = fetch(f"{BASE_URL}/health", "health")
results.append(check("Health endpoint", r,
    lambda d: d.get("status") is True,
    "status:true" if r.get("status") is True else "status not true"))

# 2. Version
print("[2/6] API version...")
r = fetch(f"{BASE_URL}/api/version", "version")
ver = r.get("version", "")
results.append(check("API version", r,
    lambda d: bool(d.get("version")),
    f"v{ver}" if ver else "version field missing"))

# 3. Config — check OLLAMA_BASE_URL is set
print("[3/6] Instance config...")
r = fetch(f"{BASE_URL}/api/config", "config")
ollama_configured = bool(r.get("OLLAMA_BASE_URL") or r.get("ollama_base_url"))
results.append(check("Config (OLLAMA_BASE_URL)", r,
    lambda d: not d.get("_error"),
    "OLLAMA_BASE_URL is set" if ollama_configured else "OLLAMA_BASE_URL not found in config"))

# 4. Models list — at least one model must be available
print("[4/6] Available models...")
r = fetch(f"{BASE_URL}/api/models", "models")
model_list = r.get("data", r) if isinstance(r, dict) else r
model_count = len(model_list) if isinstance(model_list, list) else 0
results.append(check("Model availability", r,
    lambda d: model_count > 0,
    f"{model_count} model(s) loaded" if model_count > 0 else "no models — Ollama backend may be disconnected"))

# 5. Ollama backend reachability
print("[5/6] Ollama backend...")
try:
    req = urllib.request.Request(OLLAMA_URL, headers={"Accept": "text/plain"})
    with urllib.request.urlopen(req, timeout=TIMEOUT) as resp:
        body = resp.read().decode().strip()
    ok = "Ollama is running" in body or len(body) > 0
    symbol = "OK  " if ok else "FAIL"
    print(f"  [{symbol}] Ollama reachable at {OLLAMA_URL}")
    results.append(ok)
except Exception as e:
    print(f"  [FAIL] Ollama unreachable: {e} — chat will return errors for all users")
    results.append(False)

# 6. Database connectivity via users count endpoint
print("[6/6] Database (users count endpoint)...")
r = fetch(f"{BASE_URL}/api/users/count", "users/count")
results.append(check("Database connectivity", r,
    lambda d: "_error" not in d or d.get("_error", "").startswith("HTTP 40"),
    "responded (auth required = DB is up)" if r.get("_error", "").startswith("HTTP 40") else None))

# Summary
passed = sum(results)
total = len(results)
print(f"\n=== Summary: {passed}/{total} checks passed ===")
if passed < total:
    print("Action required: review FAIL items above.")
    sys.exit(1)
else:
    print("Open WebUI stack appears healthy.")
    sys.exit(0)

Common Open WebUI Outage Causes

| Symptom | Likely Cause | Resolution |
| --- | --- | --- |
| Chat returns "No models available" or errors on every message | Ollama backend unreachable — network, port, or Ollama service stopped | Verify Ollama is running on port 11434; check the OLLAMA_BASE_URL env var; restart the Ollama service |
| All users logged out simultaneously; login fails with session error | WEBUI_SECRET_KEY changed or regenerated, invalidating all JWT sessions | Restore the original WEBUI_SECRET_KEY value from backup; never rotate this key without a migration plan |
| Document upload succeeds but Q&A returns no context or hallucinations | RAG embedding model not downloaded or embedding service misconfigured | Check the RAG_EMBEDDING_MODEL env var; ensure the model is pulled; verify the embedding API endpoint is reachable |
| Image generation returns errors or blank images | Image generation endpoint (AUTOMATIC1111 / ComfyUI) misconfigured or offline | Verify IMAGE_GENERATION_ENGINE and the endpoint URL in admin settings; check the image backend service |
| New users cannot register; admin panel shows database errors | SQLite database corruption or PostgreSQL connection lost after upgrade | Check database file integrity; run PRAGMA integrity_check on SQLite; verify PostgreSQL credentials |
| Authentication broken for all users after version upgrade | Database schema migration failed; multi-user auth tables out of sync | Review container logs for migration errors; restore from a pre-upgrade backup; re-run migrations manually |
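The two database rows above can be confirmed offline. A minimal sketch for the SQLite case; the database path here is an assumption (the official container typically keeps `webui.db` under its data volume), so point it at your actual file:

```python
#!/usr/bin/env python3
"""Offline integrity probe for an Open WebUI SQLite database file."""
import sqlite3
import sys

# Assumed location; adjust to wherever your deployment stores webui.db.
DB_PATH = sys.argv[1] if len(sys.argv) > 1 else "data/webui.db"


def check_integrity(db_path: str) -> bool:
    """Return True if SQLite's integrity_check reports the file as intact."""
    try:
        # Open read-only so the probe cannot make a corrupted file worse.
        conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
        result = conn.execute("PRAGMA integrity_check").fetchone()[0]
        conn.close()
        return result == "ok"
    except sqlite3.Error as exc:
        print(f"  cannot open {db_path}: {exc}")
        return False


if __name__ == "__main__":
    ok = check_integrity(DB_PATH)
    print(f"{DB_PATH}: {'integrity ok' if ok else 'integrity check FAILED; restore from backup'}")
```

Run it against a copy of the database file rather than the live one if your instance is under heavy write load, since even a read-only open can contend for locks.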

Architecture Overview

| Component | Function | Failure Impact |
| --- | --- | --- |
| FastAPI backend | REST API, authentication, RAG pipeline, model proxying | Complete loss of all functionality; UI becomes non-functional |
| Svelte frontend | Chat UI, model selector, settings, admin panel | Users cannot interact; served as static files from FastAPI |
| Ollama / OpenAI backend | Actual LLM inference; Open WebUI proxies requests to this service | All chat requests fail; model list shows empty |
| SQLite / PostgreSQL database | User accounts, chat history, settings, API keys, model configs | Login fails; history lost; admin panel inaccessible |
| RAG pipeline (embedding model) | Converts uploaded documents into vector embeddings for context retrieval | Document Q&A returns no relevant context; uploads may fail |
| Image generation endpoint | Connects to AUTOMATIC1111, ComfyUI, or DALL-E for image creation | Image generation silently fails or returns errors in chat |
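The proxy relationship in the third row can be spot-checked directly: what Open WebUI surfaces at `/api/models` should reflect what Ollama itself reports at its native `/api/tags` endpoint. A sketch assuming the same default ports as the scripts above (on instances with authentication enabled, `/api/models` may additionally require an API key header):

```python
#!/usr/bin/env python3
"""Compare Open WebUI's proxied model list against Ollama's native list."""
import json
import urllib.request

WEBUI_URL = "http://localhost:3000"    # same defaults as the checks above
OLLAMA_URL = "http://localhost:11434"
TIMEOUT = 8


def get_json(url: str):
    """Fetch and parse a JSON response; return None on any failure."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT) as resp:
            return json.load(resp)
    except Exception:
        return None


def model_names(webui_resp, ollama_resp):
    """Extract model-name sets from each response shape, tolerating None."""
    # Open WebUI returns an OpenAI-style {"data": [{"id": ...}]} list;
    # Ollama's /api/tags returns {"models": [{"name": ...}]}.
    webui = {m.get("id") or m.get("name", "") for m in (webui_resp or {}).get("data", [])}
    ollama = {m.get("name", "") for m in (ollama_resp or {}).get("models", [])}
    return webui, ollama


if __name__ == "__main__":
    w, o = model_names(get_json(f"{WEBUI_URL}/api/models"),
                       get_json(f"{OLLAMA_URL}/api/tags"))
    if not o:
        print("Ollama reports no models (or is unreachable); pull a model first")
    elif not w:
        print("Ollama has models but Open WebUI lists none: the proxy layer is broken")
    else:
        print(f"Open WebUI: {len(w)} models, Ollama: {len(o)} models")
        missing = o - w
        if missing:
            print(f"In Ollama but not surfaced by Open WebUI: {sorted(missing)}")
```

An empty Open WebUI list with a non-empty Ollama list points the finger squarely at the proxy configuration (OLLAMA_BASE_URL, Docker networking) rather than at Ollama itself.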

Uptime History

| Date | Incident Type | Duration | Impact |
| --- | --- | --- | --- |
| Feb 2026 | Ollama API breaking change after Ollama update | 2–4 hrs (self-resolution after patch) | Model list empty; all chat requests returned 500 errors until Open WebUI patch released |
| Nov 2025 | SQLite database lock under concurrent multi-user load | 30–90 min | Chat history writes failed; intermittent login errors for active users |
| Sep 2025 | RAG embedding model not auto-downloaded on container restart | 1–3 hrs | Document uploads succeeded but Q&A returned empty context; affected RAG-dependent workflows |
| Jul 2025 | WEBUI_SECRET_KEY reset on container recreation without persistent env | 15–60 min | All active sessions invalidated; all users force-logged out simultaneously |

Monitor Open WebUI Automatically

Self-hosted Open WebUI instances have no built-in alerting — if the container crashes at 2 AM or Ollama silently stops responding, your users will notice before you do. ezmon.com monitors your Open WebUI endpoints from multiple external probes and alerts your team via Slack, PagerDuty, or SMS the moment the health check stops returning status:true or your model list goes empty.
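As a stopgap before wiring up external monitoring, the status:true contract is easy to poll yourself. A minimal sketch; the webhook URL and payload shape are assumptions (Slack-style incoming webhooks accept a similar `{"text": ...}` body), and a poller running on the same machine as Open WebUI obviously cannot catch outages of that machine:

```python
#!/usr/bin/env python3
"""Minimal DIY poller: alert once when /health stops returning status:true."""
import json
import time
import urllib.request

HEALTH_URL = "http://localhost:3000/health"
WEBHOOK_URL = "https://example.invalid/webhook"  # assumption: your alerting webhook
INTERVAL = 60        # seconds between probes
FAIL_THRESHOLD = 3   # consecutive failures before the single alert fires


def probe(url: str) -> bool:
    """True only if the endpoint answers with {"status": true}."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp).get("status") is True
    except Exception:
        return False


def should_alert(failures: int, already_alerted: bool) -> bool:
    """Fire exactly once after FAIL_THRESHOLD consecutive failures."""
    return failures >= FAIL_THRESHOLD and not already_alerted


def send_alert(message: str) -> None:
    """POST a simple JSON payload to the webhook; delivery errors are swallowed."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except Exception:
        pass


def run() -> None:
    failures, alerted = 0, False
    while True:
        if probe(HEALTH_URL):
            failures, alerted = 0, False
        else:
            failures += 1
            if should_alert(failures, alerted):
                send_alert(f"Open WebUI down: {failures} failed probes of {HEALTH_URL}")
                alerted = True
        time.sleep(INTERVAL)


# run()  # uncomment to start the loop
```

The failure threshold avoids paging on a single transient timeout, which is exactly the kind of false positive that makes self-rolled monitors get ignored.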

Set up Open WebUI monitoring free at ezmon.com →

open-webui, llm, ollama, chatgpt-alternative, self-hosted, status-checker