ComfyUI Authentication: How to Secure Your Instance

ComfyUI has no built-in auth. Here is how to secure it with Nginx, API keys, JWT, rate limiting, and a hardening checklist — before the next botnet finds your port.

Published 2026-05-08 · Tags: comfyui authentication, comfyui security, comfyui nginx

1 day: how long an exposed ComfyUI port 8188 can go before being discovered and abused, based on the April 2026 botnet campaign that compromised open instances for crypto mining. (Source: ComfyUI community reports, April 2026)
ComfyUI auth methods - complexity vs protection
Method                    | Setup time | Protects UI         | Protects API     | Suitable for
--------------------------|------------|---------------------|------------------|-----------------------------
Firewall / security group | 5 min      | ✓                   | ✓ (IP allowlist) | Internal / team use
Nginx API key header      | 30 min     | ✓ (with basic auth) | ✓                | Production APIs
Nginx + rate limiting     | 45 min     | ✓                   | ✓                | Multi-client APIs
JWT middleware            | 2–4 hrs    | –                   | ✓                | Multi-tenant SaaS products
mTLS (mutual TLS)         | 4–8 hrs    | –                   | ✓                | Internal microservices only

Why ComfyUI Has No Built-In Auth

ComfyUI was built as a local tool. Its API was designed for a single developer running it on their own machine - not for internet-facing production deployments. There is no native concept of users, API keys, or access control in the ComfyUI codebase.

The April 2026 botnet incident demonstrated the consequences: attackers scanned for open port 8188, found thousands of unprotected ComfyUI instances, and used them to run crypto-mining workflows. Many victims did not realize for days that their GPUs had been hijacked. Port 8188 must never be exposed directly to the internet - authentication must be handled entirely at the infrastructure layer.

In March 2026, the Censys ARC research team discovered a botnet campaign - named GHOST - that had been actively targeting internet-exposed ComfyUI instances. Over 1,000 exposed instances were identified; 97 confirmed exploits in a single scan cycle. The attack vector was not a CVE in ComfyUI's core: it exploited unauthenticated access combined with custom nodes that execute arbitrary Python (FL_CodeNode, EvaluateMultiple, SrlEval). On instances without vulnerable nodes, the attack installed malicious packages via ComfyUI-Manager's pip endpoint. The payload mined Monero (CPU, XMRig) and Conflux (GPU, lolMiner), enrolled hosts in a Hysteria v2 proxy botnet, and persisted across restarts with seven distinct mechanisms - including a fake 'GPU Performance Monitor' custom node that re-downloaded the payload every 6 hours. Instances on RunPod, Vast.ai, and home labs were all affected.

ComfyUI ships with zero authentication. Every endpoint - /prompt, /history, /upload/image - is open to anyone who can reach port 8188. That was acceptable when ComfyUI ran only on your laptop; it is not acceptable when it runs on a server. The fix is straightforward, but it requires a few deliberate steps.

Layer 1: Never Expose Port 8188 Directly

Docker bypasses most host firewalls. When you publish -p 8188:8188, Docker inserts iptables rules that let traffic reach the container even if UFW or firewalld blocks the port. The only reliable way to prevent public access is to bind the published port to the loopback interface, so Docker never forwards external traffic at all.

In your Compose file, use the 127.0.0.1:8188:8188 form - not just 8188:8188:

$yaml
# docker-compose.yml
services:
  comfyui:
    image: comfyui:production
    ports:
      - "127.0.0.1:8188:8188"  # loopback only - Docker cannot bypass this

With this binding, the container is reachable only from the host machine itself. Your Nginx (or any other reverse proxy) running on the same host can reach it at 127.0.0.1:8188. Nothing outside the host can.

If you use a cloud GPU provider (RunPod, Vast.ai, Lambda Labs): their network tab shows open ports. Do not open port 8188 there. Use their built-in port-forwarding or an SSH tunnel to access ComfyUI for development.
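For development access over SSH, a local port forward is all you need. A sketch, assuming a placeholder user and host:

```bash
# Forward local port 8188 to the server's loopback-bound ComfyUI.
# "user@gpu-server" is a placeholder - substitute your own host.
ssh -N -L 8188:127.0.0.1:8188 user@gpu-server
```

While the tunnel is up, http://127.0.0.1:8188 in your local browser reaches ComfyUI through the encrypted SSH session; nothing is exposed on the server's public interface.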

Layer 2: Nginx Reverse Proxy with API Key

The cleanest production pattern: Nginx handles TLS termination and API key validation. ComfyUI never sees unauthenticated requests.

$nginx.conf
# /etc/nginx/sites-available/comfyui
server {
    listen 443 ssl http2;
    server_name api.yoursite.com;

    ssl_certificate     /etc/letsencrypt/live/api.yoursite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yoursite.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    # Reject any request missing the API key
    if ($http_x_api_key != "your-secret-key-here") {
        return 401;
    }

    # HTTP endpoints
    location / {
        proxy_pass         http://127.0.0.1:8188;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_read_timeout 120s;
    }

    # WebSocket endpoint - must use upgrade headers
    location /ws {
        proxy_pass              http://127.0.0.1:8188;
        proxy_http_version      1.1;
        proxy_set_header        Upgrade $http_upgrade;
        proxy_set_header        Connection "Upgrade";
        proxy_set_header        Host $host;
        proxy_read_timeout      3600s;
    }
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name api.yoursite.com;
    return 301 https://$host$request_uri;
}

Get a TLS certificate with Certbot: sudo certbot --nginx -d api.yoursite.com. Certbot auto-renews via a systemd timer; no manual renewal needed.

Clients send the key in every request:

$bash
curl -s https://api.yoursite.com/system_stats \
  -H "X-API-Key: your-secret-key-here"

Rotate the key by updating the Nginx config and reloading: sudo nginx -s reload. No downtime.

Layer 3: Rate Limiting Per Client

A single client should not be able to flood the queue. Add Nginx rate limiting at the /prompt endpoint:

$nginx-ratelimit.conf
# In nginx.conf http block (outside server blocks)
limit_req_zone $http_x_api_key zone=per_key:10m rate=10r/m;

# In the server block, restrict /prompt
location /prompt {
    limit_req zone=per_key burst=5 nodelay;
    proxy_pass http://127.0.0.1:8188;
    # ... other proxy settings
}

This allows 10 requests per minute per API key, with a burst of 5. A client that exceeds this gets a 429 Too Many Requests response. Adjust rate and burst to match your expected usage pattern.

JWT for Multi-Tenant APIs

If you serve multiple users and need per-user access control - not just a shared API key - use JWTs. Your backend issues signed tokens; a Lua script in Nginx validates them before forwarding to ComfyUI.

This requires lua-resty-jwt (available in OpenResty or via the libnginx-mod-http-lua package on Ubuntu):

$nginx-jwt.conf
# nginx.conf with JWT validation
location /prompt {
    access_by_lua_block {
        local jwt = require("resty.jwt")
        local auth_header = ngx.var.http_authorization
        if not auth_header then
            ngx.status = 401
            ngx.say("Missing Authorization header")
            ngx.exit(401)
        end
        local token = auth_header:match("Bearer%s+(.+)")
        if not token then
            ngx.status = 401
            ngx.say("Malformed Authorization header")
            ngx.exit(401)
        end
        local verified = jwt:verify("your-jwt-secret", token)
        if not verified["verified"] then
            ngx.status = 401
            ngx.say("Invalid token")
            ngx.exit(401)
        end
    }
    proxy_pass http://127.0.0.1:8188;
}

Your API server generates tokens with a library like jsonwebtoken (Node.js) or python-jose (Python). Include a sub (user ID) and exp (expiration) claim. The Lua script validates the signature and expiry before allowing the request through.

mTLS for Internal Services

If your worker processes call ComfyUI internally and you want to ensure no other process on the host can inject jobs, use mutual TLS. Both sides present certificates; connections without a valid client cert are rejected.

$bash
# Generate CA and certificates
openssl genrsa -out ca.key 4096
openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/CN=comfyui-internal-ca"

# Server cert (for Nginx)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=api.yoursite.com"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365

# Client cert (for your worker)
openssl genrsa -out worker.key 2048
openssl req -new -key worker.key -out worker.csr -subj "/CN=worker"
openssl x509 -req -in worker.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out worker.crt -days 365
$nginx-mtls.conf
# Add to nginx server block
ssl_client_certificate /etc/nginx/ca.crt;
ssl_verify_client on;  # reject any connection without a valid client cert

Workers send their certificate with every request. This is the strongest internal isolation option - no shared secret that can be leaked in environment variables.
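With verification enabled, a worker's request presents its certificate on every call. A sketch assuming the file names generated above, plus the API key check from Layer 2 if that server block still enforces it:

```bash
# Worker calling ComfyUI through Nginx, authenticated by client certificate.
curl https://api.yoursite.com/system_stats \
  --cacert ca.crt \
  --cert worker.crt --key worker.key \
  -H "X-API-Key: your-secret-key-here"
```

A request without --cert/--key is rejected during the TLS handshake, before Nginx evaluates any location block.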

Production Hardening Checklist

Before any ComfyUI instance touches the internet:

  • [ ] Port 8188 bound to 127.0.0.1 - not 0.0.0.0, not the public interface
  • [ ] Nginx running in front with TLS (valid cert, not self-signed for public endpoints)
  • [ ] API key or JWT validation on every endpoint
  • [ ] Rate limiting on /prompt - at minimum 10 req/min per key
  • [ ] /output directory not served publicly - download via /view with auth
  • [ ] ComfyUI version up to date (check github.com/comfyanonymous/ComfyUI releases)
  • [ ] Docker image rebuilt from official ComfyUI source - not an unofficial Docker Hub image
  • [ ] No workflow files containing credentials, API keys, or personal data stored on disk
  • [ ] Firewall rule: only port 443 (Nginx) open to the internet, port 22 (SSH) restricted to your IP
  • [ ] Automated cert renewal (Certbot timer or equivalent)

Run curl https://api.yoursite.com/system_stats without an API key. It should return 401, not a JSON response. If it returns JSON, your auth layer is misconfigured.
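That smoke test is easy to automate. A hypothetical sketch - the checker takes any function that performs one request with the given headers and returns the HTTP status code, so it can be wired to urllib or requests against your real endpoint, or to a stub:

```python
from typing import Callable

def auth_is_enforced(fetch_status: Callable[[dict], int]) -> bool:
    """True only if keyless requests get 401 AND keyed requests get 200."""
    without_key = fetch_status({})
    with_key = fetch_status({"X-API-Key": "your-secret-key-here"})  # placeholder key
    return without_key == 401 and with_key == 200

# Stub modeling the expected behavior of a correctly configured proxy:
def stub(headers: dict) -> int:
    return 200 if headers.get("X-API-Key") == "your-secret-key-here" else 401
```

Checking both directions matters: a proxy that returns 401 to everyone (wrong key in the config) fails the second condition, not just the first.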

What Authentication Cannot Fix

Auth protects the API surface. It does not protect against:

  • Malicious custom nodes - a node with arbitrary Python code runs with full container permissions
  • Model files with embedded payloads - some model formats can include executable code (inspect before loading from untrusted sources)
  • SSRF via workflow nodes - a node that fetches arbitrary URLs can reach internal network services from inside the container

For untrusted workflows or user-submitted nodes, run ComfyUI in an isolated container with no access to the host network and no mounted credentials.

Frequently Asked Questions

Does ComfyUI support API keys natively?

No. ComfyUI has no built-in authentication mechanism. All auth must be implemented at the network layer - Nginx, a reverse proxy, or a gateway in front of ComfyUI. Adding auth inside ComfyUI is not currently on the official roadmap.

What was the April 2026 botnet attack?

A botnet campaign scanned for ComfyUI instances listening on port 8188 without authentication. When found, attackers submitted workflows that ran shell commands via custom nodes, used the GPU for cryptomining, and exfiltrated workflow JSON files. Instances on cloud providers (RunPod, Vast.ai) and home labs were both affected. The attack was opportunistic - it targeted exposed ports, not specific users.

Is it safe to run ComfyUI on a server without Nginx?

Only if you bind ComfyUI exclusively to 127.0.0.1 (not 0.0.0.0) and access it via SSH tunnel or a VPN. If port 8188 is reachable from the internet - even briefly - the instance can be compromised. An SSH tunnel is the minimum viable isolation for development. For production, use Nginx with TLS and API key validation.

How do I revoke access for a specific API key?

Update the Nginx config to remove or change the key and reload: sudo nginx -s reload. The old key stops working immediately. For multi-tenant JWT setups, you can maintain a blocklist of revoked token IDs and check it in the Lua validation script, or use short token expiry (15–60 minutes) so revocation is automatic.
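The blocklist approach is a few lines at the application layer. A sketch assuming each token carries a unique ID in a jti claim:

```python
# Hypothetical revocation check layered on top of JWT verification.
revoked_jtis: set[str] = set()

def revoke(jti: str) -> None:
    # In production this set would live in Redis so all workers share it;
    # entries can be dropped once the token's exp has passed anyway.
    revoked_jtis.add(jti)

def is_active(claims: dict) -> bool:
    """Reject any token whose jti has been revoked. Tokens without a jti
    cannot be individually revoked - rely on short expiry for those."""
    return claims.get("jti") not in revoked_jtis
```

Combined with 15-60 minute expiry, the blocklist only needs to remember revocations for the lifetime of the shortest-lived token.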

Can I expose the ComfyUI web UI publicly with a password?

Yes - use Nginx HTTP Basic Auth in front of port 8188. This protects the UI but is not suitable for API usage (Basic Auth does not work cleanly with the WebSocket endpoint). For API production use, X-API-Key or JWT is the correct approach. For UI-only access by a small team, Basic Auth is acceptable.

How do I add API key authentication to ComfyUI without modifying its source code?

Run ComfyUI behind an Nginx reverse proxy that validates an X-API-Key header before forwarding requests. Nginx checks the key against a static value (or a Redis/database lookup for multi-client), returning 401 if it does not match. ComfyUI itself listens only on 127.0.0.1:8188 - it never sees unauthenticated requests. This pattern requires zero changes to ComfyUI and survives ComfyUI updates.

Should I use JWT or API keys for a multi-tenant ComfyUI API?

For most production ComfyUI APIs, API keys (random 256-bit tokens) are simpler and sufficient. JWTs add value when you need stateless auth with embedded claims (user ID, plan tier, rate limit) that the application layer reads without a database lookup. If your API has many clients with different permissions or you need short-lived tokens that expire automatically, JWTs are worth the added complexity. For a single-tenant or small-team API, an API key validated by Nginx is all you need.
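Generating such a key is one line with Python's secrets module - 32 random bytes is 256 bits of entropy, encoded as a 43-character URL-safe string:

```python
import secrets

def new_api_key() -> str:
    # token_urlsafe(32) draws 32 bytes (256 bits) from the OS CSPRNG
    # and base64url-encodes them - safe to paste into headers and configs.
    return secrets.token_urlsafe(32)

key = new_api_key()
```

Never derive keys from random.random() or timestamps; secrets is the stdlib's cryptographically secure source.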