
  ddev-litellm: Running a Local LLM Proxy in Your Drupal Dev Environment

    I’ve been working with Drupal’s AI ecosystem for a while now, and one of the recurring friction points is getting a consistent local setup for testing AI-powered features. Remote API keys, rate limits, costs per request, and internet dependency all add up when you’re just trying to iterate on a feature locally.

    LiteLLM solves a big part of this — it’s a proxy that gives you a unified OpenAI-compatible endpoint regardless of what model you’re actually talking to. Ollama on your host, a vLLM instance, a HuggingFace endpoint — LiteLLM sits in front and normalises it all. The Drupal ai_provider_litellm module then just needs to point at that URL.
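The "unified endpoint" idea is easy to see in code. A minimal sketch of the request body that any backend behind LiteLLM accepts, assuming the standard OpenAI /v1/chat/completions shape (the model name and prompt here are placeholders):

```python
import json

def chat_request(model: str, prompt: str) -> dict:
    # The same body works whether "model" maps to Ollama, vLLM, or a
    # hosted endpoint -- LiteLLM translates behind the proxy, so client
    # code never changes when you swap backends.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

print(json.dumps(chat_request("llama3.2", "Hello")))
```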

    The missing piece was making this painless inside DDEV. So I built an addon.

    What it does

    ddev-litellm adds two services to your DDEV project:

    • LiteLLM proxy — accessible internally by Drupal at http://ddev-<project>-litellm:4000, and externally in your browser at https://<project>.ddev.site:4001
    • PostgreSQL — LiteLLM needs this for its Prisma ORM and virtual key management
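To make the <project> substitution concrete, a tiny helper that mirrors the two URL patterns above (a sketch for illustration, not part of the addon):

```python
def litellm_urls(project: str) -> dict:
    # Internal: what Drupal uses from inside the Docker network.
    # External: what you open in a browser on the host.
    return {
        "internal": f"http://ddev-{project}-litellm:4000",
        "external": f"https://{project}.ddev.site:4001",
    }

print(litellm_urls("mysite"))
```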

    On startup, the addon creates a virtual key (sk-drupal-dev-key) automatically so Drupal can authenticate against the proxy without you having to do it manually each time.

    Prerequisites

    You’ll need:

    • DDEV 1.24.10 or higher
    • Ollama installed on your host machine
    • At least one model pulled locally

    # pull a model if you haven't already
    ollama pull llama3.2

    If you don’t have Ollama set up yet, do that first. The addon will wait for it on startup, but there’s nothing useful to proxy if no models are available.

    Installation

    Inside your existing DDEV project:

    ddev add-on get credevator/ddev-litellm
    ddev restart

    That’s it for the install step. The restart will pull the LiteLLM image — it’s around 2GB so the first restart takes a few minutes. Subsequent starts are fast.

    Configuring your models

    After install, you’ll have a new file at .ddev/litellm_config.yaml. Open it — this is where you tell LiteLLM what backends to use.

    The default ships with Ollama pre-configured. The important thing to get right is the host address. On Mac and Windows, host.docker.internal resolves to your host machine from inside Docker. Linux is different — see the troubleshooting section below.

    # .ddev/litellm_config.yaml
    model_list:
      - model_name: llama3.2
        litellm_params:
          model: ollama/llama3.2
          api_base: http://host.docker.internal:11434

    Change llama3.2 to whatever model you actually have pulled. Add as many entries as you want — one per model, or one entry that covers multiple via LiteLLM’s routing. After editing:

    ddev restart

    The config is read at startup, not hot-reloaded.
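If you run more than one model, the config just grows by one entry each (the second model name below is a placeholder — use whatever you’ve actually pulled with ollama pull):

```yaml
# .ddev/litellm_config.yaml
model_list:
  - model_name: llama3.2
    litellm_params:
      model: ollama/llama3.2
      api_base: http://host.docker.internal:11434
  - model_name: mistral            # placeholder: any second pulled model
    litellm_params:
      model: ollama/mistral
      api_base: http://host.docker.internal:11434
```

Remember that each edit needs a ddev restart to take effect.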

    Checking it’s running

    Three commands worth knowing:

    ddev litellm           # service status + the URLs you care about
    ddev litellm-models    # list models the proxy can see
    ddev litellm logs      # tail the proxy logs — useful when things aren't working

    $ ddev litellm-models
    Available models:
      - llama3.2

    If the model list comes back empty, Ollama isn’t reachable. Check ddev litellm logs for the connection error.
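If you’d rather script the check, the proxy also exposes the OpenAI-compatible /v1/models endpoint, which returns JSON along the lines of the sample below (the response body here is illustrative — the real list depends on your litellm_config.yaml):

```python
import json

# Illustrative body from GET /v1/models on the proxy, authenticated
# with the dev key; trimmed to the fields we care about.
sample = '{"data": [{"id": "llama3.2", "object": "model"}]}'

def model_ids(body: str) -> list:
    # Pull just the model names out of the OpenAI-style list response.
    return [m["id"] for m in json.loads(body)["data"]]

print(model_ids(sample))  # ['llama3.2']
```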

    You can also open the LiteLLM UI in a browser at https://<project>.ddev.site:4001 — it has a simple interface for exploring models and sending test requests.

    Connecting Drupal

    This is where the actual value lands. You need two Drupal modules:

    • Key — for storing the API key securely
    • ai_provider_litellm — the provider plugin that points Drupal’s AI layer at the proxy

    Step 1 — add the key

    In Drupal, go to /admin/config/system/keys/add. Create a new key with the value:

    sk-drupal-dev-key

    That’s the virtual key the addon creates on startup. Name it something obvious like “LiteLLM dev key”.

    [screenshot] The Key module add form at /admin/config/system/keys/add with the key value filled in. Key type set to “Authentication”.

    Step 2 — configure the provider

    Go to /admin/config/ai/providers/litellm. Set:

    • API URL: http://ddev-<project>-litellm:4000 — replace <project> with your actual DDEV project name
    • API key: select the key you just created

    [screenshot] The ai_provider_litellm settings page with the internal container URL set and the key dropdown showing the key you just created.

    Save, then go to any AI-enabled feature in your Drupal setup and it should route through LiteLLM to Ollama.

    Linux: the one gotcha

    On Mac and Windows, host.docker.internal is set up automatically. On Linux it isn’t — Docker doesn’t add it by default.

    The addon handles this in docker-compose.litellm.yaml with an extra_hosts entry, but you may need to verify your Docker setup actually exposes the gateway. Run:

    docker network inspect ddev_default | grep Gateway

    Take that IP and use it directly in litellm_config.yaml instead of host.docker.internal:

    api_base: http://172.17.0.1:11434  # your gateway IP

    Not ideal, but it works until host.docker.internal resolution behaves consistently across Linux Docker setups.
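If you want to script the gateway lookup instead of eyeballing the inspect output, docker network inspect emits JSON you can parse. A sketch — the sample string is trimmed to the relevant fields; in practice you’d feed the real command’s stdout to gateway_ip():

```python
import json

# Trimmed sample of `docker network inspect ddev_default` output.
sample = (
    '[{"IPAM": {"Config": '
    '[{"Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1"}]}}]'
)

def gateway_ip(inspect_json: str) -> str:
    # inspect returns a list of networks; take the first network's
    # first IPAM config block and read its Gateway field.
    return json.loads(inspect_json)[0]["IPAM"]["Config"][0]["Gateway"]

print(gateway_ip(sample))  # 172.17.0.1
```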

    Uninstalling

    ddev add-on remove ddev-litellm

    This removes the auto-generated DDEV files but preserves litellm_config.yaml — in case you’ve spent time tuning it. Delete that manually if you want a clean slate. The Postgres volume also sticks around; remove it with docker volume ls and docker volume rm if you want to free the space.


    The addon is on GitHub under Apache 2.0. If you run into issues or want to add support for a different model backend, open an issue or PR — the codebase is pretty small.