Istio

This guide shows how to deploy llm-d with Istio as your inference gateway. By the end, inference requests will flow from an Istio-managed Gateway to your model servers via the llm-d endpoint picker (EPP).

note

This guide assumes familiarity with Gateway API and llm-d.

Prerequisites

  1. The environment variables ${GUIDE_NAME}, ${MODEL_NAME}, and ${NAMESPACE} should be set as part of deploying one of the well-lit path guides (see the example after this list).
  2. A Kubernetes cluster running one of the three most recent Kubernetes releases
  3. Helm
  4. jq
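
For example, after deploying the inference-scheduling guide the variables might look like this (illustrative values only; use whatever your guide deployment actually set):

export GUIDE_NAME=inference-scheduling
export MODEL_NAME=Qwen/Qwen3-0.6B
export NAMESPACE=llm-d-inference-scheduling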

Step 1: Install Gateway API and Gateway API Inference Extension CRDs

Install the required Gateway API and Gateway API Inference Extension CRDs:

GATEWAY_API_VERSION=v1.5.1
GAIE_VERSION=v1.5.0

kubectl apply -k "https://github.com/kubernetes-sigs/gateway-api/config/crd?ref=${GATEWAY_API_VERSION}"
kubectl apply -k "https://github.com/kubernetes-sigs/gateway-api-inference-extension/config/crd?ref=${GAIE_VERSION}"

Verify the APIs are available:

kubectl api-resources --api-group=gateway.networking.k8s.io
kubectl api-resources --api-group=inference.networking.k8s.io
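
You can also block until the CRDs are fully established before moving on (a convenience check; kubectl wait accepts any status condition):

kubectl wait --for condition=Established \
  crd/gateways.gateway.networking.k8s.io \
  crd/httproutes.gateway.networking.k8s.io \
  crd/inferencepools.inference.networking.k8s.io \
  --timeout=60s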

Step 2: Install Istio

Install Istio with inference extension support enabled:

ISTIO_VERSION=1.29.2
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
export PATH="$PWD/istio-${ISTIO_VERSION}/bin:$PATH"
istioctl install -y \
  --set values.pilot.env.ENABLE_GATEWAY_API_INFERENCE_EXTENSION=true

Verify the installation:

kubectl get pods -n istio-system

Expected output:

NAME                      READY   STATUS    RESTARTS   AGE
istiod-xxxxxxxxxx-xxxxx   1/1     Running   0          30s

Step 3: Deploy the Gateway

Create a Gateway resource. Istio watches this resource and creates an Envoy-based proxy that accepts incoming traffic.

kubectl apply -k ./guides/recipes/gateway/istio -n ${NAMESPACE}
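
For orientation, the kustomization applies a Gateway roughly equivalent to the sketch below; the recipe in the repository is authoritative, and the listener details may differ:

# Sketch only -- functionally what the kustomize step above applies.
kubectl apply -n ${NAMESPACE} -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: llm-d-inference-gateway
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
EOF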

Verify the Gateway is programmed:

kubectl get gateway llm-d-inference-gateway -n ${NAMESPACE}

Expected output:

NAME                      CLASS   ADDRESS       PROGRAMMED   AGE
llm-d-inference-gateway   istio   10.xx.xx.xx   True         30s

Wait until PROGRAMMED shows True before proceeding.
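
Rather than polling by hand, kubectl wait can block on the condition directly:

kubectl wait gateway/llm-d-inference-gateway -n ${NAMESPACE} \
  --for condition=Programmed --timeout=180s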

Step 4: Send a Request

important

Before sending requests, you must deploy a well-lit path guide. This sets up a model server deployment, an InferencePool, and an HTTPRoute to connect the Gateway to the pool.

Get the Gateway external address:

export IP=$(kubectl get gateway llm-d-inference-gateway -n ${NAMESPACE} -o jsonpath='{.status.addresses[0].value}')
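
If the Gateway has not been assigned an address yet, the jsonpath expression expands to an empty string; a quick guard catches that before you curl:

if [ -z "${IP}" ]; then
  echo "Gateway address not assigned yet; re-check the Gateway status" >&2
else
  echo "Gateway address: ${IP}"
fi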

Send an inference request via the managed Gateway:

curl -X POST http://${IP}/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "'"${MODEL_NAME}"'",
    "prompt": "How are you today?"
  }' | jq
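
If the completion request fails, a lighter-weight smoke test is the OpenAI-compatible model listing endpoint, assuming your guide's HTTPRoute matches that path:

curl -s http://${IP}/v1/models | jq '.data[].id'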

Cleanup

kubectl delete gateway llm-d-inference-gateway -n ${NAMESPACE}
istioctl uninstall --purge -y
kubectl delete namespace istio-system
kubectl delete gatewayclass istio istio-remote
kubectl delete -k "https://github.com/kubernetes-sigs/gateway-api/config/crd?ref=${GATEWAY_API_VERSION}"
kubectl delete -k "https://github.com/kubernetes-sigs/gateway-api-inference-extension/config/crd?ref=${GAIE_VERSION}"

Troubleshooting

Gateway not showing PROGRAMMED=True

kubectl describe gateway llm-d-inference-gateway -n ${NAMESPACE}
kubectl get pods -n istio-system
kubectl logs -n istio-system deployment/istiod --tail=20

Verify Istio was installed with the inference extension flag enabled.
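
One way to check is to read the istiod deployment spec and look for the flag (assuming the default istiod deployment name):

kubectl get deployment istiod -n istio-system -o yaml \
  | grep -A1 ENABLE_GATEWAY_API_INFERENCE_EXTENSION

The value should be "true"; if the variable is absent, re-run the istioctl install command from Step 2.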

HTTPRoute not accepted

kubectl describe httproute ${GUIDE_NAME} -n ${NAMESPACE}

Verify that parentRefs matches the Gateway name and backendRefs matches the InferencePool name.
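
A jsonpath query makes the comparison direct (one convenient option; kubectl describe shows the same data):

kubectl get httproute ${GUIDE_NAME} -n ${NAMESPACE} \
  -o jsonpath='{.spec.parentRefs[*].name}{"\n"}{.spec.rules[*].backendRefs[*].name}{"\n"}'

The first line printed should be llm-d-inference-gateway and the second should be your InferencePool name.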

No response from Gateway IP

kubectl get gateway llm-d-inference-gateway -o jsonpath='{.status.addresses[0].value}' -n ${NAMESPACE}

If the address is empty, your Gateway may still be waiting for a LoadBalancer service. Check that your cluster supports external load balancers.
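
On clusters without external load balancer support (for example kind, or minikube without a tunnel), a port-forward works as a stopgap. Istio's auto-deployed gateway Service is usually named after the Gateway with an -istio suffix, but verify the name first:

kubectl get svc -n ${NAMESPACE}
kubectl port-forward -n ${NAMESPACE} svc/llm-d-inference-gateway-istio 8080:80

Then point the curl command from Step 4 at http://localhost:8080 instead of http://${IP}.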