# Triton Inference Server
LiteLLM supports embedding models hosted on Triton Inference Servers.
## Usage
### SDK
Use the `triton/` prefix to route requests to your Triton server:
```python
import litellm

# call this from within an async function / running event loop
response = await litellm.aembedding(
    model="triton/<your-triton-model>",
    api_base="https://your-triton-api-base/triton/embeddings",  # /embeddings endpoint you want litellm to call on your server
    input=["good morning from litellm"],
)
```
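If you're calling from synchronous code, `litellm.embedding` accepts the same parameters. A minimal sketch, with the model name and `api_base` as placeholders for your own deployment:

```python
import litellm

# Synchronous variant of the async call above; model and api_base are placeholders.
response = litellm.embedding(
    model="triton/<your-triton-model>",
    api_base="https://your-triton-api-base/triton/embeddings",
    input=["good morning from litellm"],
)
print(response)
```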
### Proxy

1. Add models to your config.yaml
```yaml
model_list:
  - model_name: my-triton-model
    litellm_params:
      model: triton/<your-triton-model>
      api_base: https://your-triton-api-base/triton/embeddings
```
2. Start the proxy

```shell
$ litellm --config /path/to/config.yaml --detailed_debug
```
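(Optional) To confirm the proxy picked up `my-triton-model` from your config, you can list the models it exposes through its OpenAI-compatible API. A minimal sketch, assuming the proxy is running locally on port 4000 and accepts `sk-1234` as a key:

```python
from openai import OpenAI

# Point the client at the local LiteLLM proxy; use whatever key your proxy accepts.
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# "my-triton-model" from config.yaml should appear in this list.
for model in client.models.list():
    print(model.id)
```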
3. Send a request to the LiteLLM Proxy Server
**OpenAI Python v1.0.0+**
```python
from openai import OpenAI

# set base_url to your proxy server
# set api_key to send to proxy server
client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")

response = client.embeddings.create(
    input=["hello from litellm"],
    model="my-triton-model",
)
print(response)
```

**curl**

The `Authorization` header is optional; it's only required if you're using the LiteLLM proxy with Virtual Keys.

```shell
curl --location 'http://0.0.0.0:4000/embeddings' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer sk-1234' \
--data '{
    "model": "my-triton-model",
    "input": ["write a litellm poem"]
}'
```
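Whichever client you use, the proxy returns an OpenAI-compatible embedding response, so the vectors live in `data[*].embedding`. A small, self-contained sketch using the OpenAI Python SDK against the proxy above:

```python
from openai import OpenAI

client = OpenAI(api_key="<proxy-api-key>", base_url="http://0.0.0.0:4000")

response = client.embeddings.create(
    input=["hello from litellm"],
    model="my-triton-model",
)

# Each item in response.data corresponds to one input string;
# item.embedding is the vector produced by the Triton model.
for item in response.data:
    print(item.index, len(item.embedding))
```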