W&B Inference

W&B Inference provides access to leading open-source foundation models via W&B Weave and an OpenAI-compliant API. With W&B Inference, you can:

  • Develop AI applications and agents without signing up for a hosting provider or self-hosting a model.
  • Try the supported models in the W&B Weave Playground.
important

W&B Inference credits are included with Free, Pro, and Academic plans for a limited time. Availability may vary for Enterprise. Once credits are consumed:

  • Free accounts must upgrade to a Pro plan to continue using Inference.
  • Pro plan users will be billed for Inference overages on a monthly basis, based on the model-specific pricing.

To learn more, see the pricing page and W&B Inference model costs.

Using Weave, you can trace, evaluate, monitor, and iterate on your W&B Inference-powered applications.

| Model | Model ID (for API usage) | Type(s) | Context Window | Parameters | Description |
|-------|--------------------------|---------|----------------|------------|-------------|
| DeepSeek R1-0528 | deepseek-ai/DeepSeek-R1-0528 | Text | 161K | 37B - 680B (Active - Total) | Optimized for precise reasoning tasks including complex coding, math, and structured document analysis. |
| DeepSeek V3-0324 | deepseek-ai/DeepSeek-V3-0324 | Text | 161K | 37B - 680B (Active - Total) | Robust Mixture-of-Experts model tailored for high-complexity language processing and comprehensive document analysis. |
| Llama 3.1 8B | meta-llama/Llama-3.1-8B-Instruct | Text | 128K | 8B (Total) | Efficient conversational model optimized for responsive multilingual chatbot interactions. |
| Llama 3.3 70B | meta-llama/Llama-3.3-70B-Instruct | Text | 128K | 70B (Total) | Multilingual model excelling in conversational tasks, detailed instruction-following, and coding. |
| Llama 4 Scout | meta-llama/Llama-4-Scout-17B-16E-Instruct | Text, Vision | 64K | 17B - 109B (Active - Total) | Multimodal model integrating text and image understanding, ideal for visual tasks and combined analysis. |
| Phi 4 Mini | microsoft/Phi-4-mini-instruct | Text | 128K | 3.8B (Active - Total) | Compact, efficient model ideal for fast responses in resource-constrained environments. |

This guide covers the prerequisites, the API specification, usage examples, the Weave UI, and usage information and limits for the Inference service.

Prerequisites​

The following prerequisites are required to access the W&B Inference service via the API or the W&B Weave UI.

  1. A W&B account. Sign up here.
  2. A W&B API key. Get your API key at https://wandb.ai/authorize.
  3. A W&B project.
  4. If you are using the Inference service via Python, see Additional prerequisites for using the API via Python.

Additional prerequisites for using the API via Python​

To use the Inference API via Python, first complete the general prerequisites. Then, install the openai and weave libraries in your local environment:

pip install openai weave
note

The weave library is only required if you'll be using Weave to trace your LLM applications. For information on getting started with Weave, see the Weave Quickstart.

For usage examples demonstrating how to use the W&B Inference service with Weave, see the API usage examples.

API specification​

The following section provides API specification information and API usage examples.

Endpoint​

The Inference service can be accessed via the following endpoint:

https://api.inference.wandb.ai/v1
important

To access this endpoint, you must have a W&B account with Inference service credits allocated, a valid W&B API key, and a W&B entity (also referred to as "team") and project. In the code samples in this guide, entity (team) and project are referred to as <your-team>/<your-project>.

Available methods​

The Inference service supports the following API methods:

Chat completions​

The primary API method available is /chat/completions, which supports OpenAI-compatible request formats for sending messages to a supported model and receiving a completion. For usage examples demonstrating how to use the W&B Inference service with Weave, see the API usage examples.

To create a chat completion, you will need:

  • The Inference service base URL https://api.inference.wandb.ai/v1
  • Your W&B API key <your-api-key>
  • Your W&B entity and project names <your-team>/<your-project>
  • The ID for the model you want to use, one of:
    • meta-llama/Llama-3.1-8B-Instruct
    • deepseek-ai/DeepSeek-V3-0324
    • meta-llama/Llama-3.3-70B-Instruct
    • deepseek-ai/DeepSeek-R1-0528
    • meta-llama/Llama-4-Scout-17B-16E-Instruct
    • microsoft/Phi-4-mini-instruct
curl https://api.inference.wandb.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "OpenAI-Project: <your-team>/<your-project>" \
  -d '{
    "model": "<model-id>",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Tell me a joke." }
    ]
  }'
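Because the endpoint accepts OpenAI-compatible requests, you can also stream completions token by token from Python. This is a minimal sketch: streaming support (`stream=True`) is assumed here from the OpenAI-compatible request format rather than stated elsewhere in this guide, and the `client` is an `openai.OpenAI` instance configured as shown in the examples below.

```python
def stream_chat(client, model: str, prompt: str) -> str:
    """Stream a chat completion, printing tokens as they arrive.

    `client` is an OpenAI-compatible client (e.g. openai.OpenAI configured
    with base_url="https://api.inference.wandb.ai/v1", your API key, and
    your <your-team>/<your-project> as the project).
    """
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # assumed supported via OpenAI compatibility
    )
    parts = []
    for chunk in stream:
        # Each streamed chunk carries an incremental delta; it may be empty.
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)

# Example usage (requires a configured client and valid credentials):
# stream_chat(client, "meta-llama/Llama-3.1-8B-Instruct", "Tell me a joke.")
```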

List supported models​

Use the API to query all currently available models and their IDs. This is useful for selecting models dynamically or inspecting what's available in your environment.

curl https://api.inference.wandb.ai/v1/models \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "OpenAI-Project: <your-team>/<your-project>"
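The same query can be made from Python. This sketch assumes an `openai.OpenAI` client configured as in the examples below, and iterates `client.models.list()`, which is the standard OpenAI client method for this endpoint.

```python
def list_model_ids(client) -> list[str]:
    """Return the IDs of all models currently available via the Inference API.

    `client` is an OpenAI-compatible client (e.g. openai.OpenAI configured
    with base_url="https://api.inference.wandb.ai/v1").
    """
    return [model.id for model in client.models.list()]

# Example usage (requires a configured client and valid credentials):
# for model_id in list_model_ids(client):
#     print(model_id)
```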

Usage examples​

This section provides several examples demonstrating how to use W&B Inference with Weave:

Basic example: Trace Llama 3.1 8B with Weave​

The following Python code sample shows how to send a prompt to the Llama 3.1 8B model using the W&B Inference API and trace the call in Weave. Tracing lets you capture the full input/output of the LLM call, monitor performance, and analyze results in the Weave UI.

tip

Learn more about tracing in Weave.

In this example:

  • You define a @weave.op()-decorated function, run_chat, which makes a chat completion request using the OpenAI-compatible client.
  • Your traces are recorded and associated with your W&B entity and project (project="<your-team>/<your-project>").
  • The function is automatically traced by Weave, so its inputs, outputs, latency, and metadata (like model ID) are logged.
  • The result is printed in the terminal, and the trace appears in your Traces tab at https://wandb.ai under the specified project.

To use this example, you must complete the general prerequisites and Additional prerequisites for using the API via Python.

import weave
import openai

# Set the Weave team and project for tracing
weave.init("<your-team>/<your-project>")

client = openai.OpenAI(
    base_url="https://api.inference.wandb.ai/v1",
    # Get your API key from https://wandb.ai/authorize
    api_key="<your-api-key>",
    # Required for W&B Inference usage tracking
    project="<your-team>/<your-project>",
)

# Trace the model call in Weave
@weave.op()
def run_chat():
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    )
    return response.choices[0].message.content

# Run and log the traced call
output = run_chat()
print(output)

Once you run the code sample, you can view the trace in Weave by clicking the link printed in the terminal (e.g. https://wandb.ai/<your-team>/<your-project>/r/call/01977f8f-839d-7dda-b0c2-27292ef0e04g), or:

  1. Navigate to https://wandb.ai.
  2. Select the Traces tab to view your Weave traces.

Next, try the advanced example.

Traces display

Advanced example: Use Weave Evaluations and Leaderboards with the inference service​

In addition to using Weave with the Inference service to trace model calls, you can also evaluate performance and publish a leaderboard. The following Python code sample compares two models on a simple question–answer dataset.

To use this example, you must complete the general prerequisites and Additional prerequisites for using the API via Python.

import asyncio
import openai
import weave
from weave.flow import leaderboard
from weave.trace.ref_util import get_ref

# Set the Weave team and project for tracing
weave.init("<your-team>/<your-project>")

dataset = [
    {"input": "What is 2 + 2?", "target": "4"},
    {"input": "Name a primary color.", "target": "red"},
]

@weave.op
def exact_match(target: str, output: str) -> float:
    return float(target.strip().lower() == output.strip().lower())

class WBInferenceModel(weave.Model):
    model: str

    @weave.op
    def predict(self, prompt: str) -> str:
        client = openai.OpenAI(
            base_url="https://api.inference.wandb.ai/v1",
            # Get your API key from https://wandb.ai/authorize
            api_key="<your-api-key>",
            # Required for W&B Inference usage tracking
            project="<your-team>/<your-project>",
        )
        resp = client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

llama = WBInferenceModel(model="meta-llama/Llama-3.1-8B-Instruct")
deepseek = WBInferenceModel(model="deepseek-ai/DeepSeek-V3-0324")

def preprocess_model_input(example):
    return {"prompt": example["input"]}

evaluation = weave.Evaluation(
    name="QA",
    dataset=dataset,
    scorers=[exact_match],
    preprocess_model_input=preprocess_model_input,
)

async def run_eval():
    await evaluation.evaluate(llama)
    await evaluation.evaluate(deepseek)

asyncio.run(run_eval())

spec = leaderboard.Leaderboard(
    name="Inference Leaderboard",
    description="Compare models on a QA dataset",
    columns=[
        leaderboard.LeaderboardColumn(
            evaluation_object_ref=get_ref(evaluation).uri(),
            scorer_name="exact_match",
            summary_metric_path="mean",
        )
    ],
)

weave.publish(spec)

After you run the preceding code sample, navigate to your W&B account at https://wandb.ai/ to:

  • View your model evaluations
  • View your traces

UI​

The following section describes how to use the Inference service from the W&B UI. Before you can access the Inference service via the UI, complete the prerequisites.

Access the Inference service​

You can access the Inference service via the Weave UI from two locations, or by navigating directly to https://wandb.ai/inference:

From the Inference tab​

  1. Navigate to your W&B account at https://wandb.ai/.
  2. From the left sidebar, select Inference. A page with available models and model information displays.

The Inference tab

From the Playground tab​

  1. From the left sidebar, select Playground. The Playground chat UI displays.
  2. From the LLM dropdown list, hover over W&B Inference. A dropdown with available W&B Inference models displays to the right.
  3. From the W&B Inference models dropdown, select a model to try it in the Playground, or select Compare to compare models.

The Inference models dropdown in Playground

Try a model in the Playground​

Once you've selected a model using one of the access options, you can try the model in the Playground.

Using an Inference model in the Playground

Compare multiple models​

You can compare multiple Inference models in the Playground. The Compare view can be accessed from two different locations:

Access the Compare view from the Inference tab​

  1. From the left sidebar, select Inference. A page with available models and model information displays.
  2. To select models for comparison, click anywhere on a model card (except for the model name). The border of the model card is highlighted in blue to indicate the selection.
  3. Repeat step 2 for each model you want to compare.
  4. In any of the selected cards, click the Compare N models in the Playground button (N is the number of models you are comparing. For example, when 3 models are selected, the button displays as Compare 3 models in the Playground). The comparison view opens.

Now, you can compare models in the Playground, and use any of the features described in Try a model in the Playground.

Select multiple models to compare in Playground

Access the Compare view from the Playground tab​

  1. From the left sidebar, select Playground. The Playground chat UI displays.
  2. From the LLM dropdown list, hover over W&B Inference. A dropdown with available W&B Inference models displays to the right.
  3. From the dropdown, select Compare. The Inference tab displays.
  4. To select models for comparison, click anywhere on a model card (except for the model name). The border of the model card is highlighted in blue to indicate the selection.
  5. Repeat step 4 for each model you want to compare.
  6. In any of the selected cards, click the Compare N models in the Playground button (N is the number of models you are comparing. For example, when 3 models are selected, the button displays as Compare 3 models in the Playground). The comparison view opens.

Now, you can compare models in the Playground, and use any of the features described in Try a model in the Playground.

View billing and usage information​

Organization admins can track the current Inference credit balance, usage history, and upcoming billing (if applicable) directly from the W&B UI:

  1. In the W&B UI, navigate to the W&B Billing page.
  2. In the bottom right-hand corner, the Inference billing information card displays. From here, you can:
     • Click the View usage button to view your usage over time.
     • If you're on a paid plan, view your upcoming Inference charges.

Usage information and limits​

The following section describes important usage information and limits. Familiarize yourself with this information before using the service.

Geographic restrictions​

The Inference service is only accessible from supported geographic locations. For more information, see the Terms of Service.

Concurrency limits​

To ensure fair usage and stable performance, the W&B Inference API enforces rate limits at the user and project level. These limits help:

  • Prevent misuse and protect API stability
  • Ensure access for all users
  • Manage infrastructure load effectively

If a rate limit is exceeded, the API will return a 429 Concurrency limit reached for requests response. To resolve this error, reduce the number of concurrent requests.
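One common way to handle this 429 response is to retry with exponential backoff. The sketch below is a minimal, generic helper, not part of the W&B SDK: it takes the exception types to retry on, so you can pass `openai.RateLimitError` (which the OpenAI Python client raises on HTTP 429) when wrapping Inference calls.

```python
import random
import time

def with_retries(fn, retry_on: tuple, max_attempts: int = 5,
                 base: float = 1.0, cap: float = 30.0):
    """Call fn(), retrying on the given exception types.

    Waits base * 2**attempt seconds between attempts (capped at `cap`),
    with random jitter to avoid synchronized retries. Re-raises the last
    exception if all attempts fail.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            time.sleep(min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.0))

# Example usage with the OpenAI client (openai.RateLimitError maps to HTTP 429):
# response = with_retries(
#     lambda: client.chat.completions.create(model="<model-id>", messages=messages),
#     retry_on=(openai.RateLimitError,),
# )
```

Backoff with jitter reduces the chance that many concurrent clients retry at the same instant and hit the limit again together.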

Pricing​

For model pricing information, visit http://wandb.com/site/pricing/inference.

API errors​

| Error Code | Message | Cause | Solution |
|------------|---------|-------|----------|
| 401 | Invalid Authentication | Invalid authentication credentials, or your W&B project entity and/or name are incorrect. | Ensure the correct API key is being used and that your W&B project name and entity are correct. |
| 403 | Country, region, or territory not supported | Accessing the API from an unsupported location. | See Geographic restrictions. |
| 429 | Concurrency limit reached for requests | Too many concurrent requests. | Reduce the number of concurrent requests. |
| 429 | You exceeded your current quota, please check your plan and billing details | Out of credits or reached monthly spending cap. | Purchase more credits or increase your limits. |
| 500 | The server had an error while processing your request | Internal server error. | Retry after a brief wait and contact support if it persists. |
| 503 | The engine is currently overloaded, please try again later | Server is experiencing high traffic. | Retry your request after a short delay. |
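When handling these errors in code, note that the two 429 cases call for different responses: the concurrency limit is transient and can be retried, while the quota error requires buying credits or raising limits. A minimal sketch of that distinction (the function name and message-matching heuristic are illustrative, not part of the API):

```python
def is_retryable(status_code: int, message: str = "") -> bool:
    """Return True for errors that a short retry can plausibly fix.

    500 and 503 are transient server-side errors. A 429 is retryable only
    when it is the concurrency limit; a quota-exhaustion 429 (its message
    mentions "quota") will not clear on retry.
    """
    if status_code in (500, 503):
        return True
    if status_code == 429 and "quota" not in message.lower():
        return True
    # 401 and 403 require fixing credentials or location, not retrying.
    return False
```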