
Model Serving APIs

Learn Model Serving APIs for the Machine Learning Engineer role for free: roadmap, examples, subskills, and a skill exam.

Published: January 1, 2026 | Updated: January 1, 2026

Why Model Serving APIs matter for Machine Learning Engineers

Model Serving APIs turn trained models into reliable products. As a Machine Learning Engineer, you design endpoints that accept inputs, run inference efficiently, return predictable outputs, and stay stable under load. This skill unlocks user-facing features (recommendations, ranking, fraud detection), internal tools (search, embeddings), and batch pipelines (nightly scoring, reporting).

  • Ship models as REST/JSON or gRPC services with clear contracts.
  • Make predictions fast (low latency) and at scale (high throughput).
  • Handle errors, retries, auth, rate limits, and versioned rollouts.
  • Control training–serving skew with robust preprocessing at inference time.

Who this is for

  • Machine Learning Engineers turning notebooks into production services.
  • Data Scientists learning to deploy models safely.
  • Backend Engineers integrating ML inference into systems.

Prerequisites

  • Comfortable with Python and basic web APIs (HTTP methods, status codes, JSON).
  • Familiar with a model framework (e.g., scikit-learn, PyTorch, TensorFlow).
  • Basic Linux CLI and packaging (virtualenv/conda, pip).

Learning path

Step 1 — Design endpoint contracts

Define request/response schemas, validation rules, and error formats. Decide REST/JSON vs gRPC based on latency and language needs.
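
For example, the contract for a hypothetical /v1/predict endpoint (the field names are illustrative and mirror the worked examples below) can be documented as example payloads:

# Request body
{"text": "You are nice", "request_id": "abc-123"}

# Success response (200)
{"label": "clean", "score": 0.08, "model_version": "v1"}

# Error response (4xx/5xx)
{"error_code": "BAD_REQUEST", "message": "text must be 1-2000 characters", "request_id": "abc-123"}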

Step 2 — Implement online inference

Load the model once at process startup; add preprocessing, a warm-up call, and a health endpoint. Measure p50/p95 latency locally, for example with a small client loop like the one below.
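
A minimal sketch of such a loop, assuming the service from worked example 1 is running on localhost:8000 with the demo API key:

import statistics
import time

import requests

URL = "http://localhost:8000/v1/predict"
HEADERS = {"X-API-Key": "local-demo-key"}

latencies_ms = []
for _ in range(200):
    start = time.perf_counter()
    requests.post(URL, json={"text": "You are nice"}, headers=HEADERS, timeout=3)
    latencies_ms.append((time.perf_counter() - start) * 1000)

p50 = statistics.median(latencies_ms)
p95 = statistics.quantiles(latencies_ms, n=20)[18]  # 95th percentile estimate
print(f"p50={p50:.1f}ms p95={p95:.1f}ms")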

Step 3 — Add batch inference

Support lists or files of inputs. Understand when batch processing is cheaper and when delayed results are acceptable.

Step 4 — Hardening

Add authentication, rate limits, idempotency, error handling, and structured logging.
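
Authentication, rate limiting, and retries are shown in the worked examples below; idempotency can be sketched with a small in-memory cache keyed by request_id (illustrative only; a production service would use Redis or a database), reusing the model, schema, and preprocess function from worked example 1:

# Hypothetical in-memory idempotency cache; repeated request_ids get the cached response
IDEMPOTENCY_CACHE: dict[str, dict] = {}

def predict_idempotent(payload: PredictRequest) -> dict:
    if payload.request_id and payload.request_id in IDEMPOTENCY_CACHE:
        return IDEMPOTENCY_CACHE[payload.request_id]
    x = preprocess(payload.text)
    proba = float(MODEL.predict_proba([x])[0][1])
    result = {"label": "toxic" if proba >= 0.5 else "clean", "score": proba, "model_version": MODEL_VERSION}
    if payload.request_id:
        IDEMPOTENCY_CACHE[payload.request_id] = result
    return result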

Step 5 — Performance

Tune concurrency, enable micro-batching where appropriate, and profile hotspots. Establish SLOs.
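
For profiling, a quick pass with cProfile over the inference path (reusing MODEL and preprocess from worked example 1 below) usually shows whether preprocessing or the model dominates:

import cProfile
import pstats

def profile_inference(n: int = 500) -> None:
    # Profile repeated single-item predictions to see where time goes (preprocess vs model)
    profiler = cProfile.Profile()
    profiler.enable()
    for _ in range(n):
        MODEL.predict_proba([preprocess("profile this text")])
    profiler.disable()
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)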

Step 6 — Versioning and rollouts

Expose versioned endpoints, implement canary releases, and capture metrics for safe promotion.

Worked examples

1) REST/JSON FastAPI service with validation, preprocessing, and warm-up
from fastapi import FastAPI, HTTPException, Header
from pydantic import BaseModel, Field, validator
import joblib
import logging
import time

app = FastAPI(title="ToxicityClassifier")

class PredictRequest(BaseModel):
    text: str = Field(..., min_length=1, max_length=2000)
    request_id: str | None = Field(None, description="For idempotency and tracing")

    @validator("text")
    def strip_spaces(cls, v):
        return v.strip()

class PredictResponse(BaseModel):
    label: str
    score: float
    model_version: str

MODEL = None
MODEL_VERSION = "v1"

# simple API key auth (demo only; load from an environment variable or secret store in production)
API_KEY = "local-demo-key"

def preprocess(text: str) -> str:
    # keep in sync with training; avoid training-serving skew
    return text.lower()

@app.on_event("startup")
def load_model():
    global MODEL
    MODEL = joblib.load("toxicity_model.joblib")
    # warm-up run to JIT/initialize caches
    _ = MODEL.predict_proba([preprocess("warmup")])

@app.get("/healthz")
def health():
    if MODEL is None:
        return {"status": "starting"}
    return {"status": "ok", "model_version": MODEL_VERSION}

@app.post("/v1/predict", response_model=PredictResponse)
def predict(payload: PredictRequest, x_api_key: str = Header(default="")):
    if x_api_key != API_KEY:
        raise HTTPException(status_code=401, detail="Invalid API key")
    try:
        x = preprocess(payload.text)
        proba = float(MODEL.predict_proba([x])[0][1])
        label = "toxic" if proba >= 0.5 else "clean"
        return {"label": label, "score": proba, "model_version": MODEL_VERSION}
    except Exception:
        # do not leak internals; log server-side and return a retryable error
        logging.exception("inference failed")
        raise HTTPException(status_code=503, detail="Temporary inference failure")

Try a request (the demo API key matches the one in the code above):

curl -s -X POST http://localhost:8000/v1/predict \
 -H 'Content-Type: application/json' \
 -H 'X-API-Key: local-demo-key' \
 -d '{"text":"You are nice"}'

2) Batch vs online inference (simple batch endpoint)
from typing import List
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

# Reuses MODEL and preprocess loaded by the service in example 1
app = FastAPI()

class BatchRequest(BaseModel):
    texts: List[str]

class BatchResponseItem(BaseModel):
    label: str
    score: float

@app.post("/v1/batch_predict")
def batch_predict(req: BatchRequest) -> list[BatchResponseItem]:
    results = []
    for t in req.texts:
        x = preprocess(t)
        proba = float(MODEL.predict_proba([x])[0][1])
        label = "toxic" if proba >= 0.5 else "clean"
        results.append({"label": label, "score": proba})
    return results

Use batch for non-interactive workloads (reports, nightly scoring) to reduce overhead and costs.

3) gRPC service for low-latency, strongly-typed inference
// toxicity.proto
syntax = "proto3";
package toxicity;

service Toxicity {
  rpc Predict (PredictRequest) returns (PredictResponse) {}
}

message PredictRequest { string text = 1; }
message PredictResponse { string label = 1; double score = 2; string model_version = 3; }
# server.py (simplified; reuses MODEL, preprocess, and MODEL_VERSION from example 1)
import grpc
from concurrent import futures
import toxicity_pb2, toxicity_pb2_grpc

class ToxicityServicer(toxicity_pb2_grpc.ToxicityServicer):
    def Predict(self, request, context):
        x = preprocess(request.text)
        proba = float(MODEL.predict_proba([x])[0][1])
        label = "toxic" if proba >= 0.5 else "clean"
        return toxicity_pb2.PredictResponse(label=label, score=proba, model_version=MODEL_VERSION)

server = grpc.server(futures.ThreadPoolExecutor(max_workers=8))
toxicity_pb2_grpc.add_ToxicityServicer_to_server(ToxicityServicer(), server)
server.add_insecure_port('[::]:50051')
server.start(); server.wait_for_termination()

gRPC provides compact Protobuf messages and bi-directional streaming for high-throughput scenarios.
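
A matching client sketch, assuming toxicity_pb2 and toxicity_pb2_grpc were generated from the .proto above with grpcio-tools:

# client.py (simplified)
import grpc
import toxicity_pb2, toxicity_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = toxicity_pb2_grpc.ToxicityStub(channel)
resp = stub.Predict(toxicity_pb2.PredictRequest(text="You are nice"), timeout=1.0)
print(resp.label, resp.score, resp.model_version)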

4) Latency and throughput: concurrency and micro-batching
# Run Uvicorn with multiple workers (CPU-bound models)
# uvicorn app:app --host 0.0.0.0 --port 8000 --workers 4

# Example of simple micro-batching aggregator (conceptual)
import time, threading
from queue import Queue, Empty

REQ_Q = Queue()   # incoming (request_id, text) tuples
RESP_Q = {}       # request_id -> probability, filled in by the worker
BATCH_SIZE, MAX_WAIT = 16, 0.01

def worker_loop():
    while True:
        start = time.time()
        items = []
        while len(items) < BATCH_SIZE and (time.time() - start) < MAX_WAIT:
            try:
                items.append(REQ_Q.get_nowait())
            except Empty:
                time.sleep(0.001)
        if not items:
            continue
        ids, payloads = zip(*items)
        X = [preprocess(t) for t in payloads]
        probas = MODEL.predict_proba(X)[:,1]
        for i, p in zip(ids, probas):
            RESP_Q[i] = float(p)

threading.Thread(target=worker_loop, daemon=True).start()

Micro-batching improves throughput for GPU/NN models but may add slight latency. Tune batch size and max wait.
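
For completeness, here is one way a request handler could hand work to the aggregator above: enqueue the text, then poll RESP_Q for the score (a sketch reusing REQ_Q and RESP_Q from the snippet above):

import time
import uuid

def predict_via_batcher(text: str, timeout: float = 0.5) -> float:
    # Enqueue the text for the batching worker, then poll RESP_Q until the score arrives
    req_id = uuid.uuid4().hex
    REQ_Q.put((req_id, text))
    deadline = time.time() + timeout
    while time.time() < deadline:
        if req_id in RESP_Q:
            return RESP_Q.pop(req_id)
        time.sleep(0.001)
    raise TimeoutError("batched prediction timed out")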

5) Robustness: errors, retries, idempotency, and simple rate limiting
from fastapi import Request
from fastapi.responses import JSONResponse
import time

# standardized error payload
# {"error_code":"BAD_REQUEST","message":"...","request_id":"..."}

# Simple token bucket per API key (in-memory demo)
from collections import defaultdict
TOKENS = defaultdict(lambda: {"tokens": 10, "last": time.time()})
RATE = 10  # tokens
INTERVAL = 1.0  # per second

@app.middleware("http")
async def rate_limit(request: Request, call_next):
    api_key = request.headers.get("X-API-Key", "")
    state = TOKENS[api_key]
    now = time.time()
    # refill
    state["tokens"] = min(RATE, state["tokens"] + (now - state["last"]) * (RATE/INTERVAL))
    state["last"] = now
    if state["tokens"] < 1:
        return JSONResponse(status_code=429, content={"error_code": "RATE_LIMIT", "message": "Too Many Requests"})
    state["tokens"] -= 1
    return await call_next(request)
# Client with retry + exponential backoff (transient 429/503)
import requests, time

def post_with_retry(url, json, headers, max_retries=4):
    delay = 0.25
    for i in range(max_retries):
        r = requests.post(url, json=json, headers=headers, timeout=3)
        if r.status_code in (429, 503):
            time.sleep(delay)
            delay *= 2
            continue
        return r
    return r

Use 4xx for client errors (validation/auth), 5xx for server/transient errors. Include a request_id for correlation in logs.
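
To make validation failures follow the same error shape, FastAPI allows overriding its handler for RequestValidationError; a minimal sketch using the payload format from the comment above:

from fastapi import Request
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse

@app.exception_handler(RequestValidationError)
async def validation_error_handler(request: Request, exc: RequestValidationError):
    # Replace FastAPI's default 422 body with the standardized error payload
    return JSONResponse(
        status_code=422,
        content={
            "error_code": "UNPROCESSABLE",
            "message": "Invalid request payload",
            "request_id": request.headers.get("X-Request-ID", ""),
        },
    )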

6) Versioned endpoints and canary releases
import random

@app.post("/v2/predict")
def predict_v2(payload: PredictRequest, x_api_key: str = Header(default="")):
    # Assume improved model and threshold
    # ... similar logic ...
    return {"label": "clean", "score": 0.42, "model_version": "v2"}

@app.post("/predict")
def stable_router(payload: PredictRequest, x_api_key: str = Header(default="")):
    # 10% canary to v2
    if random.random() < 0.10:
        return predict_v2(payload, x_api_key)
    return predict(payload, x_api_key)

Route a small percentage to v2, compare metrics/feedback, then gradually increase traffic if results meet targets.
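
A lightweight way to confirm the split is to count routed requests and log the ratio periodically; a sketch with a hypothetical record_route helper:

import logging
from collections import Counter

logger = logging.getLogger("canary")
ROUTE_COUNTS = Counter()

def record_route(version: str) -> None:
    # Count each routed request and log the observed split every 100 requests
    ROUTE_COUNTS[version] += 1
    total = sum(ROUTE_COUNTS.values())
    if total % 100 == 0:
        logger.info("canary split: %s", {v: round(c / total, 3) for v, c in ROUTE_COUNTS.items()})

Calling record_route("v2") or record_route("v1") just before each return in stable_router is enough to verify the 90/10 split in logs.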

Drills / exercises

  • Define a JSON schema for inputs that includes types, ranges, and required fields.
  • Add a /healthz and /readyz endpoint; simulate model not loaded and confirm readiness flips after warm-up.
  • Implement API key auth; ensure 401 for missing/invalid keys.
  • Return 400 for invalid payloads and 422 for unprocessable values; verify with tests.
  • Measure p50/p95 latency locally under load (e.g., multiple curl calls); record results.
  • Implement a simple rate limiter and observe 429 under sustained load.
  • Create /v1 and /v2 with a minor behavior change; add a 10% canary router.

Common mistakes and debugging tips

Training–serving skew

Symptoms: worse accuracy in production than in offline validation. Fix by sharing preprocessing code between training and serving or by packaging the same feature functions. Add unit tests comparing outputs on sample inputs.
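
A parity test along these lines makes the check automatic (assuming the service lives in app.py; training_pipeline is a hypothetical module standing in for your training code):

# test_preprocessing_parity.py (sketch; training_pipeline is hypothetical)
import pytest

from app import preprocess as serving_preprocess                  # serving-side function from example 1
from training_pipeline import preprocess as training_preprocess  # your training-side equivalent

SAMPLES = ["You are nice", "  MIXED Case  ", "punctuation!!! and emoji 🙂"]

@pytest.mark.parametrize("text", SAMPLES)
def test_training_and_serving_preprocess_match(text):
    assert serving_preprocess(text) == training_preprocess(text)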

Loading model per request

Symptoms: huge latency and high CPU. Fix by loading the model once at process startup and reusing it; add a warm-up call.

Unbounded concurrency

Symptoms: timeouts and 5xx under load. Fix by limiting worker count, adding backpressure, and returning 503 when overloaded so clients can retry.
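
One way to add that backpressure around the sync handlers above is a bounded semaphore that rejects excess requests with 503 instead of queueing them (a sketch reusing MODEL and preprocess from example 1):

import threading
from fastapi import HTTPException

# Allow at most 8 concurrent inference calls; reject the rest instead of queueing them
INFERENCE_SLOTS = threading.BoundedSemaphore(8)

def predict_with_backpressure(text: str) -> float:
    if not INFERENCE_SLOTS.acquire(blocking=False):
        raise HTTPException(status_code=503, detail="Overloaded, retry later")
    try:
        return float(MODEL.predict_proba([preprocess(text)])[0][1])
    finally:
        INFERENCE_SLOTS.release()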

Non-deterministic API responses

Symptoms: consumers cannot parse responses. Fix by pinning response schema and versioning any breaking changes (e.g., /v2).

Leaking sensitive data

Symptoms: logs contain PII or secrets. Fix by redacting inputs in logs, never logging API keys or raw personal data.
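
A small helper that logs only derived, non-sensitive fields is often enough; a sketch:

import hashlib
import logging

logger = logging.getLogger("inference")

def log_prediction(text: str, score: float, request_id: str | None) -> None:
    # Log a short fingerprint and the length of the input, never the raw text or API keys
    fingerprint = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    logger.info(
        "prediction served",
        extra={"request_id": request_id, "text_sha": fingerprint, "text_len": len(text), "score": round(score, 4)},
    )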

Mini project: Real-time toxicity API with batch, auth, and canary

Build a FastAPI service that exposes:

  • /healthz and /readyz endpoints
  • /v1/predict (REST/JSON) with Pydantic validation and API key auth
  • /v1/batch_predict for arrays of inputs
  • /v2/predict with a slightly different threshold
  • /predict router with 10% traffic to v2 (canary)
  • Basic rate limiting and structured error responses

Acceptance criteria

  • Cold start < 2s; p95 latency < 150ms on small inputs (local test).
  • Invalid payload returns 400 with clear error_code.
  • Unauthorized requests return 401; rate limited return 429.
  • Canary routing confirmed by counters in logs (90/10 split).
  • Batch endpoint returns results for all inputs and handles empty list with 400.

Practical projects

  • Embeddings service: expose /embed to return vector embeddings, plus /batch_embed with micro-batching.
  • Image classification gateway: REST + gRPC endpoints, multipart image upload, and GPU warm-up.
  • Feature flag rollout: build /v1 and /v2 recommenders with a gradual rollout controller and metrics logging.

Next steps

  • Instrument metrics (latency histograms, throughput, error rates) and set targets.
  • Add input/output schema snapshots to prevent accidental breaking changes.
  • Explore model monitoring and drift detection after serving is stable.

Model Serving APIs — Skill Exam

This timed exam checks your understanding of Model Serving APIs: contracts, performance, robustness, and safe rollouts. You can take it for free. Your progress and results are saved only if you are logged in; otherwise you can still complete the exam without saving. Rules: closed-book; no external tools required. Aim for 70% to pass. You can retake the exam.

15 questions · 70% to pass
