Gateways are the entry point for applications into the Livepeer decentralized compute network. They provide the coordination layer that connects real-time AI and video workloads to the orchestrators who perform the GPU compute.

What Gateways Do

Gateways handle all service-level logic required to operate a scalable, low-latency AI video network:
  • Job Intake
    They receive workloads from applications using Livepeer APIs, PyTrickle, or BYOC integrations.
  • Capability & Model Matching
    Gateways determine which orchestrators support the required GPU, model, or pipeline.
  • Routing & Scheduling
    They dispatch jobs to the optimal orchestrator based on performance, availability, and pricing.
  • Marketplace Exposure
    Gateway operators can publish the services they offer, including supported models, pipelines, and pricing structures.
Gateways do not perform GPU compute. Instead, they focus on coordination and service routing.
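The routing and scheduling step above can be sketched as a simple scoring function. This is a minimal illustration only, not the Livepeer implementation; the field names, the latency bound, and the "cheapest eligible orchestrator" policy are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Orchestrator:
    # Hypothetical fields a gateway might track per orchestrator.
    name: str
    supports_pipeline: bool   # capability/model match
    latency_ms: float         # measured performance
    price_per_unit: float     # advertised pricing
    available: bool           # currently accepting work

def select_orchestrator(candidates, max_latency_ms=500.0):
    """Pick the cheapest orchestrator that matches the job and meets the latency bound."""
    eligible = [
        o for o in candidates
        if o.supports_pipeline and o.available and o.latency_ms <= max_latency_ms
    ]
    return min(eligible, key=lambda o: o.price_per_unit) if eligible else None
```

A real gateway would weigh performance, availability, and pricing together rather than applying a single hard filter, but the shape of the decision is the same.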

Gateway Functions & Services

Learn More About Gateway Functions & Services

Why Gateways Matter

As Livepeer transitions into a high-demand, real-time AI network, Gateways become essential infrastructure. They enable:
  • Low-latency workflows for Daydream, ComfyStream, and other real-time AI video tools
  • Dynamic GPU routing for inference-heavy workloads
  • A decentralized marketplace of compute capabilities
  • Flexible integration via the BYOC pipeline model
Gateways simplify the developer experience while preserving the decentralization, performance, and competitiveness of the Livepeer network.

Summary

Gateways are the coordination and routing layer of the Livepeer ecosystem. They expose capabilities, price services, accept workloads, and dispatch them to orchestrators for GPU execution. This design enables a scalable, low-latency, AI-ready decentralized compute marketplace, positioning Livepeer to scale into a global provider of real-time AI video infrastructure.




Key Marketplace Features

1. Capability Discovery

Gateways and orchestrators list:
  • AI model support
  • Versioning and model weights
  • Pipeline compatibility
  • GPU type and compute class
Applications can programmatically choose the best provider.

2. Dynamic Pricing

Pricing can vary by:
  • GPU class
  • Model complexity
  • Latency SLA
  • Throughput requirements
  • Region
Gateways expose pricing APIs for transparent selection.
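How those pricing factors might combine into a quote can be sketched as below. The rate table, multipliers, and numbers are purely illustrative assumptions, not Livepeer pricing.

```python
# Hypothetical base rates keyed by (gpu_class, region); values are illustrative only.
BASE_RATES = {
    ("A100", "us-east"): 1.20,
    ("A100", "eu-west"): 1.35,
    ("RTX4090", "us-east"): 0.60,
}

def quote(gpu_class, region, latency_sla_ms, throughput_factor=1.0):
    """Adjust a base rate for SLA strictness and throughput demand (illustrative)."""
    base = BASE_RATES.get((gpu_class, region))
    if base is None:
        return None  # no offering for this GPU/region combination
    sla_multiplier = 1.5 if latency_sla_ms < 100 else 1.0  # strict SLAs cost more
    return round(base * sla_multiplier * throughput_factor, 2)
```

The point is that a single pricing API can fold GPU class, region, SLA, and throughput into one comparable number for transparent selection.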

3. Performance Competition

Orchestrators compete on:
  • Speed
  • Reliability
  • GPU quality
  • Cost efficiency
Gateways compete on:
  • Routing quality
  • Supported features
  • Latency
  • Developer ecosystem fit
This creates a healthy decentralized market.

4. BYOC Integration

Any container-based pipeline can be brought into the marketplace:
  • Run custom AI models
  • Run ML workflows
  • Execute arbitrary compute
  • Support enterprise workloads
Gateways advertise BYOC offerings; orchestrators execute containers.
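A BYOC advertisement might look like the sketch below. The field names and container reference are hypothetical, invented for illustration; they are not the actual BYOC manifest format.

```python
# Hypothetical BYOC offering a gateway might advertise on behalf of an orchestrator.
byoc_offer = {
    "name": "custom-upscaler",
    "container": "registry.example.com/team/upscaler:1.2",  # illustrative image ref
    "kind": "byoc",
    "inputs": ["video/h264"],
    "outputs": ["video/h264"],
}

def accepts(offer, input_format):
    """Check whether a BYOC offering can handle the requested input format."""
    return offer["kind"] == "byoc" and input_format in offer["inputs"]
```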

Protocol Overview

Understand the Full Livepeer Network Design

Marketplace Benefits

  • Developer choice — choose the best model, price, and performance
  • Economic incentives — better nodes earn more work
  • Scalability — network supply grows independently of demand
  • Innovation unlock — new models and pipelines can be added instantly
  • Decentralization — no single operator controls the workload flow

Summary

The Marketplace turns Livepeer into a competitive, discoverable, real-time AI compute layer.
  • Gateways expose services
  • Orchestrators execute them
  • Applications choose the best fit
  • Developers build on top of it
  • Users benefit from low-latency, high-performance AI


Gateway Architecture

Flow Diagram

Layered Architecture

Orchestrators are GPU operators who execute the actual workload—transcoding, AI inference, or BYOC containers. Gateways route jobs to orchestrators, collect results, and return them to the application.

Applications → Gateway → Orchestrator → Gateway → Application

This separation allows:
  • Clean abstraction for developers
  • Efficient load balancing
  • Competition and specialization across operators
  • Support for a broad range of real-time AI pipelines
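The round-trip above can be sketched end to end. The function names and job shape are invented for this illustration; in practice the gateway-to-orchestrator hop is a network call, not a local function call.

```python
def orchestrator_execute(job):
    # Stand-in for the GPU work (transcoding, inference, or a BYOC container).
    return {"job_id": job["job_id"], "result": f"processed:{job['payload']}"}

def gateway_dispatch(job, orchestrators):
    """Route the job to the first orchestrator that returns a result."""
    for execute in orchestrators:
        result = execute(job)
        if result is not None:
            return result  # gateway collects the result and returns it upstream
    raise RuntimeError("no orchestrator available")

# Application → Gateway → Orchestrator → Gateway → Application
response = gateway_dispatch({"job_id": 1, "payload": "frame-0"}, [orchestrator_execute])
```

Because the application only ever talks to the gateway, orchestrators can be swapped, load-balanced, or specialized without any change on the application side.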