Livepeer uses two core node types—Gateways and Orchestrators—that work together to provide real-time AI video compute at scale. Although closely connected, they serve entirely different purposes. This page breaks down how they differ and why both roles matter for a decentralized compute marketplace.

Overview

Role          Function                                         Performs GPU Work?   External-Facing?
Gateway       Job intake, pricing, routing, capability match   ❌ No                ✅ Yes
Orchestrator  GPU compute, inference, transcoding, BYOC        ✅ Yes               ❌ No
Gateways coordinate.
Orchestrators compute.
Together, they form the backbone of the Livepeer AI video pipeline.

Gateway Responsibilities

Gateways act as the front door to the network:
  • Receive jobs from applications
  • Determine required model, pipeline, or GPU
  • Select the best orchestrator based on performance and pricing
  • Route the workload with low latency
  • Return results to the client
  • Publish marketplace offerings (models, pipelines, cost per frame, etc.)
Gateways provide service intelligence, not compute.
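The selection step above can be sketched as a capability filter followed by a ranking on latency and price. The types, field names, and weighted score below are illustrative assumptions for this sketch, not go-livepeer's actual selection algorithm.

```go
package main

import "fmt"

// Orchestrator is a hypothetical record a Gateway might keep for each
// known orchestrator; the fields and weights are illustrative only.
type Orchestrator struct {
	Addr         string
	LatencyMs    float64 // recent round-trip latency
	PricePerUnit float64 // advertised price (e.g. per frame)
	HasModel     bool    // supports the requested model/pipeline
}

// selectBest filters candidates by capability, then ranks them with a
// simple weighted score that prefers low latency and low price.
func selectBest(orchs []Orchestrator) (Orchestrator, bool) {
	best, found := Orchestrator{}, false
	bestScore := 0.0
	for _, o := range orchs {
		if !o.HasModel {
			continue // capability match comes first
		}
		// Lower latency and lower price yield a higher score.
		score := 1.0 / (o.LatencyMs + 100.0*o.PricePerUnit + 1.0)
		if !found || score > bestScore {
			best, bestScore, found = o, score, true
		}
	}
	return best, found
}

func main() {
	candidates := []Orchestrator{
		{"orch-a", 40, 0.5, true},
		{"orch-b", 25, 0.5, true},
		{"orch-c", 10, 0.5, false}, // fastest, but lacks the model
	}
	if o, ok := selectBest(candidates); ok {
		fmt.Println("routing job to", o.Addr)
	}
}
```

Note that the fastest node is skipped entirely: capability matching is a hard filter, while latency and price are soft ranking signals.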

Orchestrator Responsibilities

Orchestrators are GPU node operators that run:
  • Real-time AI inference
  • Daydream / ComfyStream pipelines
  • BYOC containers
  • Traditional transcoding
They provide:
  • GPU horsepower
  • Model execution
  • Deterministic and verifiable output
  • Performance guarantees
They do not expose external APIs directly—Gateways handle that.
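Since an orchestrator can serve several workload types at once, a Gateway needs some representation of what each node offers. A minimal sketch of such a capability set follows; the `Capabilities` type, its fields, and the `CanServe` method are hypothetical names for illustration, not go-livepeer's real capability model.

```go
package main

import "fmt"

// Capabilities is a hypothetical summary of what one orchestrator
// advertises: AI pipelines, BYOC containers, and classic transcoding.
type Capabilities struct {
	Pipelines    map[string]bool // e.g. ComfyStream pipelines, BYOC container IDs
	Transcoding  bool            // traditional video transcoding
	PricePerUnit float64         // advertised price for this node's work
}

// CanServe reports whether this node can take a given job type.
func (c Capabilities) CanServe(job string) bool {
	if job == "transcode" {
		return c.Transcoding
	}
	return c.Pipelines[job]
}

func main() {
	node := Capabilities{
		Pipelines:    map[string]bool{"comfystream": true, "byoc:my-container": true},
		Transcoding:  true,
		PricePerUnit: 0.4,
	}
	fmt.Println(node.CanServe("comfystream"), node.CanServe("text-to-speech"))
}
```

In this sketch the orchestrator never exposes `CanServe` as a public API; the Gateway evaluates it on the orchestrator's behalf when matching jobs, which mirrors the division of labor described above.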

How They Work Together