Diagram of actors here
About Gateways
Gateways are Livepeer’s off-chain coordination layer: the service nodes that applications connect to in order to submit real-time AI and video compute jobs. They provide the entry point for real workloads (AI inference, transcoding, BYOC containers) to reach the decentralized GPU network. Instead of talking directly to orchestrators, apps send workloads to a Gateway, which handles service discovery, capability matching, pricing, and low-latency routing. The Gateway selects the best orchestrator (a GPU compute node) based on GPU capacity, model support, performance, and availability, ensuring efficient execution of AI inference, transcoding, and BYOC pipelines. Gateways expose their offered services and enable a competitive marketplace, while orchestrators focus solely on GPU compute: running the job. Put simply: applications → Gateways → orchestrators → results.
Gateway Workflow
Gateway Services
- Accept jobs
- Match to orchestrators
- Expose capabilities, pricing, model support
- Enable a marketplace
- Provide APIs (for Daydream, BYOC, ComfyStream, StreamDiffusionTD)
Source: https://github.com/videoDAC/livepeer-gateway?tab=readme-ov-file
Gateways are not block producers, not orchestrators, and not on-chain components.
They are an infrastructure role that apps connect to for low-latency coordination.
Why Gateways Exist
Across the transcripts and ecosystem discussions, Gateways consistently appear in one context: they solve the “coordination layer” problem. Orchestrators handle compute (GPU jobs). Gateways handle:
- Job intake
- Routing
- Pricing
- Matching workloads to the right orchestrator
- Providing an interface for applications
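The intake and matching steps above can be sketched in a few lines. This is a minimal illustration of capability matching, not Livepeer’s actual discovery protocol; the `Orchestrator` fields and `match_orchestrators` helper are hypothetical names chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Orchestrator:
    """Advertised state of a GPU compute node (illustrative fields only)."""
    address: str
    models: set = field(default_factory=set)  # model IDs this node can serve
    gpu_free: int = 0                         # free GPU slots

def match_orchestrators(orchestrators, model_id):
    """Return orchestrators that advertise the requested model and have capacity."""
    return [o for o in orchestrators if model_id in o.models and o.gpu_free > 0]

pool = [
    Orchestrator("orch-a", {"sdxl", "whisper"}, gpu_free=2),
    Orchestrator("orch-b", {"whisper"}, gpu_free=0),
    Orchestrator("orch-c", {"sdxl"}, gpu_free=1),
]
candidates = match_orchestrators(pool, "sdxl")
print([o.address for o in candidates])  # → ['orch-a', 'orch-c']
```

A real Gateway would refresh this pool continuously from orchestrator advertisements rather than hold a static list.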
Why Gateways Matter
As Livepeer transitions into a high-demand, real-time AI network, Gateways become essential infrastructure. They enable:
- Low-latency workflows for Daydream, ComfyStream, and other real-time AI video tools
- Dynamic GPU routing for inference-heavy workloads
- A decentralized marketplace of compute capabilities
- Flexible integration via the BYOC pipeline model
Understanding Gateways
Gateways are the entry point for applications into the Livepeer decentralized compute network. They provide the coordination layer that connects real-time AI and video workloads to the orchestrators who perform the GPU compute.
What Gateways Do
Gateways handle all service-level logic required to operate a scalable, low-latency AI video network:
- Job Intake: receive workloads from applications using Livepeer APIs, PyTrickle, or BYOC integrations.
- Capability & Model Matching: determine which orchestrators support the required GPU, model, or pipeline.
- Routing & Scheduling: dispatch jobs to the optimal orchestrator based on performance, availability, and pricing.
- Marketplace Exposure: Gateway operators can publish the services they offer, including supported models, pipelines, and pricing structures.
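The routing and scheduling step boils down to ranking candidate orchestrators. The sketch below scores candidates by latency, price, and availability with fixed weights; the weights, field names, and `select_orchestrator` function are illustrative assumptions, not Livepeer’s actual selection algorithm.

```python
def select_orchestrator(candidates, weights=(0.5, 0.3, 0.2)):
    """Pick the highest-scoring orchestrator from a list of candidate dicts.

    Each candidate: {"addr", "latency_ms", "price_per_unit", "free_slots"}.
    Lower latency and price score higher; more free slots score higher.
    Weights and scoring formula are illustrative only.
    """
    w_lat, w_price, w_avail = weights

    def score(c):
        return (w_lat * (1.0 / (1 + c["latency_ms"]))
                + w_price * (1.0 / (1 + c["price_per_unit"]))
                + w_avail * c["free_slots"])

    return max(candidates, key=score)

best = select_orchestrator([
    {"addr": "orch-a", "latency_ms": 40, "price_per_unit": 1200, "free_slots": 1},
    {"addr": "orch-b", "latency_ms": 15, "price_per_unit": 900,  "free_slots": 3},
])
print(best["addr"])  # → orch-b (lower latency and price, more capacity)
```

In practice a Gateway would also factor in historical performance and stake-weighted reputation before dispatching.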
Relationship to Orchestrators
Orchestrators are GPU operators who execute the actual workload: transcoding, AI inference, or BYOC containers. Gateways route jobs to orchestrators, collect results, and return them to the application.
Applications → Gateway → Orchestrator → Gateway → Application
This separation allows:
- Clean abstraction for developers
- Efficient load balancing
- Competition and specialization across operators
- Support for a broad range of real-time AI pipelines
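The round trip Applications → Gateway → Orchestrator → Gateway → Application can be traced as a tiny simulation. Everything here is illustrative: `run_job`, the routing lambda, and the fake orchestrator callable stand in for real network calls.

```python
def run_job(job, gateway_route, orchestrators):
    """Trace one job through the Gateway round trip (simulated, no network)."""
    orch_name = gateway_route(job, orchestrators)         # Gateway: routing decision
    result = orchestrators[orch_name](job)                # Orchestrator: GPU compute
    return {"orchestrator": orch_name, "result": result}  # Gateway: return to app

orchestrators = {
    "orch-a": lambda job: f"transcoded:{job['stream']}",  # fake GPU work
}
route = lambda job, pool: next(iter(pool))                # trivial routing policy
out = run_job({"stream": "cam0"}, route, orchestrators)
print(out)  # → {'orchestrator': 'orch-a', 'result': 'transcoded:cam0'}
```

The point of the abstraction is visible even in this toy: the application only ever talks to `run_job` (the Gateway), never to an orchestrator directly.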
Summary
Gateways are the coordination and routing layer of the Livepeer ecosystem. They expose capabilities, price services, accept workloads, and dispatch them to orchestrators for GPU execution. This design enables a scalable, low-latency, AI-ready decentralized compute marketplace.
CALLOUT: YOU CAN RUN A GATEWAY & EARN!
Peter Schroedl’s talk describes the Gateway + Orchestrator stack as the system PyTrickle connects into, enabling apps to run arbitrary AI pipelines via Livepeer.
Demo of Livepeer Gateway single-click deployment with playback stream test: https://www.youtube.com/watch?v=csJjzoIw_pM
GWID, a fully managed DevOps platform for Livepeer: https://forum.livepeer.org/t/get-to-know-gwid-and-the-team-a-fully-managed-devop-platform-for-livepeer/2851
Gateway vs Orchestrator: What’s the Difference?
Livepeer uses two core node types, Gateways and Orchestrators, that work together to provide real-time AI video compute at scale. Although closely connected, they serve entirely different purposes. This page breaks down how they differ and why both roles matter for a decentralized compute marketplace.
Overview
| Role | Function | Performs GPU Work? | External-Facing? |
|---|---|---|---|
| Gateway | Job intake, pricing, routing, capability match | ❌ No | ✅ Yes |
| Orchestrator | GPU compute, inference, transcoding, BYOC | ✅ Yes | ❌ No |
Gateways coordinate; Orchestrators compute. Together, they form the backbone of the Livepeer AI video pipeline.
Gateway Responsibilities
Gateways act as the front door to the network:
- Receive jobs from applications
- Determine required model, pipeline, or GPU
- Select the best orchestrator based on performance and pricing
- Route the workload with low latency
- Return results to the client
- Publish marketplace offerings (models, pipelines, cost per frame, etc.)
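A published marketplace offering could look something like the structure below. The field names and values are assumptions invented for illustration, not a Livepeer wire format; the example only shows that an offering is structured data a Gateway operator can serialize and serve.

```python
import json

# Hypothetical shape of a Gateway operator's marketplace offering.
offering = {
    "gateway": "gw.example.com",
    "pipelines": [
        {"name": "realtime-sd", "models": ["streamdiffusion"], "price_per_frame_wei": 120},
        {"name": "transcode", "profiles": ["720p30", "1080p60"], "price_per_pixel_wei": 1},
    ],
}
payload = json.dumps(offering, indent=2)  # what a discovery endpoint might return
print(payload)
```

Applications could compare such offerings across Gateways to shop for supported models and pricing.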
Orchestrator Responsibilities
Orchestrators are GPU operators who run:
- Real-time AI inference
- Daydream / ComfyStream pipelines
- BYOC containers
- Traditional transcoding

They provide:
- GPU horsepower
- Model execution
- Deterministic and verifiable output
- Performance guarantees