AI compute market signals

Learn

What are Prometheus and Hyperion?

Meta’s multi-gigawatt AI campuses and what industrial-scale compute really means.

Prometheus and Hyperion are large Meta AI infrastructure projects built to support training and inference at a scale that reaches beyond a single ordinary data center. They matter because multi-gigawatt AI campuses show how compute is becoming an industrial system built from power, land, cooling, networking, and many coordinated facilities.

Project

Multi-gigawatt scale

These projects are designed at a size where compute becomes an industrial infrastructure problem.

Market

More than one building

At this scale, capacity depends on campuses, power systems, and coordinated deployment, not a single server hall.

Last reviewed

2026-05-18

Time-sensitive project details; verify primary sources.

Example

How compute scale changes

As AI infrastructure grows, the planning unit changes. The market moves from thinking about individual servers, to clusters, to data centers, to full campuses and regional infrastructure systems.

1. Server: individual hardware.

2. Cluster: many systems working together.

3. Data center: a facility supporting large deployments.

4. Campus: multiple facilities and power systems operating as one large compute platform.

Prometheus and Hyperion help explain what happens when compute becomes campus-scale infrastructure.
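The jump from single servers to gigawatt campuses can be made concrete with a rough back-of-envelope calculation. The figures below (per-server power draw, power usage effectiveness) are illustrative assumptions for the sketch, not specifications of any Meta project.

```python
# Back-of-envelope: roughly how many AI servers could a campus of a
# given size power? All inputs are illustrative assumptions.

def servers_supported(campus_gw: float,
                      kw_per_server: float = 10.0,  # assumed draw per AI server
                      pue: float = 1.3) -> int:     # assumed power usage effectiveness
    """Rough count of servers a campus of `campus_gw` gigawatts could run."""
    # PUE > 1 means some site power goes to cooling and overhead,
    # leaving only a fraction for the IT load itself.
    it_power_kw = campus_gw * 1_000_000 / pue
    return int(it_power_kw / kw_per_server)

for gw in (0.1, 1.0, 2.0):
    print(f"{gw:>4} GW campus ≈ {servers_supported(gw):,} servers")
```

Under these assumptions, each additional gigawatt supports on the order of tens of thousands of servers, which is why buildouts at this scale are planned around grid capacity rather than rack counts.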

Project

What Prometheus and Hyperion are

Prometheus and Hyperion are Meta AI infrastructure projects designed to support very large-scale compute. Prometheus represents the nearer-term cluster buildout, while Hyperion represents the larger follow-on campus scale that pushes the idea of AI infrastructure into multi-gigawatt territory.

  • They are designed for AI workloads at far larger scale than conventional enterprise computing.
  • They require multiple data-center buildings and highly coordinated infrastructure.
  • They show why power delivery, thermal management, and networking become system-level concerns.
  • They are useful examples of how frontier AI demand changes the physical form of compute.

Why it matters

Why multi-gigawatt compute matters

  • It shows that leading AI operators are planning capacity at industrial scale.
  • It connects compute growth directly to grid, land, construction, and regional infrastructure.
  • It raises the importance of deployment timing, not just headline chip counts.
  • It helps readers understand why future compute supply may increasingly be measured in campuses and gigawatts.

Common mistake

A bigger data center is not just more of the same

Once AI infrastructure reaches multi-gigawatt scale, the challenge changes. The problem is no longer only adding more servers; it is coordinating power, buildings, cooling, networking, and operations across a much larger physical system.

Hardware

More chips

Adds raw hardware capacity.

Systems

More infrastructure

Requires power, cooling, networking, and facilities to scale with it.

Scale

Industrial compute

What emerges when many facilities operate as one large AI platform.

Watchlist

What to watch next

  • Buildout timing and when each phase becomes operational.
  • Power sourcing, grid upgrades, and long-term energy strategy.
  • Whether capacity is concentrated in one campus or distributed across several facilities.
  • Chip deployment, networking design, and cooling readiness.
  • Whether other operators move toward similar campus-scale projects.

Keep learning

Related lessons

Infrastructure

Why power matters

Why electricity and site capacity shape AI compute markets.

Infrastructure

What is a data center?

The physical site where chips, power, cooling, networking, and operations come together.

Infrastructure

Why cooling matters

Why heat limits how densely AI chips can be deployed and operated.