Guide
Carbon-aware scheduling
Shift deferrable workloads — data-pipeline batches, ML training, EV charging, water heating — toward low-carbon hours on the Indian grid. End-to-end example using /carbon-intensity/latest and /forecast/carbon-intensity.
A ~30-line Python script that picks the cleanest window in the next 24 hours to run a batch job, cutting its embedded emissions by 10–25% at zero cost.
Why shift load
India's grid carbon intensity swings between ~550 gCO₂/kWh at solar-heavy midday and ~800+ gCO₂/kWh in the late evening coal peak. Any workload that doesn't need to finish at a specific instant — a nightly ETL, model training, document indexing — can be scheduled against that curve for meaningful carbon savings with zero performance penalty.
Biggest wins
The best candidates are (1) long-running and (2) latency-insensitive. A 6-hour model fine-tune that runs "sometime tonight" can land in a deep trough instead of the evening peak; a 300 g/kWh trough against a 750 g/kWh peak is a >2x reduction in embedded emissions for the same compute cost.
The approach
- Fetch the next 24 hours of forecasted carbon intensity from /forecast/carbon-intensity.
- Find the contiguous window that minimises average intensity over your job's estimated duration.
- Schedule the job to start at the window's start time.
- (Optional) Poll /carbon-intensity/latest while running and log actual intensity for your green-ops dashboard.
Implementation
Minimal Python example. Assumes ATLAS_API_KEY is set in the environment and belongs to a Pro tier or above (forecasts require Pro).
import os
import datetime as dt

import requests

API = "https://api.energymap.in/developer/v1"
HEADERS = {"X-API-Key": os.environ["ATLAS_API_KEY"]}

def best_start_time(job_hours: int) -> dt.datetime:
    """Return the UTC start time of the cleanest window of length `job_hours`."""
    r = requests.get(
        f"{API}/forecast/carbon-intensity",
        params={"horizon_h": 24},
        headers=HEADERS,
        timeout=10,
    )
    r.raise_for_status()
    forecast = r.json()["forecast"]
    best_idx, best_avg = 0, float("inf")
    for i in range(len(forecast) - job_hours + 1):
        window = forecast[i : i + job_hours]
        avg = sum(p["intensity_gco2_per_kwh"] for p in window) / job_hours
        if avg < best_avg:
            best_avg, best_idx = avg, i
    start = dt.datetime.fromisoformat(forecast[best_idx]["ts"].replace("Z", "+00:00"))
    print(f"cleanest {job_hours}h window: starts {start.isoformat()}, avg {best_avg:.0f} gCO2/kWh")
    return start

def current_intensity() -> int:
    r = requests.get(f"{API}/carbon-intensity/latest", headers=HEADERS, timeout=10)
    r.raise_for_status()
    return r.json()["intensity_gco2_per_kwh"]

if __name__ == "__main__":
    start_at = best_start_time(job_hours=6)
    # Wire up to systemd-run --on-calendar, Airflow, or your scheduler of choice.
    # For crontab: convert start_at to local time and `at <time>` the job.
For Airflow, wrap best_start_time in a short PythonOperator at the top of your DAG and gate the downstream task group on its output with a DateTimeSensor, as sketched below. For systemd, systemd-run --on-calendar accepts an absolute timestamp, so start_at can be passed through directly.
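A minimal sketch of that Airflow pattern, assuming Airflow 2.4+ and that the script above is importable as a module named carbon_scheduler (the module name, dag_id, and task ids are all illustrative):

import datetime as dt

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.sensors.date_time import DateTimeSensor

from carbon_scheduler import best_start_time  # the function defined above

with DAG(
    dag_id="carbon_aware_batch",
    start_date=dt.datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Pick the cleanest 6 h window once per run; the return value lands in XCom.
    pick_window = PythonOperator(
        task_id="pick_window",
        python_callable=lambda: best_start_time(job_hours=6).isoformat(),
    )

    # Sleep (in reschedule mode, freeing the worker slot) until the chosen start time.
    wait_for_window = DateTimeSensor(
        task_id="wait_for_window",
        target_time="{{ ti.xcom_pull(task_ids='pick_window') }}",
        mode="reschedule",
    )

    run_job = PythonOperator(
        task_id="run_job",
        python_callable=lambda: print("batch job goes here"),
    )

    pick_window >> wait_for_window >> run_job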
Verifying the savings
Log the intensity at the moment each scheduled job starts and compare against a naive baseline (e.g. "ran at 22:00 IST every night"). After a month of runs you typically see a 10–25% lower average intensity: higher on days with a clear midday solar peak, lower on overcast days when the curve is flatter.
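A minimal logging sketch, assuming the current_intensity helper from the implementation above is importable; the module name, file path, and column layout are illustrative:

import csv
import datetime as dt

from carbon_scheduler import current_intensity  # the helper defined above

LOG_PATH = "carbon_runs.csv"  # consumed by your green-ops dashboard

def log_job_start(job_name: str) -> None:
    """Append UTC timestamp, job name, and the intensity observed at start."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [dt.datetime.now(dt.timezone.utc).isoformat(), job_name, current_intensity()]
        )

def average_logged_intensity(path: str = LOG_PATH) -> float:
    """Mean observed intensity across all logged runs, to set against the baseline."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return sum(float(row[2]) for row in rows) / len(rows)

Computing the same average over the intensities seen at your old fixed start time for the same month gives the realised saving.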
Reporting
If you need to report this in carbon accounting, multiply hours × kW × intensity to get gCO₂. Sample /carbon-intensity/latest at job start and end and average the two readings rather than relying on the forecast; forecasts carry a ±5% error that measured values do not.
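A worked instance of that formula, with illustrative figures (a 6-hour job drawing an average of 0.4 kW, start/end readings of 550 and 570 gCO₂/kWh):

def job_emissions_g(hours: float, avg_power_kw: float, intensity_g_per_kwh: float) -> float:
    """hours x kW gives kWh; kWh x gCO2/kWh gives grams of CO2."""
    return hours * avg_power_kw * intensity_g_per_kwh

# Average the /carbon-intensity/latest readings taken at job start and end:
print(job_emissions_g(6, 0.4, (550 + 570) / 2))  # 6 * 0.4 * 560 = 1344.0 g, ~1.3 kg CO2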
