---
title: "Scheduling AI Tasks with OpenClaw Cron Jobs"
date: "2026-02-12"
description: "Learn how to schedule recurring AI tasks with OpenClaw's built-in cron system. Automate reports, monitoring, data processing, and more on a reliable schedule."
category: "Tutorial"
author: "OpenClaw Team"
tags: ["cron", "scheduling", "automation"]
readTime: "10 min"
---
The most powerful AI workflows are the ones that run without you. A daily digest email, an hourly uptime check, a weekly competitive analysis: these are tasks that should run on a schedule and land in your inbox while you sleep. OpenClaw's built-in cron system makes this straightforward: define a schedule, attach an agent task, and OpenClaw handles the rest.
This tutorial walks you through everything from basic cron syntax to production-grade scheduled pipelines with error handling, logging, and health monitoring.
What You'll Build
By the end of this tutorial, you'll have:
- A daily report agent that summarizes key metrics every morning
- An hourly monitoring agent that checks system health
- A weekly summary agent that aggregates data across the week
- Error handling and alerting so failed jobs don't go unnoticed
- A logging setup and cron health dashboard
Prerequisites
Before you begin, make sure you have:
- OpenClaw installed (installation guide)
- An API key for at least one LLM provider (Claude, OpenAI, or Ollama)
- Basic familiarity with cron syntax (we'll cover it below)
- A server or cloud environment where OpenClaw runs persistently (a $5 VPS works; see run OpenClaw on a $5 VPS)
Step 1: Understand Cron Syntax
If you've used cron before, skip ahead. If not, here's what you need to know.
A cron expression has five fields:
┌──────────── minute (0–59)
│ ┌────────── hour (0–23)
│ │ ┌──────── day of month (1–31)
│ │ │ ┌────── month (1–12)
│ │ │ │ ┌──── day of week (0–7, where 0 and 7 are Sunday)
│ │ │ │ │
* * * * *
Common patterns:
| Expression | Meaning |
|-----------------|-------------------------------|
| 0 8 * * * | Every day at 8:00 AM |
| 0 * * * * | Every hour on the hour |
| 0 8 * * 1 | Every Monday at 8:00 AM |
| */15 * * * * | Every 15 minutes |
| 0 9 1 * * | First day of each month, 9 AM |
| 0 8,12,17 * * 1-5 | 8 AM, noon, 5 PM on weekdays |
OpenClaw also supports named shortcuts:
schedule: "@daily" # Same as 0 0 * * *
schedule: "@hourly" # Same as 0 * * * *
schedule: "@weekly" # Same as 0 0 * * 0
schedule: "@monthly" # Same as 0 0 1 * *
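To build intuition for how these expressions expand, here is a small standalone Python sketch (illustrative only, not part of OpenClaw) that expands a single cron field such as `*/15`, `8,12,17`, or `1-5` into the concrete set of values it matches:

```python
def expand_field(field: str, lo: int, hi: int) -> set[int]:
    """Expand one cron field (e.g. "*/15", "8,12,17", "1-5", "*")
    into the set of integer values it matches within [lo, hi]."""
    values: set[int] = set()
    for part in field.split(","):
        part, _, step_s = part.partition("/")
        step = int(step_s) if step_s else 1
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            a, b = part.split("-")
            start, end = int(a), int(b)
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return values

print(expand_field("*/15", 0, 59))  # minutes field: {0, 15, 30, 45}
```

A full cron matcher just does this for all five fields and checks the current time against each set.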
Step 2: Define Your First Scheduled Task
OpenClaw cron jobs live in your configuration file. Let's start with a simple daily report.
# .openclaw/cron.yaml
jobs:
daily_report:
name: "Daily Metrics Report"
schedule: "0 8 * * *" # Every day at 8:00 AM
timezone: "America/New_York"
agent:
role: "Data Analyst"
model: claude-3-sonnet
instructions: |
Generate a concise daily report covering:
1. Key metrics from the past 24 hours (provided in context)
2. Notable trends or anomalies
3. Top 3 action items for the day
Format as a clean markdown document.
input:
source: metrics_api
endpoint: "https://your-api.com/metrics/daily"
output:
type: email
to: "team@yourcompany.com"
subject: "Daily Report - {{date}}"
Load and start the cron scheduler:
openclaw cron start --config .openclaw/cron.yaml
# Verify the job is scheduled
openclaw cron list
# Output:
# JOB ID NAME SCHEDULE NEXT RUN
# daily_report Daily Metrics Report 0 8 * * * 2026-02-13 08:00 EST
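Conceptually, the scheduler computes each job's next matching time, sleeps until then, and runs the agent. For the daily `0 8 * * *` case, the next-run computation reduces to something like this (an illustrative sketch, not OpenClaw internals):

```python
from datetime import datetime, timedelta

def next_daily_run(after: datetime, hour: int, minute: int) -> datetime:
    """Next occurrence of hour:minute strictly after `after`.
    This covers the daily "0 8 * * *" case; a real cron engine
    matches all five fields."""
    candidate = after.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= after:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate
```

Given the `NEXT RUN` output above, a check at 6 PM on Feb 12 would yield 8 AM on Feb 13, exactly as `openclaw cron list` reports.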
Step 3: Schedule Hourly Monitoring
Monitoring agents need to run frequently and alert fast. Here's a pattern for hourly health checks:
jobs:
hourly_monitor:
name: "System Health Monitor"
schedule: "0 * * * *" # Every hour
timezone: "UTC"
timeout: 300 # Kill the job if it takes more than 5 minutes
agent:
role: "System Monitor"
model: claude-3-haiku # Faster, cheaper model for frequent runs
instructions: |
Check the health data provided and determine:
1. Are all services responding within acceptable latency?
2. Are error rates within normal bounds?
3. Is there any anomaly that requires immediate attention?
If everything is healthy, respond with STATUS: OK.
If there is a warning condition, respond with STATUS: WARNING and details.
If there is a critical issue, respond with STATUS: CRITICAL and details.
input:
source: multi_endpoint
endpoints:
- name: api_latency
url: "https://your-api.com/health"
metric: response_time_ms
- name: error_rate
url: "https://your-api.com/metrics/errors/1h"
metric: rate_percent
- name: queue_depth
url: "https://your-api.com/queue/depth"
metric: count
output:
type: conditional
rules:
- condition: "STATUS: CRITICAL"
action: pagerduty_alert
severity: critical
- condition: "STATUS: WARNING"
action: slack_message
channel: "#ops-alerts"
- condition: "STATUS: OK"
action: log_only
The conditional output type lets you route results based on the agent's response, with no custom code required.
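The routing logic is easy to reason about. Here is a hypothetical Python sketch of what a conditional matcher might do; the rule names mirror the YAML above, but this is an illustration of the pattern, not OpenClaw's actual implementation:

```python
def route_output(response: str, rules: list[dict]) -> str:
    """Return the action of the first rule whose condition string
    appears in the agent's response; fall back to logging only."""
    for rule in rules:
        if rule["condition"] in response:
            return rule["action"]
    return "log_only"

# Rules mirror the YAML config above; order matters: most severe first.
RULES = [
    {"condition": "STATUS: CRITICAL", "action": "pagerduty_alert"},
    {"condition": "STATUS: WARNING", "action": "slack_message"},
    {"condition": "STATUS: OK", "action": "log_only"},
]
```

Because the instructions force the agent to emit an explicit `STATUS:` line, simple substring matching is enough; no parsing of free-form prose is needed.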
Step 4: Build a Weekly Summary Agent
Weekly summaries benefit from richer context and a more capable model since they run less frequently:
jobs:
weekly_summary:
name: "Weekly Business Summary"
schedule: "0 9 * * 1" # Every Monday at 9:00 AM
timezone: "America/Los_Angeles"
agent:
role: "Business Analyst"
model: claude-3-opus # Full capability for the weekly deep dive
instructions: |
You are preparing the weekly business summary for the leadership team.
Using the data provided, write a comprehensive summary including:
## Executive Summary (3–4 sentences)
## Key Metrics This Week
- Growth metrics vs. last week and last month
- Revenue / conversion highlights
- Top performing segments
## What Worked
List 3–5 things that drove positive results.
## What Needs Attention
List 2–3 areas that underperformed or carry risk.
## Next Week's Focus
Recommended priorities based on this week's data.
Keep the tone professional and data-driven.
input:
source: data_warehouse
query: |
SELECT *
FROM weekly_kpis
WHERE week_ending = DATE_TRUNC('week', CURRENT_DATE)
connection: $DATABASE_URL
output:
type: multi
destinations:
- type: email
to: "leadership@yourcompany.com"
subject: "Weekly Summary - Week of {{week_start}}"
format: html
- type: slack
channel: "#leadership"
format: markdown
truncate: 2000 # Slack message limit
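The `truncate` option caps the Slack message length. If you ever need the same behavior in your own code, a minimal sketch that cuts at a line boundary so markdown is not chopped mid-sentence (an assumed behavior for illustration, not OpenClaw's exact algorithm):

```python
def truncate_message(text: str, limit: int) -> str:
    """Truncate to at most `limit` characters, cutting at the last
    complete line and appending a marker to flag trimmed content."""
    if len(text) <= limit:
        return text
    marker = "\n[truncated]"
    cut = text.rfind("\n", 0, limit - len(marker))
    if cut == -1:  # no newline fits; hard-cut instead
        cut = limit - len(marker)
    return text[:cut] + marker
```

Cutting on a line boundary matters for markdown: a hard character cut can leave an unclosed code fence or a half-rendered link.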
Step 5: Handle Errors in Scheduled Tasks
Scheduled tasks run unattended, so robust error handling is essential.
# .openclaw/cron.yaml
jobs:
daily_report:
name: "Daily Metrics Report"
schedule: "0 8 * * *"
error_handling:
# Retry on transient failures
retry:
max_attempts: 3
delay: 60s # Wait 1 minute between retries
backoff: exponential # 1m, 2m, 4m
retry_on:
- timeout
- rate_limit
- api_error
# Alert when the job fails after all retries
on_failure:
type: multi
destinations:
- type: email
to: "oncall@yourcompany.com"
subject: "[ALERT] daily_report cron job failed"
include_logs: true
- type: slack
channel: "#alerts"
message: "daily_report failed after 3 retries. Check logs: {{log_url}}"
# Alert if the job hasn't run within the expected window
on_missed:
grace_period: 30m # Allow 30 minutes of slack
alert:
type: slack
channel: "#alerts"
message: "daily_report job was expected at 8:00 AM but has not run."
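The exponential backoff above doubles the wait on each retry (the `1m, 2m, 4m` in the comment). As a quick sanity check on the schedule the config implies, the delay before retry *n* is `delay * 2**(n-1)`:

```python
def backoff_delays(base_seconds: int, retries: int) -> list[int]:
    """Delay in seconds before each retry under exponential backoff:
    base, 2*base, 4*base, ... (matches the 1m, 2m, 4m in the config)."""
    return [base_seconds * 2 ** i for i in range(retries)]

print(backoff_delays(60, 3))  # [60, 120, 240]
```

This means a job configured as above gives up roughly seven minutes of cumulative retry delay after the first failure, which is worth knowing when you set `on_missed.grace_period`.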
You can also handle errors programmatically:
from openclaw import CronClient, JobError
client = CronClient(config=".openclaw/cron.yaml")
@client.on_error("daily_report")
def handle_report_error(job_id: str, error: JobError, attempt: int):
print(f"Job {job_id} failed (attempt {attempt}): {error}")
if attempt == 3:
# Final failure: escalate
send_pagerduty_alert(
title=f"Cron job {job_id} failed permanently",
details=str(error),
severity="high"
)
client.start()
Step 6: Configure Logging
Good logs make debugging cron failures fast. OpenClaw writes structured JSON logs by default.
# .openclaw/cron.yaml
logging:
level: info # debug, info, warn, error
format: json
output:
- type: file
path: /var/log/openclaw/cron.log
rotation:
max_size_mb: 100
max_files: 30 # Keep 30 days of logs
- type: stdout # Also stream to stdout for log aggregators
Query logs from the CLI:
# View logs for a specific job
openclaw cron logs daily_report --last 7d
# Filter for failures only
openclaw cron logs --level error --last 24h
# Follow live log output
openclaw cron logs --follow
# Export logs as JSON for analysis
openclaw cron logs daily_report --format json --output report-logs.json
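Because the logs are structured JSON (one object per line), they are also easy to slice with a few lines of Python when you want analysis beyond what the CLI offers. A sketch, assuming each record carries at least `level` and `job` fields (the exact schema may differ from your OpenClaw version):

```python
import json

def failed_runs(log_lines: list[str], job: str) -> list[dict]:
    """Parse JSON-lines log records and keep error-level entries
    for the given job."""
    records = [json.loads(line) for line in log_lines if line.strip()]
    return [r for r in records
            if r.get("level") == "error" and r.get("job") == job]
```

The same pattern extends naturally to counting failures per job or charting run durations over time.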
Step 7: Handle Timezones
Timezone bugs are subtle and painful. OpenClaw handles timezones explicitly at the job level.
jobs:
# This runs at 8 AM New York time, even during DST transitions
east_coast_report:
schedule: "0 8 * * *"
timezone: "America/New_York"
# This runs at 8 AM London time
uk_report:
schedule: "0 8 * * *"
timezone: "Europe/London"
# Use UTC explicitly for infrastructure jobs
backup_job:
schedule: "0 2 * * *"
timezone: "UTC"
Check when a job will actually run in your local time:
openclaw cron next daily_report --count 5
# Output:
# Next 5 runs for daily_report (America/New_York):
# 1. 2026-02-13 08:00:00 EST (in 14 hours)
# 2. 2026-02-14 08:00:00 EST (in 38 hours)
# 3. 2026-02-15 08:00:00 EST (in 62 hours)
# 4. 2026-02-16 08:00:00 EST (in 86 hours)
# 5. 2026-02-17 08:00:00 EST (in 110 hours)
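You can verify the DST behavior yourself with Python's standard `zoneinfo` module: 8 AM in New York maps to a different UTC hour in winter (EST, UTC-5) than in summer (EDT, UTC-4), which is exactly what a timezone-aware scheduler must account for:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# 8 AM local time on a winter date (EST) and a summer date (EDT)
winter = datetime(2026, 2, 13, 8, 0, tzinfo=ny).astimezone(timezone.utc)
summer = datetime(2026, 7, 13, 8, 0, tzinfo=ny).astimezone(timezone.utc)

print(winter.hour)  # 13 -- 8 AM EST is 13:00 UTC
print(summer.hour)  # 12 -- 8 AM EDT is 12:00 UTC
```

A scheduler pinned to UTC would drift by an hour across the DST transition; one pinned to `America/New_York` fires at 8 AM local time year-round.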
Step 8: Monitor Cron Health
A cron job that silently stops running is worse than one that fails loudly. Set up health monitoring to catch dead schedulers.
# .openclaw/cron.yaml
health:
endpoint:
enabled: true
port: 8080
path: /health
# Ping a dead-man's switch service after each successful run
heartbeat:
url: "https://hc-ping.com/your-uuid-here"
on: success
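The heartbeat pattern is easy to reproduce in your own code: run the job, and ping the dead-man's-switch URL only on success, so a missing ping signals either a failed job or a dead scheduler. A sketch where the hypothetical `ping` callable stands in for an HTTP GET to your healthchecks URL:

```python
def run_with_heartbeat(job, ping) -> bool:
    """Run `job`; call `ping()` only if it succeeds. A missed
    heartbeat then means a failure OR a scheduler that never ran."""
    try:
        job()
    except Exception:
        return False  # no ping: the monitoring service will alert
    ping()
    return True
```

Pinging only on success is the key design choice: pinging unconditionally would mask jobs that run but fail.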
Check the health endpoint:
curl http://localhost:8080/health
# Response:
# {
# "status": "healthy",
# "scheduler": "running",
# "jobs": {
# "daily_report": { "status": "ok", "last_run": "2026-02-12T08:00:01Z", "last_result": "success" },
# "hourly_monitor": { "status": "ok", "last_run": "2026-02-12T15:00:02Z", "last_result": "success" },
# "weekly_summary": { "status": "ok", "last_run": "2026-02-09T09:00:03Z", "last_result": "success" }
# },
# "uptime_seconds": 345600
# }
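A job that silently stops running shows up in this payload as a stale `last_run`. A small sketch of the check an external monitor might apply to the health JSON, assuming you know each job's expected interval:

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_run_iso: str, interval: timedelta,
             grace: timedelta, now: datetime) -> bool:
    """True if the job's last run is older than one expected
    interval plus a grace period."""
    last_run = datetime.fromisoformat(last_run_iso.replace("Z", "+00:00"))
    return now - last_run > interval + grace
```

Run this against the `/health` payload from your monitoring system and you catch the "scheduler died quietly" failure mode that in-process alerting cannot see.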
Set up a cron view in the OpenClaw dashboard:
# Print a live dashboard in the terminal
openclaw cron dashboard
# Output (refreshes every 30 seconds):
# ┌─────────────────────────────────────────────────────────────────┐
# │ OpenClaw Cron Dashboard                    2026-02-12 15:23 UTC │
# ├──────────────────┬────────────┬───────────┬─────────┬───────────┤
# │ Job              │ Schedule   │ Last Run  │ Result  │ Next Run  │
# ├──────────────────┼────────────┼───────────┼─────────┼───────────┤
# │ daily_report     │ 0 8 * * *  │ 08:00 EST │ SUCCESS │ Tomorrow  │
# │ hourly_monitor   │ 0 * * * *  │ 15:00 UTC │ SUCCESS │ 16:00 UTC │
# │ weekly_summary   │ 0 9 * * 1  │ Mon 09:00 │ SUCCESS │ Next Mon  │
# └──────────────────┴────────────┴───────────┴─────────┴───────────┘
Common Issues
Job runs at the wrong time
Always specify an explicit timezone. If you omit it, OpenClaw defaults to the server's local timezone, which may not match your expectation.
Job runs but produces no output
Check the output section of your job config. If the output destination (email server, Slack webhook) has a configuration error, the agent runs successfully but the result is silently dropped. Test outputs manually with openclaw cron test daily_report --dry-run.
Job takes too long and gets killed
Increase the timeout value, or break the job into smaller tasks. For heavy data processing, consider splitting into a "fetch" job and a "process" job that runs sequentially.
Duplicate runs after a server restart
Enable the built-in job lock:
jobs:
daily_report:
concurrency:
max_instances: 1 # Only one instance at a time
lock_backend: redis # Use Redis for distributed locking
lock_ttl: 3600
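Conceptually the lock just records that an instance is running and refuses a second start until the first finishes or the TTL expires. A toy, single-process sketch of that idea (OpenClaw's Redis backend is the distributed equivalent; this illustration is not its implementation):

```python
import time

class JobLock:
    """Toy in-process lock: acquire() fails while a holder exists
    and its TTL has not expired; release() frees it immediately."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.acquired_at = None

    def acquire(self):
        now = time.monotonic()
        if self.acquired_at is not None and now - self.acquired_at < self.ttl:
            return False  # another instance already holds the lock
        self.acquired_at = now  # take (or steal an expired) lock
        return True

    def release(self):
        self.acquired_at = None
```

The TTL is the safety valve: if a holder crashes without releasing, the lock self-expires instead of blocking the job forever, which is why `lock_ttl` should exceed your longest expected run.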
Next Steps
You now have a robust scheduled AI task system. Take it further with:
- Build a multi-agent system to turn your scheduled tasks into multi-step pipelines
- Security hardening guide to protect your cron configuration and API keys
- OpenClaw for code review to schedule nightly codebase scans
Scheduled AI tasks are the backbone of hands-off automation. Set them up once, and let OpenClaw do the work every day.