Progress Tracking
Report real-time progress for long-running jobs. See how far along a job is without waiting for it to complete or SSHing into the server.
Progress tracking lets long-running jobs report real-time progress to DeadPing using query parameters. Pass progress, total, and message values with each ping to see how far a multi-step pipeline has advanced before it stalls or completes. Available on Pro and Business plans.
How It Works
Send pings during execution with progress, total, and message query parameters. Add status=in_progress to indicate the job is still running. The final ping (without status=in_progress) marks the job as complete.
Query Parameters
| Parameter | Type | Description |
|---|---|---|
| progress | number | Current progress value (e.g. 50) |
| total | number | Total expected value (e.g. 100) |
| message | string | Human-readable status message |
| status | string | Set to in_progress to indicate the job is still running |
Basic Example
TOKEN="YOUR_TOKEN"
# Report progress during execution
curl "https://deadping.io/api/ping/$TOKEN?progress=25&total=100&message=Loading+data&status=in_progress"
# ... more work ...
curl "https://deadping.io/api/ping/$TOKEN?progress=50&total=100&message=Processing+batch+5&status=in_progress"
# ... more work ...
curl "https://deadping.io/api/ping/$TOKEN?progress=75&total=100&message=Writing+results&status=in_progress"
# Final ping (no status param = complete)
curl "https://deadping.io/api/ping/$TOKEN?progress=100&total=100&message=Done"

ETL Pipeline Example
A data pipeline that processes records in batches and reports progress after each batch:
#!/bin/bash
set -euo pipefail
TOKEN="YOUR_TOKEN"
TOTAL_BATCHES=20
for i in $(seq 1 $TOTAL_BATCHES); do
  # Process batch
  python process_batch.py --batch=$i
  if [ $i -lt $TOTAL_BATCHES ]; then
    # Report progress (still running)
    curl -fsS "https://deadping.io/api/ping/$TOKEN?progress=$i&total=$TOTAL_BATCHES&message=Processing+batch+$i&status=in_progress"
  else
    # Final ping (complete)
    curl -fsS "https://deadping.io/api/ping/$TOKEN?progress=$i&total=$TOTAL_BATCHES&message=All+batches+complete"
  fi
done

Python Example
import requests
from urllib.parse import quote
TOKEN = "YOUR_TOKEN"
BASE = f"https://deadping.io/api/ping/{TOKEN}"
records = get_all_records() # Your data source
total = len(records)
batch_size = 1000
for i in range(0, total, batch_size):
    batch = records[i:i + batch_size]
    process_batch(batch)
    progress = min(i + batch_size, total)
    message = quote(f"Processed {progress}/{total} records")
    if progress < total:
        requests.get(
            f"{BASE}?progress={progress}&total={total}"
            f"&message={message}&status=in_progress"
        )
    else:
        requests.get(
            f"{BASE}?progress={progress}&total={total}"
            f"&message={message}"
        )

Dashboard View
When a job is in progress, the dashboard shows a live progress bar with the percentage complete and the latest status message. The monitor card displays the current progress and updates in real time via Supabase Realtime.
After completion, the monitor detail view shows a history of all progress updates for that run, so you can see how the job progressed over time.
Use Cases
- Data migrations – Track row counts as tables are migrated between databases
- Backup jobs – Report progress as each database or volume is backed up
- Batch processing – Monitor ETL pipelines processing thousands of records
- Report generation – Track progress as reports are compiled from multiple data sources
- ML training – Report epoch progress during model training runs
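For the ML training case, a minimal Python sketch of per-epoch progress pings, using the same query parameters as above. The train_epoch function is a hypothetical stand-in for your training loop, and the URL-building helper is illustrative, not part of the DeadPing API:

```python
from urllib.parse import urlencode

TOKEN = "YOUR_TOKEN"
BASE = f"https://deadping.io/api/ping/{TOKEN}"

def ping_url(progress, total, message, in_progress=True):
    """Build a progress ping URL; urlencode escapes the message safely."""
    params = {"progress": progress, "total": total, "message": message}
    if in_progress:
        params["status"] = "in_progress"
    return f"{BASE}?{urlencode(params)}"

def train_epoch(epoch):
    # Hypothetical stand-in for a real training step; returns a dummy loss.
    return 1.0 / epoch

EPOCHS = 3
for epoch in range(1, EPOCHS + 1):
    loss = train_epoch(epoch)
    url = ping_url(epoch, EPOCHS,
                   f"Epoch {epoch}/{EPOCHS} loss={loss:.3f}",
                   in_progress=(epoch < EPOCHS))
    # In a real job, send the ping: requests.get(url, timeout=10)
    print(url)
```

Because the final ping omits status=in_progress, the last epoch automatically marks the run complete.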
Combining with Output Capture
Progress tracking works alongside output capture. Send a POST request with both query parameters and a body:
curl -X POST \
"https://deadping.io/api/ping/$TOKEN?progress=100&total=100&message=Done&exit_code=0" \
-H "Content-Type: text/plain" \
-d "Migration complete. 42,000 rows processed in 3m 22s."
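The same combined request can be made from Python. A minimal sketch using only the standard library (the requests library works equally well); the helper function names are illustrative assumptions, not part of the DeadPing API:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

TOKEN = "YOUR_TOKEN"

def final_ping_url(exit_code=0):
    # Query string carrying the final progress values plus exit_code.
    params = urlencode({"progress": 100, "total": 100,
                        "message": "Done", "exit_code": exit_code})
    return f"https://deadping.io/api/ping/{TOKEN}?{params}"

def send_final_ping(summary, exit_code=0):
    # POST: progress in the query parameters, captured output in the body.
    req = Request(final_ping_url(exit_code),
                  data=summary.encode("utf-8"),
                  headers={"Content-Type": "text/plain"},
                  method="POST")
    return urlopen(req, timeout=10)

# Usage (performs a real HTTP request):
# send_final_ping("Migration complete. 42,000 rows processed in 3m 22s.")
```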