Worker Scheduling Benchmark Report ¶
Date: 2026-03-29
Benchmark: OnceDailyWorker vs EveryMinuteWorker — subscription expiry processing
Executive Summary ¶
This benchmark tests whether running a subscription expiry worker once per day versus every minute makes a measurable difference in database load, cost, CPU, and memory — when both strategies produce exactly the same outcome.
The answer is clear: the every-minute pattern is strictly worse across every dimension measured. With 2,000 subscriptions and a 20% expiry rate, EveryMinuteWorker issued 1,440× as many database reads, consumed 148–731% more CPU, allocated 292% more memory, and cost 36–166% more depending on the pricing model — all to produce the same result as a single daily run.
The benchmark was run across three rounds to isolate different cost dimensions: Round 1 establishes a zero-latency baseline using net heap memory; Round 2 introduces 1ms simulated network latency per DB call to model cloud database overhead; Round 3 repeats Round 1 but switches memory measurement to total allocated bytes (GC.GetTotalAllocatedBytes) to remove GC distortion and reveal true allocation pressure.
The absolute dollar cost gap is minimal at this sample size (4,000 records across 2 tables), but the percentage overhead is consistent and grows with data volume and network latency. At just 1ms of simulated cloud network latency, the time gap between the two workers grew from +49% to +166%. At realistic cloud latencies of 5–10ms, the gap would be in the hundreds of percent.
The root cause is a single architectural decision: polling when you should be scheduling.
Results are specific to this hardware. Absolute timing figures will differ on other machines, but the percentage ratios and directional conclusions are expected to hold. Note: the default Windows timer resolution is 15.6ms, so `Task.Delay(1)` in Round 2 may sleep longer than 1ms, potentially inflating latency figures.
What This Benchmark Tests ¶
Two workers with identical logic, different schedules:
- OnceDailyWorker — runs once per day, fetches all expired subscriptions in a single query, and bulk-updates them in one operation.
- EveryMinuteWorker — runs every minute (1,440 times per day), executing the same fetch-and-update logic on each iteration.
When a counsellor's last active subscription expires, their role is automatically downgraded from Counsellor → User.
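The shared fetch-and-update logic both workers run can be sketched as follows — a minimal, language-agnostic Python analogue (the benchmark itself is .NET); the dict-based "database" and function name here are illustrative assumptions, not the benchmark's actual code:

```python
def process_expirations(db, now):
    """Shared fetch-and-update logic: expire subscriptions, downgrade counsellors.

    `db` is a dict standing in for the real tables; field names mirror the report
    (CycleEnd, Status, Role) but are otherwise illustrative.
    """
    expired = [s for s in db["subscriptions"]
               if s["status"] == "Active" and s["cycle_end"] < now]
    for sub in expired:
        sub["status"] = "Expired"
        sub["updated_at"] = now
        # Downgrade only when no active subscription remains for this counsellor.
        still_active = any(s["status"] == "Active" and
                           s["counsellor_id"] == sub["counsellor_id"]
                           for s in db["subscriptions"])
        if not still_active:
            db["counsellors"][sub["counsellor_id"]]["role"] = "User"
    return len(expired)

# OnceDailyWorker calls this once per day; EveryMinuteWorker calls the same
# function 1,440 times per day. After the first call finds the expired rows,
# the remaining 1,439 calls find nothing.
```

The schedule is the only difference: the same function, invoked once versus 1,440 times.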
Key Findings ¶
1. Both workers produce identical results ¶
Both workers expire the same 400 subscriptions and downgrade the same 400 counsellors (800 total UPDATE commands — one per subscription, one per counsellor role change). Every difference in the tables below is pure overhead — not additional correctness.
2. 1,439 out of 1,440 reads are wasted ¶
| Worker | SELECTs/day | Useful | Wasted |
|---|---|---|---|
| OnceDailyWorker | 1 | 1 | 0 |
| EveryMinuteWorker | 1,440 | 1 | 1,439 |
After the first iteration finds and updates the expired subscriptions, every subsequent iteration queries the database and finds nothing — yet still pays the cost of the round trip.
3. Per-request cost is consistently +36% higher ¶
Model A (DynamoDB) shows a stable +36% cost difference across all three rounds regardless of latency. Per-request pricing is latency-blind — the gap is fixed at 1,439 extra reads per day. Since reads are cheaper than writes, the relative increase is modest despite the volume difference.
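Plugging the Appendix B per-request rates into the daily operation counts reproduces the +36% figure; a quick arithmetic check, separate from the benchmark harness itself:

```python
READ_RATE = 0.25 / 1_000_000   # $ per SELECT (DynamoDB-style, Appendix B)
WRITE_RATE = 1.25 / 1_000_000  # $ per UPDATE

def daily_cost(selects, updates):
    return selects * READ_RATE + updates * WRITE_RATE

once = daily_cost(1, 800)        # ≈ $0.001000
every = daily_cost(1440, 800)    # = $0.001360
print(f"{every / once - 1:.0%}")  # → 36%
```

Because the 800 writes dominate and are five times the read price, 1,439 extra reads move the total by only 36%.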
4. Compute-time cost compounds with latency ¶
Adding just 1ms of network latency per DB call pushed the execution time gap from +49% to +166%. The 1,439 wasted SELECTs each pay the latency tax:
| Latency | Extra time/day | RDS cost increase |
|---|---|---|
| 0ms (local) | +579ms | +49% |
| 1ms (light cloud) | +11,731ms | +166% |
| 5ms (typical cloud) | ~+50,000ms est. | ~+600% est. |
| 10ms (cross-region) | ~+100,000ms est. | ~+1,200% est. |
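The 5ms and 10ms rows are estimates; they follow from a linear extrapolation of the two measured points (0ms and 1ms), which can be sketched as:

```python
gap_0ms, gap_1ms = 579, 11_731   # measured extra time/day (ms), Rounds 1 & 2
slope = gap_1ms - gap_0ms        # extra gap per 1ms of added latency

def est_gap(latency_ms):
    # Linear model: fixed overhead plus a latency tax on every wasted call.
    return gap_0ms + slope * latency_ms

print(est_gap(5), est_gap(10))   # 56,339 and 112,099 ms, consistent with the
                                 # ~+50,000 / ~+100,000 estimates in the table
```

The slope (~11,152ms of gap per 1ms of latency) reflects every extra round trip paying the latency tax, not just the 1,439 wasted SELECTs.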
5. CPU usage is 148–731% higher ¶
Round 1 showed +148% CPU for EveryMinuteWorker. Round 3 (GC-corrected) showed +731%. The difference between rounds is explained by Finding 6 — in Rounds 1 and 2, much of the CPU work is the garbage collector, not just the worker logic.
Notably, OnceDailyWorker CPU dropped from 1,171ms (Round 1) to 656ms (Round 2) under latency — because with Task.Delay the process spends more time waiting on I/O than doing CPU work.
6. EveryMinuteWorker allocates 292% more memory (3,305% higher memory cost) ¶
Each of the 1,440 iterations allocates a `List<Subscription>`. After the first, the remaining 1,439 return empty lists — but the allocation still happens and immediately becomes garbage.
Round 3 (using GC.GetTotalAllocatedBytes()) reveals the true allocation pressure:
| Worker | Total allocated (KB) | Difference |
|---|---|---|
| OnceDailyWorker | 7,955 | baseline |
| EveryMinuteWorker | 31,159 | +23,204 KB (+292%) |
Rounds 1 and 2 show a negative net memory delta for EveryMinuteWorker — the GC fires aggressively and reclaims more than the worker currently holds. This is not an error; it is the garbage collector working overtime to keep pace with the allocation rate, which also explains the elevated CPU cost.
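The distinction between the two memory metrics can be illustrated with a toy Python analogue (the benchmark itself is .NET; `sys.getsizeof` here stands in for the CLR's allocation counter):

```python
import sys

def simulate(iterations=1440, first_batch=400):
    """Contrast 'total allocated' vs 'live at end' on a polling-style loop."""
    total_allocated = 0   # analogue of GC.GetTotalAllocatedBytes(): every allocation counts
    for i in range(iterations):
        # First run returns the expired batch; the other 1,439 return empty lists.
        batch = list(range(first_batch)) if i == 0 else []
        total_allocated += sys.getsizeof(batch)
        # `batch` is dropped each iteration: it is all garbage, yet it was all allocated.
    live_at_end = 0       # analogue of net heap delta: nothing survives the loop
    return total_allocated, live_at_end
```

Total allocation grows with every iteration while the live figure stays near zero — the same divergence that lets Rounds 1–2 report negative deltas while Round 3 reports +292%.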
Results ¶
Round 1 — No network latency (0ms) | Mem = net heap delta ¶
All cost figures are per day.
| Worker | SELECTs | UPDATEs | Time (ms) | Model A (DynamoDB) | Model B (RDS) | CPU (ms) | CPU Cost | Mem (KB) | Memory Cost |
|---|---|---|---|---|---|---|---|---|---|
| OnceDailyWorker | 1 | 800 | 1,178 | $0.001000 | $0.000007 | 1,171 | $0.000002 | 888 | $0.00001743 |
| EveryMinuteWorker | 1,440 | 800 | 1,757 | $0.001360 | $0.000010 | 2,906 | $0.000006 | -7,579 | ($0.00022194) |
| % difference | | | +49% | +36% | +49% | +148% | | | -1,373% |
Negative memory indicates the GC reclaimed more than the worker currently held. See Finding 6.
Round 2 — Simulated network latency (1ms/call) | Mem = net heap delta ¶
All cost figures are per day.
| Worker | SELECTs | UPDATEs | Time (ms) | Model A (DynamoDB) | Model B (RDS) | CPU (ms) | CPU Cost | Mem (KB) | Memory Cost |
|---|---|---|---|---|---|---|---|---|---|
| OnceDailyWorker | 1 | 800 | 7,070 | $0.001000 | $0.000039 | 656 | $0.000001 | -2,076 | ($0.00024462) |
| EveryMinuteWorker | 1,440 | 800 | 18,801 | $0.001360 | $0.000104 | 2,468 | $0.000005 | -10,392 | ($0.00325634) |
| % difference | | | +166% | +36% | +166% | +276% | | | -1,231% |
Extra time cost of EveryMinuteWorker with 1ms latency: ~11,731ms for the same result.
Round 3 — No network latency (0ms) | Mem = total allocated bytes (GC-aware) ¶
All cost figures are per day.
| Worker | SELECTs | UPDATEs | Time (ms) | Model A (DynamoDB) | Model B (RDS) | CPU (ms) | CPU Cost | Mem (KB) | Memory Cost |
|---|---|---|---|---|---|---|---|---|---|
| OnceDailyWorker | 1 | 800 | 62 | $0.001000 | $0.000000 | 62 | $0.000000 | 7,955 | $0.00000822 |
| EveryMinuteWorker | 1,440 | 800 | 539 | $0.001360 | $0.000003 | 515 | $0.000001 | 31,159 | $0.00027991 |
| % difference | | | +769% | +36% | +769% | +731% | | | +3,305% |
Extra memory allocated by EveryMinuteWorker: ~23,204 KB for the same result.
`GC.GetTotalAllocatedBytes()` captures every allocation regardless of GC reclaims, giving a true picture of allocation pressure.
Conclusion ¶
For workloads where the processing window is naturally daily, running a worker once per day is strictly better than running it every minute across every dimension:
| Dimension | OnceDailyWorker | EveryMinuteWorker |
|---|---|---|
| DB reads/day | 1 | 1,440 |
| Wasted reads | 0 | 1,439 |
| Time (no latency) | baseline | +49% |
| Time (1ms latency) | baseline | +166% |
| CPU usage | baseline | +148–731% |
| Memory allocated | baseline | +292% (+3,305% cost) |
| Cost (DynamoDB) | baseline | +36% |
| Cost (RDS, no latency) | baseline | +49% |
| Cost (RDS, 1ms latency) | baseline | +166% |
The only scenario where a more frequent schedule adds value is when time precision matters — for example, if a subscription expiring at 14:32 must immediately revoke access. In that case, the right answer is an event-driven trigger (e.g., a message queue or per-subscription scheduled job), not a polling loop.
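That event-driven shape can be sketched as a per-subscription timer rather than a polling loop — an illustrative in-process sketch only; a production system would use a delayed message queue or a job scheduler, and the function names here are assumptions:

```python
import heapq

def schedule_expiries(subscriptions, now):
    """Build a min-heap of (expiry_time, id): wake exactly when something expires."""
    heap = [(s["cycle_end"], s["id"]) for s in subscriptions if s["cycle_end"] > now]
    heapq.heapify(heap)
    return heap

def next_wakeup(heap):
    # Sleep until the earliest expiry instead of polling every minute:
    # zero wasted reads, and a 14:32 expiry is handled at 14:32.
    return heap[0][0] if heap else None
```

The worker sleeps until `next_wakeup`, processes whatever expired, and re-arms — no iteration ever queries the database and finds nothing.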
Appendix ¶
A. Sample Size & Database Behaviour ¶
| Parameter | Value |
|---|---|
| Total counsellors | 2,000 |
| Total subscriptions | 2,000 (one per counsellor) |
| Expiry ratio | ~20% (~400 expired) |
| Active subscriptions | ~1,600 |
| Seed | Random(42) — fully reproducible |
Expiry distribution:
- Expired subscriptions have `CycleEnd` set to yesterday, spread randomly across all 24 hours of the day.
- Active subscriptions have `CycleEnd` set 1–30 days in the future at random times.
Database behaviour per worker run:
- Each run issues a `SELECT` with a `JOIN` on the `Counsellors` table to load expired subscriptions and their linked counsellors.
- For each expired subscription: `Status` → `Expired`, `UpdatedAt` → now.
- For each affected counsellor with no remaining active subscriptions: `Role` → `User`.
- A single `SaveChangesAsync()` persists all changes in one transaction.
Three rounds were run:
- Round 1 — 0ms latency, memory = net heap delta (`GC.GetTotalMemory`)
- Round 2 — 1ms simulated latency per DB call, memory = net heap delta
- Round 3 — 0ms latency, memory = total allocated bytes (`GC.GetTotalAllocatedBytes`)
B. Cost Assumptions ¶
All pricing is based on publicly available cloud pricing as of 2026.
Model A — Per-Request Pricing (DynamoDB-style) ¶
Applicable to: AWS DynamoDB, Azure Cosmos DB, PlanetScale, Supabase
| Operation | Rate |
|---|---|
| Read (SELECT) | $0.25 per million requests |
| Write (UPDATE) | $1.25 per million requests |
Model B — Compute-Time Pricing (RDS-style) ¶
Applicable to: AWS RDS, Azure SQL, Google Cloud SQL
| Parameter | Value |
|---|---|
| Instance | db.t3.micro |
| Rate | $0.02 / hour |
| Effective rate | $0.02 ÷ 3,600,000 ≈ $0.0000000056 per ms of uptime |
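The effective per-millisecond rate reproduces the Round 2 Model B column directly; a sanity check under the table's $0.02/hour assumption:

```python
RATE_PER_MS = 0.02 / 3_600_000          # ≈ $5.56e-9 per ms of db.t3.micro uptime

print(f"{7_070 * RATE_PER_MS:.6f}")     # → 0.000039  (Round 2, OnceDailyWorker)
print(f"{18_801 * RATE_PER_MS:.6f}")    # → 0.000104  (Round 2, EveryMinuteWorker)
```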
CPU Cost ¶
| Parameter | Value |
|---|---|
| Rate | $0.000000002 per CPU-ms |
| Basis | AWS Lambda ~$0.0000166667 per GB-second at 128MB |
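The per-CPU-ms rate follows from scaling the Lambda GB-second price down to a 128MB share and from seconds to milliseconds; a one-line derivation:

```python
LAMBDA_GB_SECOND = 0.0000166667    # $ per GB-second (AWS Lambda, per the basis above)
MEM_GB = 128 / 1024                # 128MB expressed as a fraction of a GB

per_cpu_ms = LAMBDA_GB_SECOND * MEM_GB / 1000
print(f"{per_cpu_ms:.1e}")         # → 2.1e-09, i.e. the $0.000000002/CPU-ms rate
```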
Memory Cost ¶
| Parameter | Value |
|---|---|
| Rate | $0.0000166667 per GB-second → converted to per KB-ms |
| Measurement | Heap delta (Rounds 1 & 2) or total allocated bytes (Round 3) × execution duration |
C. Hardware info ¶
| Parameter | Value |
|---|---|
| OS | Windows 11 Home Single Language |
| CPU | Intel Core i9-13900H (14 cores / 20 logical processors @ 2.6GHz) |
| RAM | 16 GB (2 × 8 GB) |
| Storage | NVMe Micron 2400 512GB |
| Runtime | .NET 10 |
| Database | SQLite (local file) |