Paper strategy · Cloudbet API
Cloudbet moneyline strategy: 30-day paper automation replay
A simulated 30-day Cloudbet moneyline strategy run using fixed rules, paper-only bet records, and hard exposure caps. The study describes automation behavior and rejection reasons, not real customer returns or a forecast of future profit.
Paper scenario — not customer returns · 428 evaluations · 37 simulated bets · 0 live wagers
- Evaluations: 428
- Paper bets: 37
- Live wagers: 0
- Cap rejections: 9
Paper scenario — not live performance. This case study uses a simulated 30-day strategy run against recorded Cloudbet market snapshots. No customer funds were used, no live bets were placed, and the numbers below are not a forecast of user profit.
The question
Can a simple moneyline strategy run through the Glitch Edge worker without turning into a noisy script? The purpose of this scenario was not to prove a market edge. It was to inspect automation behavior: filters, rejection reasons, paper ledger clarity, and cap enforcement.
The setup
The strategy allowed pre-match moneyline markets only. It rejected events starting within 20 minutes, prices outside the configured odds range, and markets where the same event already carried simulated exposure. Stake sizing used a fixed fraction of the paper bankroll, subject to a hard daily cap and a smaller per-event cap.
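The pre-stake filters described above can be sketched roughly as follows. All names, thresholds, and reason codes here are hypothetical; the case study does not publish the actual configuration or schema.

```typescript
// Hypothetical config shape; the real values are not given in the study.
interface StrategyConfig {
  minMinutesToStart: number; // reject events starting inside this window
  minOdds: number;           // lower bound of allowed decimal odds
  maxOdds: number;           // upper bound of allowed decimal odds
}

// Hypothetical snapshot shape for a recorded Cloudbet market.
interface MarketSnapshot {
  eventId: string;
  marketType: string;        // e.g. "moneyline"
  decimalOdds: number;
  minutesToStart: number;
}

// Illustrative reason codes, one per filter.
type FilterRejection =
  | "NOT_MONEYLINE"
  | "EVENT_WINDOW"
  | "ODDS_RANGE"
  | "EXISTING_EXPOSURE";

function checkFilters(
  snap: MarketSnapshot,
  cfg: StrategyConfig,
  exposedEvents: Set<string>, // events with simulated exposure already
): FilterRejection | null {
  if (snap.marketType !== "moneyline") return "NOT_MONEYLINE";
  if (snap.minutesToStart < cfg.minMinutesToStart) return "EVENT_WINDOW";
  if (snap.decimalOdds < cfg.minOdds || snap.decimalOdds > cfg.maxOdds) {
    return "ODDS_RANGE";
  }
  if (exposedEvents.has(snap.eventId)) return "EXISTING_EXPOSURE";
  return null; // passes all pre-stake rules
}
```

Returning a reason code, rather than a bare boolean, is what makes the rejection ledger entries possible later in the pipeline.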
The worker evaluated recorded Cloudbet snapshots on the same cadence a live strategy would use. When a market passed the rules, it wrote a paper bet to the ledger. When a guardrail blocked the action, it wrote a rejection with a reason code.
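A minimal sketch of the two ledger writes described above, a paper bet or a rejection with a reason code. The entry shapes and class name are assumptions; the real ledger schema is not shown in the study.

```typescript
// Illustrative ledger entry types: every evaluation outcome is recorded.
type LedgerEntry =
  | { kind: "paper_bet"; eventId: string; stake: number; odds: number; ts: number }
  | { kind: "rejection"; eventId: string; reason: string; ts: number };

class PaperLedger {
  private entries: LedgerEntry[] = [];

  // Written when a market passes the configured rules.
  recordBet(eventId: string, stake: number, odds: number): void {
    this.entries.push({ kind: "paper_bet", eventId, stake, odds, ts: Date.now() });
  }

  // Written when a guardrail blocks the action, with the reason code.
  recordRejection(eventId: string, reason: string): void {
    this.entries.push({ kind: "rejection", eventId, reason, ts: Date.now() });
  }

  count(kind: LedgerEntry["kind"]): number {
    return this.entries.filter((e) => e.kind === kind).length;
  }
}
```

Recording rejections alongside bets is what lets the replay report figures like 428 evaluations against 37 paper bets.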
What happened
Across 428 evaluations, 37 simulated bets passed the configured rules. Nine otherwise qualifying evaluations were rejected by cap logic. Most rejections were ordinary: the market moved out of range, the event window closed, or the strategy had already recorded exposure on the event.
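The cap logic that produced those nine rejections can be sketched along these lines. The check order, cap names, and numbers are illustrative assumptions, not the study's actual implementation.

```typescript
// Hypothetical cap configuration; real values are not published.
interface Caps {
  daily: number;    // hard daily exposure cap
  perEvent: number; // smaller per-event cap
}

// Runs only for evaluations that already passed the market filters.
function checkCaps(
  stake: number,
  dailyExposure: number, // simulated exposure recorded so far today
  eventExposure: number, // simulated exposure on this event
  caps: Caps,
): "DAILY_CAP" | "EVENT_CAP" | null {
  if (dailyExposure + stake > caps.daily) return "DAILY_CAP";
  if (eventExposure + stake > caps.perEvent) return "EVENT_CAP";
  return null; // within caps: the paper bet may be recorded
}
```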
That is the result we wanted. The ledger did not need to show a dramatic edge. It needed to show that the worker could say no, record why, and keep strategy behavior explainable.
What we changed
The replay exposed one product issue: cap rejections were technically correct but too terse for a user trying to debug strategy activity. We changed the copy pattern from a generic “cap exceeded” message to a more specific explanation: daily cap, event cap, or strategy allocation cap.
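The copy change above amounts to mapping each cap reason code to a specific message instead of one generic string. The exact wording and code names below are illustrative, not the shipped copy.

```typescript
// Illustrative reason codes for the three caps named in the study.
type CapReason = "DAILY_CAP" | "EVENT_CAP" | "ALLOCATION_CAP";

// One specific explanation per cap, replacing a generic "cap exceeded".
const capCopy: Record<CapReason, string> = {
  DAILY_CAP: "Rejected: this bet would exceed the strategy's daily exposure cap.",
  EVENT_CAP: "Rejected: this event already carries the maximum allowed exposure.",
  ALLOCATION_CAP: "Rejected: the strategy's overall allocation cap has been reached.",
};

function explainCapRejection(reason: CapReason): string {
  return capCopy[reason];
}
```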
That is a paper-mode win. A live user should not discover unclear rejection language during a real session.
Operational takeaway
This scenario supports the platform default: start in paper mode, inspect the ledger, then decide whether the strategy deserves live permission. A strategy that cannot explain its rejected bets is not ready for live automation, even if its backtest looked attractive.