v0.1.0a1 — alpha
Calibration infrastructure for RL training.
Most RL training runs today ship uncalibrated rewards. Verifiable Labs wraps any reward function with provable conformal coverage in three lines.
```python
import vlabs_calibrate as vc

calibrated = vc.calibrate(my_reward, traces, alpha=0.1)
result = calibrated(prompt=..., completion=..., sigma=0.5)
# → reward, interval, target_coverage
```
Drop-in replacement
Wrap any Python reward function with vc.calibrate(...). It returns a callable that emits a reward plus a conformal interval on every call.
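Under the hood, a wrapper like this amounts to split-conformal calibration. The sketch below is illustrative only, not the actual vlabs_calibrate implementation: it assumes traces are dicts carrying a "prompt", a "completion", and a ground-truth "label" reward, and it uses symmetric absolute-residual scores.

```python
import math

def calibrate(reward_fn, traces, alpha=0.1):
    # Illustrative sketch of split-conformal wrapping (not the real
    # vlabs_calibrate internals). Nonconformity score: the absolute
    # gap between the labeled reward and the model's reward.
    scores = sorted(abs(t["label"] - reward_fn(t["prompt"], t["completion"]))
                    for t in traces)
    n = len(scores)
    # Split-conformal quantile: the ceil((n+1)(1-alpha))-th smallest score.
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    q = scores[k - 1]

    def calibrated(prompt, completion):
        r = reward_fn(prompt, completion)
        # Reward, a symmetric conformal interval, and the target coverage.
        return r, (r - q, r + q), 1 - alpha

    return calibrated
```

The wrapped function stays a plain callable, so it can replace the raw reward anywhere in a training loop.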
Provable (1−α) coverage
Built on split-conformal prediction (Lei et al., 2018). Marginal coverage is guaranteed whenever calibration and test traces are exchangeable.
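Concretely, the split-conformal guarantee P(y ∈ C(x)) ≥ 1−α comes from taking the ⌈(n+1)(1−α)⌉-th smallest calibration score as the interval half-width. A one-line helper (an illustration, not part of the vlabs_calibrate API) makes the arithmetic visible:

```python
import math

def conformal_rank(n, alpha):
    # 1-based index of the order statistic used as the conformal quantile.
    return math.ceil((n + 1) * (1 - alpha))

# With 99 calibration traces and alpha = 0.1, the 90th smallest score is
# used, giving at least 90% marginal coverage under exchangeability.
```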
Hosted or self-host
Run `pip install vlabs-calibrate` to use it locally, or call the hosted API from production for usage metering and audit history.
Ready to calibrate your reward model?
Free tier covers 10,000 traces/month. No credit card required.