Bayesian Memory

How multi-session parameter learning works, and the memory format.

TrueZone uses Bayesian accumulation to refine parameter estimates across multiple sessions. Each session produces a likelihood surface over (E, Vmax, P) space, which is combined with the prior from previous sessions to produce an updated posterior.

How it works

  1. First session: The SDK grid-searches over E, Vmax, P to find the best fit. The error surface is stored as the memory.
  2. Subsequent sessions: Pass the previous memory as input. The SDK combines the new session's likelihood with the prior, weighted by confidence and time decay.
  3. Convergence: After 3–10 sessions, parameters stabilize and confidence reaches a plateau.
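The session-chaining loop above can be sketched as follows. `analyze` is a hypothetical stand-in for a call to the TrueZone endpoint; it takes the session data and the prior memory and returns a result containing an updated memory:

```python
def run_sessions(sessions, analyze):
    """Feed each session's resulting memory into the next analysis.

    `analyze(session_data, memory)` is a placeholder for the TrueZone
    request; it must return a dict with an updated "memory" surface.
    """
    memory = None  # no prior exists before the first session
    results = []
    for session_data in sessions:
        result = analyze(session_data, memory)
        memory = result["memory"]  # this posterior is the next prior
        results.append(result)
    return results
```

The essential point is that the memory is strictly sequential: each call's output surface is the next call's prior.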

Memory format

```json
{
  "secsSinceEpoch": 1712000000,
  "confidence": 3.5,
  "minKey": "18504265",
  "minValue": 12.5,
  "cutoffValue": 15.0,
  "keys": [18504265, 18504266, 18604265, ...],
  "errorsX1000": [12500, 13200, 14100, ...]
}
```
| Field | Description |
| --- | --- |
| `secsSinceEpoch` | Timestamp of the last analysis, in seconds (used for time decay) |
| `confidence` | Accumulated confidence (0–10); higher means more certain |
| `minKey` | Key of the best parameter combination |
| `minValue` | Lowest error in the surface |
| `cutoffValue` | Error threshold for pruning |
| `keys` | Array of parameter combination keys (integer-encoded) |
| `errorsX1000` | Corresponding errors, multiplied by 1000 and stored as integers |
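The scaled error array can be unpacked as shown below. The field names come from the format above; the sample values are illustrative. Note that `minKey` is a string in the JSON while `keys` holds integers:

```python
memory = {
    "minKey": "18504265",
    "minValue": 12.5,
    "keys": [18504265, 18504266, 18604265],
    "errorsX1000": [12500, 13200, 14100],
}

# Undo the x1000 integer scaling to recover the error surface.
errors = [e / 1000 for e in memory["errorsX1000"]]

# The best parameter combination is the key with the lowest error;
# it should agree with the stored minKey/minValue.
best_index = min(range(len(errors)), key=errors.__getitem__)
assert memory["keys"][best_index] == int(memory["minKey"])
assert errors[best_index] == memory["minValue"]
```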

Key encoding

Each key encodes a (P, speed, E) triplet as a single integer. The speed component is an internal parameter — Vmax is derived from it and E.

key = P × 100000 + speed × 36 + E × 100

Confidence decay

Confidence decays over time to allow the model to adapt as fitness changes. The decay rate is set so that confidence drops by approximately 50% over one year of inactivity. Regular training maintains high confidence; extended breaks gradually allow the model to re-learn.
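The decay can be approximated as an exponential with a one-year half-life, per the description above (the exact decay constant is internal to the SDK; this is a sketch, not the implementation):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def decayed_confidence(confidence, secs_since_epoch, now_secs):
    """Approximate exponential decay with a ~one-year half-life."""
    elapsed = max(0.0, now_secs - secs_since_epoch)
    return confidence * 0.5 ** (elapsed / SECONDS_PER_YEAR)
```

For example, a confidence of 3.5 decays to roughly 1.75 after one year of inactivity.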

Bayesian update

posterior = (likelihood_confidence × likelihood + prior_confidence × decay × prior) / (likelihood_confidence + prior_confidence × decay)

The updated confidence is capped at 10.0. The memory retains up to 1000 parameter combinations, pruning low-probability regions.
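A minimal sketch of the update formula, applied elementwise over the keys shared by both surfaces. The confidence combination rule shown (adding the likelihood confidence to the decayed prior confidence, capped at 10.0) is an assumption for illustration; key alignment and pruning in the real SDK are more involved:

```python
def bayesian_update(likelihood, lik_conf, prior, prior_conf, decay):
    """Confidence-weighted average of two error surfaces (dicts key -> error)."""
    w_lik = lik_conf
    w_prior = prior_conf * decay
    total = w_lik + w_prior
    posterior = {
        key: (w_lik * likelihood[key] + w_prior * prior[key]) / total
        for key in likelihood.keys() & prior.keys()  # shared keys only
    }
    # Assumed combination rule: confidences add, capped at 10.0.
    new_confidence = min(10.0, w_lik + w_prior)
    return posterior, new_confidence
```

With equal confidences and no decay, the posterior is simply the midpoint of the two surfaces; as the prior decays, the new session's likelihood dominates.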

Usage

Pass memory between sessions:

```python
import json

import requests

# ENDPOINT and TOKEN are assumed to be defined elsewhere.

# Session 1
payload1 = {
    "settings": json.dumps({"fixedSpeed": False, "fit": True}),
    "data": json.dumps(session1_data),
}
result1 = requests.post(
    ENDPOINT, json=payload1, headers={"Authorization": f"Bearer {TOKEN}"}
).json()
memory = result1["memory"]

# Session 2 — pass the memory from session 1
payload2 = {
    "settings": json.dumps({"fixedSpeed": False, "fit": True}),
    "data": json.dumps(session2_data),
    "memory": json.dumps(memory),
}
result2 = requests.post(
    ENDPOINT, json=payload2, headers={"Authorization": f"Bearer {TOKEN}"}
).json()

# Use the Bayesian-refined estimates
true_e = result2["result"]["trueE"]
true_vmax = result2["result"]["trueVmax"]
true_p = result2["result"]["trueP"]
```

The `trueE`, `trueVmax`, and `trueP` fields are the posterior (accumulated) estimates. Always use these `true*` values for display and downstream calculations.