Live Demo — Revenue Experiment Analyzer

Compound Beta × LogNormal model with VBEM mixture decomposition. Pick a scenario or upload your own CSV.



How It Works

Full posterior distributions over treatment effects, not binary significance decisions.

Compound Models

Joint Beta × LogNormal decomposition separates conversion rate effects from revenue-per-converter effects. Handles zero-inflated data naturally.
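To make the decomposition concrete, here is a minimal sketch of the arithmetic (illustrative numbers only, not the Tyche API): per-user expected revenue factors into conversion rate times revenue-per-converter, and for a LogNormal value model the latter has a closed-form mean.

```typescript
// Illustrative arithmetic only — not Tyche's API.
// E[revenue/user] = P(convert) * E[revenue | convert]
// For LogNormal(mu, sigma), E[value] = exp(mu + sigma^2 / 2).
const p = 0.12;                // conversion rate (e.g. a Beta posterior mean)
const mu = 3.0;                // LogNormal location for revenue per converter
const sigma = 0.8;             // LogNormal scale
const meanValue = Math.exp(mu + (sigma * sigma) / 2); // ≈ 27.66 per converter
const revenuePerUser = p * meanValue;                 // ≈ 3.32 per user
console.log(revenuePerUser.toFixed(2));
```

Because the zeros are absorbed by the conversion term, the value model only ever sees positive revenue, which is what makes zero-inflated data "natural" here.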

VBEM Mixtures

Revenue distributions often contain subpopulations (casual vs. premium buyers). VBEM fits a LogNormal mixture rather than forcing a single distribution, with Dirichlet posteriors over component weights.
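A rough sketch of where the component-weight uncertainty comes from (hypothetical counts, not Tyche's internals): VBEM accumulates soft responsibilities per component, and adding them to a Dirichlet prior concentration gives the posterior over weights, whose means are simple ratios.

```typescript
// Illustrative only — Dirichlet posterior over mixture weights.
// With prior concentration alpha0 and per-component soft counts N_k,
// the variational posterior is Dirichlet(alpha0 + N_k).
const alpha0 = 1;
const counts = [820, 180];                        // responsibilities summed per component
const alphas = counts.map((n) => alpha0 + n);     // [821, 181]
const total = alphas.reduce((a, b) => a + b, 0);  // 1002
const meanWeights = alphas.map((a) => a / total); // ≈ [0.819, 0.181]
console.log(meanWeights.map((w) => w.toFixed(3)));
```

The full Dirichlet posterior (not just the means) is what yields credible intervals on each weight, like the `weightCI` values shown in the output below.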

Conjugate Priors

Beta-Binomial and LogNormal-Normal conjugate updates give analytical posteriors. No MCMC sampling needed for standard problems.
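The Beta-Binomial case is small enough to show in full. This is a generic sketch of the conjugate update, not Tyche's API: a Beta(α, β) prior plus s successes and f failures gives a Beta(α + s, β + f) posterior, analytically.

```typescript
// Generic Beta-Binomial conjugate update (illustrative, not Tyche's API).
interface BetaParams {
  alpha: number;
  beta: number;
}

// Posterior after observing `successes` and `failures`:
// Beta(alpha + successes, beta + failures) — no sampling required.
function updateBeta(prior: BetaParams, successes: number, failures: number): BetaParams {
  return { alpha: prior.alpha + successes, beta: prior.beta + failures };
}

const prior: BetaParams = { alpha: 1, beta: 1 };  // uniform prior
const post = updateBeta(prior, 48, 352);          // 48 conversions out of 400
const postMean = post.alpha / (post.alpha + post.beta); // ≈ 0.122
console.log(post, postMean.toFixed(3));
```

The LogNormal-Normal case is the same idea applied to log-revenue: a Normal prior on the log-mean updates in closed form against the observed log values.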

Client-Side

All inference runs in the browser. Zero server costs, embeddable anywhere. The demo above is the real library.

Technical Details

Status: Beta (breaking changes expected)

  • TypeScript core with analytical conjugate updates (Beta-Binomial, LogNormal-Normal)
  • VBEM mixture decomposition fits LogNormal mixtures when revenue contains subpopulations
  • Compound model router automatically selects Beta × LogNormal for zero-inflated revenue data
  • D3 visualization components (trad-charts) for posterior inspection — ridge dotplots, density overlays, CI bands
// ModelRouter inspects data shape and selects inference strategy
const route = await ModelRouter.route(data);
// → { config: { structure: 'compound', frequencyType: 'beta',
//      valueType: 'lognormal' }, engine: CompoundInferenceEngine,
//      reasoning: ['User-level data with conversions detected',
//                  'Zero-inflated: 88% zeros → compound model'] }

// CompoundInferenceEngine decomposes into Beta × LogNormal
const result = await route.engine.fit(data, route.config);

// Posterior decomposition — each component is a full distribution
const { frequency, severity } = result.getDecomposition();
frequency.sample(10000);  // Beta posterior over conversion rate
severity.sample(10000);   // LogNormal (or mixture) posterior over $/converter

// When revenue has subpopulations, VBEM fits a mixture
// rather than forcing one LogNormal onto bimodal data
const components = result.getComponents();
// → [{ weight: 0.82, mean: 24, weightCI: [0.78, 0.86],
//      posterior: LogNormalDistribution },
//    { weight: 0.18, mean: 152, weightCI: [0.14, 0.22],
//      posterior: LogNormalDistribution }]

Motivation

I wanted a Bayesian experiment analysis tool that shows the full posterior — not just a p-value or a "significant" badge, but the compound decomposition, mixture structure, and uncertainty at every level. Tyche is that tool.

It runs entirely in the browser because that's kind of neat, and because it means zero hosting costs — I can give it away, embed it anywhere (like the demo above), and never worry about server bills.

Status & Docs

Active development. The inference engine is stable; the API is still changing.

Related: Stoa Stack

The Python counterpart (pytche) is part of Stoa, my commerce intelligence platform. Pytche extends the same Bayesian inference approach with heterogeneous treatment effect discovery via causal forests — finding which customer segments respond differently to a treatment, with proper uncertainty at each leaf. That's the current focus of my work.