v-star (v*) – High-Performance Actuarial Engine in Go
I've been building v-star — a high-performance actuarial engine written entirely in Go, with zero dependencies. It handles concurrent financial simulations, mass policy valuations, and Monte Carlo interest rate modeling at speeds that would make a spreadsheet user cry.
The Name
The name v-star (v*) comes from actuarial notation. When an annuity's payments compound at rate j but are discounted at rate i, the adjusted discount factor is:
v* = (1 + j) · v
It's an inside joke from university — one my lecturer and my coursemates (brothers- and sisters-in-arms deployed to study Actuarial Science) will recognize. I figured if the math is niche enough, the project name should be too.
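As a worked example, the adjusted factor is a one-liner. This is a minimal sketch — the function name is illustrative, not v-star's actual API:

```go
package main

import "fmt"

// vStar returns the adjusted discount factor v* = (1 + j) * v,
// where v = 1 / (1 + i) is the ordinary discount factor at rate i
// and j is the rate at which payments compound.
func vStar(i, j float64) float64 {
	v := 1.0 / (1.0 + i)
	return (1.0 + j) * v
}

func main() {
	fmt.Printf("v* = %.6f\n", vStar(0.05, 0.02)) // → v* = 0.971429
}
```

With i = 5% and j = 2%, v* = 1.02/1.05 ≈ 0.9714, slightly more than the plain v ≈ 0.9524 because the compounding payments partially offset the discounting.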
Why Build This?
Most actuarial software is either expensive enterprise tools or fragile Excel models that break when you look at them wrong. I wanted something that:
- Has zero dependencies — just the Go standard library, no third-party packages
- Is genuinely fast — leveraging goroutines for concurrent valuations
- Is auditable — pure, readable math implementations, no black boxes
- Teaches me Go deeply — building real financial systems beats reading tutorials
What It Does
v-star has a few core capabilities:
Policy Valuation
Stream-processes CSV files of insurance policies and calculates present values using standard discount factors or the v-star adjusted factor. It handles 1M+ records in under 320ms with minimal memory usage — it streams data rather than loading everything into memory.
Monte Carlo Simulation
Generates interest rate paths for stochastic modeling. 100,000 paths with 10 time steps? Done in about 100ms. The simulation uses geometric Brownian motion with configurable drift and volatility parameters.
CLI-First Design
Everything runs from the command line. No web dashboard, no config files to edit — just flags and pipes:
```shell
# Calculate discount factors
./v-star -i 0.05 -j 0.02

# Read CSV and benchmark valuation speed
./v-star read policies.csv --benchmark

# Export results as JSON
./v-star read policies.csv --output=json

# Monte Carlo with 100k paths
./v-star montecarlo --paths=100000 --steps=10 --drift=0.02 --volatility=0.15
```
Performance
The numbers that matter:
- CSV Parsing: ~4.8M rows/sec — zero-allocation streaming parser
- Valuation: ~3.1M rows/sec — 1M records in 320ms
- Monte Carlo: ~100k paths/sec
- Memory: Minimal — streams everything, nothing buffered
I wrote Python benchmarks with both Pandas and Polars for comparison. Go's concurrency model makes a massive difference here — goroutines handle each policy independently, and the runtime schedules them efficiently across cores.
Architecture
The codebase is organized cleanly:
- cmd/ — CLI entry points
- pkg/ — Core packages (CSV parser, valuation engine, Monte Carlo simulator)
- docs/ — Documentation
The CSV parser is a custom streaming parser — no buffering entire files into memory. Records are parsed and processed as they arrive. The valuation engine calculates PVs concurrently using a worker pool pattern. Monte Carlo paths are simulated in parallel using goroutines with a seeded RNG.
Lessons Learned
Building v-star taught me several things about Go and actuarial engineering:
- Zero dependencies is a feature. No supply chain risks, no version conflicts, no "it works on my machine." If Go compiles, it runs.
- Goroutines make concurrency simple. What would be painful threading in Python or Java is a one-liner in Go.
- Streaming beats buffering. For large datasets, processing records one at a time is faster and uses less memory than loading everything first.
- CLI tools are underrated. A tool that takes flags, reads stdin, and writes stdout can be composed into any workflow.
What's Next
A few things I want to add:
- More actuarial functions — annuities, life tables, premium calculations
- Parallel Monte Carlo with variance reduction techniques
- A `bench` command that runs a full benchmark suite against the included test dataset
- Statistical analysis of Monte Carlo results — percentiles, confidence intervals
The repo is at github.com/lubasinkal/v-star. It's MIT licensed — feel free to use it, fork it, or break it.