
In The Interest Of Things — v-star Updates

March 23, 2026 · #go #actuarial #performance #ai #vba #philosophy

It's been one of those weeks where you look back and realize you've spent the weekend doing something deeply specific, and you need to tell someone about it. So yeah, let me talk. This is about v-star — my high-performance actuarial engine in Go — and what happened this week. Some of it is technical. Some of it is philosophical. All of it is true enough.

What Actually Happened This Week

Between March 20 and 23, I made meaningful progress on v-star. Not the exceptional kind — no viral moments or product launches — just quiet, technical work that moved the project forward. Now here's the breakdown.

March 20 — Building the Foundation

I added four core actuarial packages in a single day. That's a lot of foundation laid at once:

  • pkg/rates — Handles interest rate calculations, discount factors, and rate conversions. The mathematical backbone of any actuarial calculation.
  • pkg/mortality — Mortality tables with qx/px calculations and CSV loading. Because insurance is fundamentally about measuring how long people live, and the math has to reflect that.
  • pkg/annuities — Whole life, term, and deferred annuity calculations. Payment streams that go on for years, sometimes decades. Getting the math right matters.
  • pkg/reserves — Net premium, gross premium, and both prospective and retrospective reserve calculations. This is what actuaries actually do — figure out how much money needs to be set aside to pay future claims.
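To make the math in these packages concrete, here's a minimal sketch — hypothetical function names, not v-star's actual API — of the kind of thing pkg/rates and pkg/annuities do: a discount factor, and the expected present value of a whole life annuity-due built from one-year survival probabilities.

```go
package main

import "fmt"

// DiscountFactor returns v^t = 1/(1+i)^t for an annual effective rate i.
func DiscountFactor(i float64, t int) float64 {
	v := 1.0
	for k := 0; k < t; k++ {
		v /= 1 + i
	}
	return v
}

// AnnuityDue computes the expected present value of 1 paid at the start
// of each year while alive, given px[k] = probability a life aged x+k
// survives one more year. (A real table would run to the end of life;
// this toy version just stops when the table does.)
func AnnuityDue(px []float64, i float64) float64 {
	epv := 0.0
	tpx := 1.0 // k-year survival probability, starts at 1
	for k := 0; k <= len(px); k++ {
		epv += tpx * DiscountFactor(i, k)
		if k < len(px) {
			tpx *= px[k]
		}
	}
	return epv
}

func main() {
	px := []float64{0.99, 0.98, 0.97} // toy 3-year mortality table
	fmt.Printf("annuity-due EPV = %.4f\n", AnnuityDue(px, 0.05))
}
```

Everything else — net premiums, reserves — is layered on top of exactly these two ideas: discounting and survival probabilities.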

I also added an AGENTS.md file. AGENTS.md is a simple, open format for guiding coding agents, used by over 60k open-source projects. Think of it as a README for agents: a dedicated, predictable place to provide the context and instructions that help AI coding agents work on your project.

March 21 — The CLI Gets Better

Fixed a bug in the mortality table CSV parsing where the code wasn't correctly detecting when px (probability of survival) columns were missing. This sounds like a small detail, but parsing errors in actuarial work flow through everything downstream. One bad column means wrong reserves, wrong premiums, wrong everything.
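For a sense of what that detection involves, here's a hedged sketch — not v-star's actual parser — that scans the CSV header for a px column and, when it's missing, derives px = 1 − qx instead of silently reading garbage:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// MortalityRow holds one age's death (qx) and survival (px) probabilities.
type MortalityRow struct {
	Age int
	Qx  float64
	Px  float64
}

// parseTable reads a mortality CSV, detects whether a px column is
// present in the header, and derives px = 1 - qx when it is missing.
func parseTable(csvText string) ([]MortalityRow, error) {
	lines := strings.Split(strings.TrimSpace(csvText), "\n")
	header := strings.Split(lines[0], ",")
	pxCol := -1
	for i, h := range header {
		if strings.EqualFold(strings.TrimSpace(h), "px") {
			pxCol = i
		}
	}
	rows := make([]MortalityRow, 0, len(lines)-1)
	for _, line := range lines[1:] {
		fields := strings.Split(line, ",")
		age, err := strconv.Atoi(strings.TrimSpace(fields[0]))
		if err != nil {
			return nil, err
		}
		qx, err := strconv.ParseFloat(strings.TrimSpace(fields[1]), 64)
		if err != nil {
			return nil, err
		}
		px := 1 - qx // fallback when the column is absent
		if pxCol >= 0 {
			if px, err = strconv.ParseFloat(strings.TrimSpace(fields[pxCol]), 64); err != nil {
				return nil, err
			}
		}
		rows = append(rows, MortalityRow{Age: age, Qx: qx, Px: px})
	}
	return rows, nil
}

func main() {
	rows, _ := parseTable("age,qx\n30,0.001\n31,0.0012")
	fmt.Println(rows[0].Age, rows[0].Qx, rows[0].Px) // px derived from qx
}
```

The bug was exactly the kind this sketch guards against: if the header check is wrong, you read the wrong column as px and every downstream number is quietly off.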

I also added a --table flag to the CLI for mortality-based policy valuation. Now you can point the tool at a mortality table and get proper valuations. It works. It's decent. That's the goal.

March 22 — The Performance Story

Here's the big one. I — well, I and an AI — improved CSV parsing throughput by 6.8x. Let me say that again because I still don't fully believe it: I went from processing roughly 1.5 million rows per second to about 10.7 million rows per second. The parallel version hits 11 million. Don't tell anyone, but I got a server and it hit 38 million rows per second.

For those not familiar with the terminology: a "row" is one record in a CSV file. One policy. One customer. One calculation waiting to happen. When you're processing millions of policies, getting from 1.5M to 10.7M rows per second changes everything.

What changed? Four specific optimizations:

  • Streaming processor — Instead of loading an entire file into memory before processing it, the new system reads and processes one line at a time. This uses way less RAM and starts returning results faster.
  • Pre-allocated slices — Instead of starting with an empty container and repeatedly adding to it (which requires allocating more memory as it grows), the code now counts how many fields exist first, then creates exactly the right-sized container. No wasted memory. No resizing overhead.
  • String interning — When converting raw bytes to strings, the old code was creating entirely new copies every time. The new code uses Go's unsafe package to create "views" into existing memory. Same data, no copy. Massive savings when processing millions of rows.
  • sync.Pool usage — Added proper object pooling for field buffers. Instead of creating new temporary arrays for every single row, the system now reuses memory that's been used before. This reduces garbage collection pressure significantly.

I ran benchmarks comparing the Go implementation against Python with Pandas and Polars. The results were interesting — I beat Pandas, of course, but against Polars it's hit or miss: sometimes I win, sometimes I lose. It's too early to call. This isn't to say Python is bad — it's a reminder of what you can achieve when you optimize carefully and leverage Go's strengths.

Side note: I used AI to help identify these bottlenecks. More on that below.

March 23 — Release Prep

Updated README, ROADMAP, and LICENSE for v0.2.0. Version numbers feel ambitious for something with maybe a dozen users total, but there's something satisfying about formalizing progress. It forces you to look at what you've done and call it "a release." That's useful even if no one else is paying attention.

Using AI to Find Performance Issues

I'll be honest: I was skeptical about using AI to help with performance optimization. I'd used it for code generation, for explanations, for debugging — but optimization felt like something that required human intuition, profiler data, and countless hours of staring at flame graphs.

That changed this week. Here's what happened: I fed the CSV parser code to an AI and asked it to find places where memory was being allocated unnecessarily. And it found things. It found a lot of things.

The AI pattern-matched across my entire codebase and connected dots I would have found eventually but not quickly. It found that I was converting byte slices to strings in tight loops — creating garbage on every iteration. It noticed the sync.Pool was either missing or implemented suboptimally. It saw places where I was appending to slices without capacity pre-allocation.

Is this cheating? It's called embracing the times. Using AI to find performance issues is like using a profiler — it's a tool that surfaces information you'd find eventually, just faster. The expertise is in knowing what to fix and why it matters. The AI just helped me find it.

The best part: after implementing the fixes, the numbers spoke for themselves. 6.8x faster. That's not AI hallucination. That's measurable, reproducible, objective improvement. I'll take that result any day.

Why VBA Needs to Die (And Why It Won't)

Let me talk about something that matters to a lot of people but gets swept under the rug: Visual Basic for Applications (VBA) is a disaster in my opinion. And I mean that with the nuance it deserves — VBA isn't going anywhere, and here's why that's a problem.

What VBA Actually Is

VBA is the programming language embedded in Microsoft Excel. It lets you write macros, automate tasks, and build "applications" inside spreadsheets. It's been around since 1993. That's over 30 years of accumulated technical debt in every insurance company, pension fund, and actuarial firm on the planet.

And here's the uncomfortable truth: most actuarial work is still done in VBA and Excel. Not because it's good. Because it's what everyone knows — the hill the industry would die on. Because it's what the industry standardizes on. Because switching costs are enormous and the risk of making a mistake during migration is terrifying.

Why VBA is Technically Outdated

Let me break down the specific problems for those who haven't lived through this:

No type safety. You can pass a string to a function expecting a number and Excel will try its best. Sometimes it works. Sometimes it returns "type mismatch" at 2am before a deadline. Sometimes it silently does the wrong thing. There's no compiler catching your mistakes before they reach production. I learnt my lesson... stop writing Python without types. If you don't listen to my warning, you'll learn the hard way.

No version control. Excel files don't diff well. VBA code stored in Excel modules is essentially invisible to Git. You can't review changes, branch, or merge. Every "improvement" is a new file. Every disaster is a file you can't recover from.

Single-threaded by default. Want to process a million policies in VBA? Hope you like waiting. There's no built-in concurrency. Everything runs on one thread. One CPU core. One calculation at a time.
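By contrast, fanning a portfolio valuation out across every CPU core is a few lines in Go. A hedged sketch — the valuation itself is a stand-in, and none of these names are v-star's — of the pattern:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// valueAll splits the portfolio into one chunk per CPU core, values
// each chunk in its own goroutine, and sums the partial results.
// The per-policy "valuation" here is a trivial stand-in (premium * rate).
func valueAll(premiums []float64, rate float64) float64 {
	workers := runtime.NumCPU()
	partial := make([]float64, workers)
	chunk := (len(premiums) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > len(premiums) {
			hi = len(premiums)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, p := range premiums[lo:hi] {
				partial[w] += p * rate // each worker writes only its own slot
			}
		}(w, lo, hi)
	}
	wg.Wait()

	total := 0.0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	premiums := make([]float64, 1_000_000)
	for i := range premiums {
		premiums[i] = 100
	}
	fmt.Printf("total: %.0f\n", valueAll(premiums, 0.05))
}
```

A million policies, every core busy, no shared-state races because each worker owns one slot of the results slice. VBA gives you none of this out of the box.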

Fragile. Move a cell? Formula breaks. Rename a sheet? Formula breaks. Change a column order? Formula breaks. The dependencies between different parts of a spreadsheet are invisible and impossible to track. You can break something by accident and never know until a client notices.

"It works on my machine." The classic excuse. Different Excel versions, different regional settings (comma vs. period for decimals), different security settings — code that runs perfectly on your machine fails on your client's. And there's no way to test it until production. This is the reason I wrote v-star in Go with no dependencies and not Python.

Why It Won't Change

Here's why VBA persists despite all of this:

  • Institutional knowledge. The actuarial models built over decades are in VBA. Nobody fully understands them anymore, but they work. Replacing them is a massive risk with unclear upside.
  • Regulatory acceptance. Actuaries can defend Excel calculations to regulators. Can they defend a custom Go tool? Maybe. But it's an uphill battle.
  • Training pipelines. Universities still teach VBA to actuaries. Students graduate knowing Excel and VBA, not Go or Python. The cycle reinforces itself. But apparently this is changing.
  • No catastrophic failure. When VBA breaks, it usually breaks quietly. Wrong numbers get caught in review. Nobody dies. So there's no urgent pressure to fix it.

This is why I'm building v-star. Not to replace every spreadsheet in existence — that's unrealistic and probably not even desirable for all cases — but to show that another way exists. Code that's version-controlled, typed, concurrent, and fast. It's possible. You just have to choose to do it.

Who This Is For — A Self-Reflection

Here's where I get honest with myself. Who is v-star actually for?

Maybe actuaries? Those of you who are tired of waiting 20 minutes for a valuation to run. Who want something faster and more auditable that doesn't require a degree in VBA archaeology to understand. The market for this is... small. Actuarial science is a niche field. High-performance actuarial software in Go is a niche within a niche.

Maybe students? People learning actuarial mathematics and concepts who want to see how the formulas translate to actual code. Not Excel magic — real implementations they can read, modify, and learn from. This is probably the most likely audience, honestly.

Maybe no one? It's possible I'm building this for myself. A personal project that scratches an itch. A way to learn Go deeply by building something real. A portfolio piece that proves I can do this.

What's The Goal?

I genuinely don't know anymore. When I started, I had a clear story: "build a fast actuarial engine to replace slow Excel models." But as I've worked on it, the goal has shifted.

Now I think the goal is simpler: make something that exists. Something that works. Something that's fast. Something that proves you can build actuarial software without VBA, without Excel, without the traditional ecosystem. Something that demonstrates it's possible, even if nobody ever uses it... Something, simply, to shit on VBA.

There's a strange satisfaction in that. In building something just to prove it can be built. In learning things you'd never learn from tutorials because you're solving problems that actually matter. In making something that's yours.

And maybe that's the answer. The goal isn't "solve a problem the world knows it has." The goal is "learn what you'd never learn otherwise." To build something interesting to you, and trust that interesting things are worth building, even if the audience is small.

A Note to Business Owners

If you're running a business that relies on actuarial calculations — pricing, reserving, risk modeling — and you're doing it in Excel, I understand. It works. Your team knows it. The regulators accept it. Changing is scary.

But consider this: that Excel model was probably built by someone who left five years ago. It's held together by institutional knowledge and hope. It takes 20 minutes to run. It can only process one scenario at a time.

Tools like v-star aren't about replacing everything overnight. They're about having options. About demonstrating what's possible. About starting a conversation about what's worth investing in. The spreadsheet will still be there. But it could be faster, more reliable, and more maintainable if you chose to invest in it.

A Note to Students

If you're studying actuarial science and learning VBA because it's "the industry standard" — learn it, use it, pass your exams. But know that it's not the only way. It's not even the best way. It's just the established way.

Build things. Try different tools. Learn programming languages that force you to think differently about problems. The best actuaries I know are the ones who can look at a problem from multiple angles — the mathematical angle, the computational angle, the business angle. Don't limit yourself to one. Like really... shout out to Mr. Suleman Patel.

And if you're curious about how actuarial math translates to code, check out v-star. It's not perfect. It's not finished. But it's real, and it's working, and it's fast. That's more than most tutorials give you.

What's Next

More actuarial functions. Better annuity variants. Parallel Monte Carlo with variance reduction techniques (so we can get better statistical results with fewer simulations). A proper bench command. Statistical analysis — percentiles, confidence intervals, the things actuaries actually need to report. Honestly I don't know... I'll see how things go and if I'll get any feedback.

The roadmap is long and the user base is small, but that's never stopped anyone from building something they care about. We'll see where this goes.

Repo is at github.com/lubasinkal/v-star. MIT licensed. Use it if it helps. In the interest of things.