PREDICTIVE ML FOR SEED-STAGE SAAS

I ship ML services that survive their first week in production.

A FastAPI service with monitoring and a retraining script your engineers can run themselves. One fixed price quoted before any code gets written. Code in your repo by week five.

[Hero graphic: mock churn-prediction dashboard. Churn probability for top accounts: Acme Corp 82%, NovaTech 67%, Riverline 44%, Stackwise 29%. Model performance: AUC-ROC 94%, precision 89%, recall 91%.]
LIFO AI: built demand forecasting and shelf-life prediction.
Project: public fraud-detection project, 84% capture at 0.28% alert volume.
Stack: Python, FastAPI, LightGBM, scikit-learn, PostgreSQL.
Demo: code on GitHub, live demo on Streamlit.

PROBLEMS I HEAR EVERY WEEK

Six prediction problems I hear from operators at seed-stage SaaS companies.

"
We only find out a customer is going to churn when they've already submitted the cancellation.

By the time the email lands, the decision was made weeks ago. A churn model flags at-risk accounts 30 to 90 days early, when a CSM call can still change the outcome. Built on your account-level usage history, billing events, and support tickets.
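Roughly what that looks like in the stack I use (LightGBM + scikit-learn). The file and column names below are placeholders for whatever your usage, billing, and ticket data actually contain:

```python
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical account-level table: one row per account, columns built from
# usage history, billing events, and support tickets.
df = pd.read_csv("account_features.csv")
X = df.drop(columns=["account_id", "churned_90d"])
y = df["churned_90d"]                     # 1 = cancelled within 90 days

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

# Churn probability per account: the number a CSM can act on 30 to 90 days early.
churn_risk = model.predict_proba(X_test)[:, 1]
print("AUC-ROC:", round(roc_auc_score(y_test, churn_risk), 3))
```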

"
We over-order every week and throw out product, or under-order and run out at the worst time.

Intuition-based ordering is expensive in two directions: spoilage on the high side, stockouts on the low side. A demand forecasting model trained on your historical sales and seasonal patterns cuts both at once. I built this for perishable inventory at LIFO AI across multiple retail locations.

"
Our sales team spends equal time on every lead, but 80% of revenue comes from 20% of them.

A lead scoring model ranks your inbound pipeline by conversion probability based on your CRM history. Your best reps focus on the deals most likely to close, not the ones that replied fastest. The scoring lives in your CRM as a field your team can sort by.
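A sketch of the scoring step, assuming a classifier already trained on your CRM's won/lost history; `model`, the file name, and the columns are placeholders:

```python
import pandas as pd

# `model` is an already-trained classifier (same pattern as the churn sketch
# above, trained on CRM outcomes); file and column names are placeholders.
leads = pd.read_parquet("open_leads.parquet")
features = leads.drop(columns=["lead_id"])

leads["conversion_score"] = model.predict_proba(features)[:, 1]
leads["score_percentile"] = leads["conversion_score"].rank(pct=True).round(2)

# These two columns are what get written back to the CRM as sortable fields.
ranked = leads.sort_values("conversion_score", ascending=False)
print(ranked[["lead_id", "conversion_score", "score_percentile"]].head(10))
```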

"
We've had ML projects before. They worked in a notebook and then sat unused for six months.

A notebook is a demo. Production needs a serving layer, monitoring, and a retraining script your team can run without me. Every system I ship is built for production from week one, with a runbook your engineers can keep alive after I'm gone.

"
I don't trust the model's output because I don't understand why it makes the predictions it does.

Black-box models kill adoption. SHAP explanations come with every model I ship: per-prediction feature attribution your team can read on the same dashboard they already use. They know why each call was made and can push back when something doesn't look right.

"
We catch fraud and anomalies after they've already caused damage, never in time to prevent them.

Real-time anomaly detection on transaction or sensor streams flags unusual patterns as they happen. That's the difference between an early flag and a cleanup project that runs for a quarter. The public fraud project on my GitHub catches 84% of fraud at 0.28% alert volume on a 577:1 imbalanced dataset.
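Those numbers come from picking the score cutoff against a fixed alert budget rather than a default 0.5 threshold. A minimal sketch on synthetic data; the 0.28% alert rate mirrors the figure above, not a universal setting:

```python
import numpy as np

# Synthetic stand-in for a labelled validation set at roughly 577:1 imbalance.
rng = np.random.default_rng(0)
val_labels = (rng.random(200_000) < 1 / 578).astype(int)
val_scores = np.clip(rng.beta(2, 8, 200_000) + 0.4 * val_labels, 0, 1)

def threshold_for_alert_rate(scores: np.ndarray, alert_rate: float) -> float:
    """Score cutoff that flags roughly `alert_rate` of transactions."""
    return float(np.quantile(scores, 1.0 - alert_rate))

threshold = threshold_for_alert_rate(val_scores, alert_rate=0.0028)
flagged = val_scores >= threshold
capture = (val_labels[flagged] == 1).sum() / max((val_labels == 1).sum(), 1)
print(f"alert volume: {flagged.mean():.2%}, fraud captured: {capture:.2%}")
```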

How I solve it

Five moves I run on every engagement.

Each move maps back to one of the quotes above. None of them is optional.

01
Frame the prediction problem on paper.

The "we built it and never used it" failure starts with a fuzzy target. Week 1 produces a one-page scoping doc that names the business decision, the cost-of-error numbers, and the success threshold. Both of us sign off before training begins. The notebook never sits unused if both sides agreed up front on what decision the output was going to inform.

02
Build features from data your business already has.

Churn predictions don't come from a vibe. They come from feature engineering on your usage logs, billing events, and support tickets. Lead scoring runs on your CRM history. Demand forecasts run on your POS data. I built that pipeline at LIFO AI on Square POS data and would build it the same way on your stack.
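A sketch of that rollup, with table and column names standing in for whatever your event logs actually contain:

```python
import pandas as pd

# Roll raw usage events up into account-level features.
events = pd.read_parquet("usage_events.parquet")   # account_id, event_type, ts
events["ts"] = pd.to_datetime(events["ts"])
cutoff = events["ts"].max()

last_30d = events[events["ts"] >= cutoff - pd.Timedelta(days=30)]
features = (
    last_30d.groupby("account_id")
    .agg(
        events_30d=("event_type", "size"),
        active_days_30d=("ts", lambda s: s.dt.date.nunique()),
        days_since_last_event=("ts", lambda s: (cutoff - s.max()).days),
    )
    .reset_index()
)
print(features.head())
```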

03
Tune the threshold to your actual cost-of-error.

The "80/20 of revenue" lead problem and the "fraud caught too late" problem are both threshold problems. F1 score doesn't know that a missed fraud costs $10K and a false positive costs the ops team an hour. I run threshold tuning against your real numbers and ship the tuning script so you can re-run it as the numbers shift.

04
SHAP on every prediction.

Black-box models kill adoption. Every model I ship surfaces per-prediction feature attribution on the same dashboard your team already uses. Your CSM or fraud analyst reads the explanation in plain feature names before acting on the score.
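A sketch of that per-prediction attribution with the shap library, on a synthetic stand-in model; your feature names and dashboard wiring would differ:

```python
import numpy as np
import pandas as pd
import shap
from lightgbm import LGBMClassifier

# Tiny synthetic stand-in for an account-level feature table; in practice X is
# the same table the model was trained on, with your real feature names.
rng = np.random.default_rng(2)
X = pd.DataFrame(
    rng.normal(size=(1_000, 4)),
    columns=["logins_30d", "seats_used", "tickets_open", "days_since_login"],
)
y = (X["days_since_login"] - X["logins_30d"] + rng.normal(size=1_000) > 0).astype(int)
model = LGBMClassifier(n_estimators=100).fit(X, y)

explanation = shap.TreeExplainer(model)(X.iloc[:5])   # one explanation per prediction
values = explanation.values
if values.ndim == 3:       # some model/shap combinations return one slice per class
    values = values[:, :, 1]

# Top drivers for the first scored account, in plain feature names.
top = sorted(zip(X.columns, values[0]), key=lambda p: abs(p[1]), reverse=True)
print(top[:3])
```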

05
FastAPI service plus monitoring plus a retraining script.

A notebook is a demo. A service your engineers can run without me is a product. I wrap the model in FastAPI, push to your repo, and ship a monitoring dashboard and a retraining script your team runs on a schedule. After week five, your team owns the system.
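The shape of that serving layer, with an illustrative endpoint, model path, and feature fields (pydantic v2 assumed):

```python
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-scoring")
model = joblib.load("models/churn_model.joblib")   # path is a placeholder

class AccountFeatures(BaseModel):
    logins_30d: int
    seats_used: int
    tickets_open: int
    days_since_login: int

@app.post("/score")
def score(features: AccountFeatures) -> dict:
    # One row in, one churn probability out; field names are illustrative.
    X = pd.DataFrame([features.model_dump()])
    churn_probability = float(model.predict_proba(X)[:, 1][0])
    return {"churn_probability": churn_probability}
```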

ML lifecycle: production setup (always-on)
Training data pipeline: feature store built from your historical data
Model trained & evaluated: cross-validation, SHAP, business-impact framing
Deployed as API or batch job: predictions served to your existing systems
Performance monitored continuously: data drift detection + accuracy tracking
Retrained on schedule: model stays accurate as your data evolves

The operational lifecycle, after handoff. Training data pipeline feeds a feature store. Model trained, evaluated, and explained with SHAP. Deployed as a FastAPI endpoint or batch job. Performance and drift monitored continuously. Retrained on a schedule your team owns.
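One way the drift check behind that monitoring can look, comparing each live feature distribution to its training-time snapshot; the alert level is illustrative:

```python
from scipy.stats import ks_2samp

def drift_report(train_df, live_df, alert_p=0.01):
    """Two-sample KS test per numeric feature; a low p-value flags a shift."""
    report = {}
    for col in train_df.select_dtypes(include="number").columns:
        stat, p_value = ks_2samp(train_df[col].dropna(), live_df[col].dropna())
        report[col] = {
            "ks_stat": round(float(stat), 3),
            "p_value": float(p_value),
            "drifted": p_value < alert_p,
        }
    return report

# Usage: drift_report(training_snapshot, last_week_of_live_features)
```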

Case Study

LIFO AI: 22% food waste reduction with a 40-line forecasting model

Restaurant chains operating at scale have a brutal inventory problem: over-order and you throw product away; under-order and you disappoint customers and lose revenue. LIFO AI needed a system that could predict ingredient demand by location, day, and menu item.

The first approach — an LSTM neural network — was overengineered for the problem. The winning model was exponential smoothing with exogenous variables: simpler, faster to train, and more accurate on the available data. Deployed as a FastAPI service that updates nightly purchasing recommendations. The result: 22% reduction in food waste across pilot locations.
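A sketch of the seasonal-smoothing core in statsmodels, on synthetic daily data; the exogenous signals (local events, promotions) the production model used are left out of this minimal version:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Two years of synthetic daily demand with a weekly cycle, standing in for
# one location's sales history for one ingredient.
rng = np.random.default_rng(3)
days = pd.date_range("2023-01-01", periods=730, freq="D")
weekly = 1 + 0.3 * np.sin(2 * np.pi * np.arange(730) / 7)
demand = pd.Series(100 * weekly + rng.normal(0, 5, 730), index=days)

model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=7
).fit()

# Next 7 days of ingredient demand: one number per day, fed to purchasing.
print(model.forecast(7).round(1))
```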

22% food waste reduction
40 lines of core model code
Nightly prediction refresh cadence

Week 1: Problem framing. Defined the exact prediction target, daily ingredient quantity per location, and set a business threshold for acceptable error.
Week 2: Data audit & feature engineering. Two years of sales history, menu composition, day-of-week, seasonality, and local event calendars.
Week 3: LSTM failed evaluation. The neural network overfit on sparse location data; switched to exponential smoothing with external regressors.
Week 4: Simple model outperformed. MAPE 11% vs 19% for the neural net. Lesson: boring tools win at startup scale.
Week 5: Production deployment. FastAPI service live, nightly batch predictions written to the purchasing system, drift monitoring active.

What's included

Production ML — not just a notebook

Exploratory data analysis

Full audit of your data quality, distributions, correlations, and potential leakage before a single model is fit. It's the work most engagements skip, and it explains most of the failures.

Feature engineering

Domain-informed features that actually improve predictive power — not just one-hot encoding everything and hoping. This is where most of the performance gains come from.

Model evaluation report

Precision, recall, AUC, calibration plots, and a business-impact framing. You'll know exactly what the model gets right, what it gets wrong, and what that means in dollar terms.

SHAP explainability

Global and per-prediction feature importance so every stakeholder understands why the model made each call — and can flag when something doesn't make sense.

Production deployment

REST API or batch scoring pipeline — whichever fits your stack. With CI/CD, containerisation, and integration into your existing systems. Not a notebook emailed over.

Drift monitoring & retraining

Performance tracking over time, data drift detection, and scheduled retraining pipelines. The model stays accurate as your business evolves — not just on launch day.
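What the scheduled retraining script boils down to; the paths and the promotion tolerance are illustrative:

```python
import joblib
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Refit on the latest feature snapshot; refuse to promote a worse model.
df = pd.read_parquet("features/latest.parquet")     # placeholder snapshot
X, y = df.drop(columns=["account_id", "label"]), df["label"]
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

candidate = LGBMClassifier(n_estimators=500, learning_rate=0.05).fit(X_tr, y_tr)
new_auc = roc_auc_score(y_val, candidate.predict_proba(X_val)[:, 1])

current = joblib.load("models/current.joblib")
old_auc = roc_auc_score(y_val, current.predict_proba(X_val)[:, 1])

if new_auc >= old_auc - 0.02:                       # tolerance is illustrative
    joblib.dump(candidate, "models/current.joblib")  # promote the new model
print(f"candidate AUC {new_auc:.3f} vs current {old_auc:.3f}")
```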

Ready to predict?
What would you do differently if you knew what was coming?

Book a free discovery call — I'll assess your data and tell you exactly what's predictable, what the model would look like, and what the business impact could be.
