This is where spreadsheets come to be judged. We take your tabular data, agree on what you are trying to predict, and build small models you can actually interrogate.
What this teaches
How data choices, training settings, and evaluation metrics change what a model can do.
Real world use
A small workbench for quick baselines, sanity checks, and explaining metrics to humans.
Compute and limits
The free tier supports small datasets and modest epoch counts. Signed-in users get a higher tier. Limits are shown before you run.
Upload a small dataset, inspect its shape, and make deliberate choices before training.
Current file limit: 350 KB (hard cap 4.0 MB).
Dataset summary
Sample customer churn
30 rows • 4 columns
Size: 12 KB
Preview (first 5 rows)
| feature_a | feature_b | category | target |
|---|---|---|---|
| 1.96 | 2.83 | alpha | yes |
| 9.17 | 4.86 | beta | no |
| 10.31 | 4.67 | gamma | yes |
| 2.36 | 1.06 | alpha | no |
| 5.67 | 2.52 | beta | yes |
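If you want to reproduce this inspection step offline, a minimal pandas sketch looks like the following. The filename churn_sample.csv is an assumption; the shape and columns match the preview above.

```python
import pandas as pd

# Load the sample dataset (filename is hypothetical).
df = pd.read_csv("churn_sample.csv")

# Inspect shape and dtypes before committing to any training choices.
print(df.shape)   # expected: (30, 4)
print(df.dtypes)

# Preview the first 5 rows, matching the table above.
print(df.head())
```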
Pick a target, choose a simple model, tune parameters, and train with predictable limits.
Inferred task
unknown
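When the task comes back as unknown, it usually means the target column was ambiguous. A common heuristic, sketched below purely as an illustration (this is not the workbench's actual inference rule), is to treat non-numeric or low-cardinality targets as classification and numeric targets as regression.

```python
import pandas as pd

def infer_task(target: pd.Series) -> str:
    """Guess classification vs regression from the target column.

    Generic heuristic for illustration, not the workbench's real logic.
    """
    if target.dtype == object or target.nunique() <= 10:
        return "classification"
    return "regression"

# For the sample above, target takes values {"yes", "no"}:
# infer_task(df["target"]) -> "classification"
```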
Training split
Train fraction: 0.80
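A rough offline equivalent of the 0.80 train fraction, assuming scikit-learn and the hypothetical file from the sketch above:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("churn_sample.csv")  # hypothetical filename, as above

# Keep only the numeric features for this sketch; the categorical column
# would need encoding (e.g. one-hot) before a linear model could use it.
X = df[["feature_a", "feature_b"]]
y = df["target"]

# Hold out 20% of the rows; stratify so both splits see both classes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.80, random_state=0, stratify=y
)
```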
Normalisation
Normalisation can stabilise training when columns have very different ranges.
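A minimal sketch of what normalisation does, continuing from the split above and assuming scikit-learn's StandardScaler:

```python
from sklearn.preprocessing import StandardScaler

# Fit on the training split only, so test statistics never leak into
# training; then apply the same transform to both splits.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```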
Parameters
More epochs and a higher learning rate can speed up learning, but both can also destabilise it.
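To see that trade-off concretely, here is a hedged sketch using scikit-learn's SGDClassifier, continuing the same example; max_iter plays the role of the epoch count and eta0 sets a constant learning rate. The specific values are illustrative, not recommendations.

```python
from sklearn.linear_model import SGDClassifier

# max_iter ~ passes (epochs) over the training data; eta0 is the learning rate.
model = SGDClassifier(
    loss="log_loss",          # logistic regression objective
    learning_rate="constant",
    eta0=0.01,                # too high can diverge; too low can stall
    max_iter=100,             # more passes help, until they overfit
    random_state=0,
)
model.fit(X_train_scaled, y_train)
```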
Evaluate on a holdout set. Watch for overfitting: great train metrics and weak test metrics.
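A quick way to spot that pattern, continuing the sketch:

```python
from sklearn.metrics import accuracy_score

# A large gap between these two numbers is the classic overfitting signal.
train_acc = accuracy_score(y_train, model.predict(X_train_scaled))
test_acc = accuracy_score(y_test, model.predict(X_test_scaled))
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```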
Export
Export a small JSON summary of your run for notes, review, or a model card draft. This is not a deployment artifact.
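The exported schema is the workbench's own; a hand-rolled equivalent for your notes might look like this, continuing the sketch above (all field names here are assumptions, not the actual export format):

```python
import json

# Hypothetical run summary; field names are illustrative only.
summary = {
    "dataset": "Sample customer churn",
    "rows": 30,
    "columns": 4,
    "task": "classification",
    "train_fraction": 0.80,
    "epochs": 100,
    "learning_rate": 0.01,
    "train_accuracy": float(round(train_acc, 3)),
    "test_accuracy": float(round(test_acc, 3)),
}

with open("run_summary.json", "w") as f:
    json.dump(summary, f, indent=2)
```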
What to do next in real work