Model Forge

This is where spreadsheets come to be judged. We take your tabular data, agree what you are trying to predict, and build small models you can actually interrogate.

Security reminder: This studio is for education and experimentation. Do not upload production data or secrets. Outputs are demos; review them before using them anywhere safety-critical or financial.

Overview

What this teaches

How data choices, training settings, and evaluation metrics change what a model can do.

Real world use

A small workbench for quick baselines, sanity checks, and explaining metrics to humans.

Compute and limits

Free tier supports small datasets and modest epochs. Signed-in users can use a higher tier. Limits are shown before you run.

Data preparation

Upload a small dataset, inspect its shape, and make deliberate choices before training.

Current file limit: 350 KB (hard cap 4.0 MB).
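If you want to enforce these limits before uploading, a minimal sketch follows; the function name and exact behaviour are assumptions for illustration, not the studio's code.

```python
# Hypothetical upload guard mirroring the limits quoted above.
import os

SOFT_LIMIT = 350 * 1024        # 350 KB free-tier limit
HARD_CAP = 4 * 1024 * 1024     # 4.0 MB hard cap

def check_upload(path: str) -> None:
    size = os.path.getsize(path)
    if size > HARD_CAP:
        raise ValueError(f"{path} is {size} bytes, over the 4.0 MB hard cap")
    if size > SOFT_LIMIT:
        print(f"warning: {path} exceeds the 350 KB free-tier limit")
```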

Dataset summary

Sample customer churn

30 rows • 4 columns

Size: 12 KB

Preview (first 5 rows)

feature_a   feature_b   category   target
1.96        2.83        alpha      yes
9.17        4.86        beta       no
10.31       4.67        gamma      yes
2.36        1.06        alpha      no
5.67        2.52        beta       yes
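Reproducing this inspection step yourself takes a few lines of pandas; a minimal sketch, where the file name is a placeholder for your upload and the expected shape matches the summary above:

```python
import pandas as pd

# "churn_sample.csv" is a placeholder for your uploaded file.
df = pd.read_csv("churn_sample.csv")

print(df.shape)    # (30, 4) for the sample dataset above
print(df.dtypes)   # two numeric features, one categorical, one target
print(df.head())   # first 5 rows, matching the preview
```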

Model training

Pick a target, choose a simple model, tune parameters, and train with predictable limits.

Inferred task

unknown
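One plausible inference rule, sketched below as an illustration rather than the studio's actual heuristic: a numeric target with many distinct values suggests regression, a low-cardinality target suggests classification, and anything else stays unknown.

```python
import pandas as pd

def infer_task(target: pd.Series) -> str:
    """Guess the task type from the target column (illustrative only)."""
    if pd.api.types.is_numeric_dtype(target) and target.nunique() > 10:
        return "regression"
    if target.nunique() <= 20:
        return "classification"
    return "unknown"

# For the sample data, a yes/no target column would infer "classification".
```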

Training split

Train fraction: 0.80
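The same split with scikit-learn, as a sketch that reuses the DataFrame from the loading example above; random_state is an illustrative choice, not a studio default:

```python
from sklearn.model_selection import train_test_split

# 0.80 of rows for training, the rest held out for evaluation.
train_df, test_df = train_test_split(df, train_size=0.80, random_state=0)
print(len(train_df), len(test_df))  # 24 and 6 rows for the 30-row sample
```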

Normalisation

Min-max scale numeric features
On

Normalisation can stabilise training when columns have very different ranges.
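A minimal sketch of min-max scaling, fitting the range on the training split only so the holdout never leaks into the statistics; column names match the preview above:

```python
numeric = ["feature_a", "feature_b"]
train_df, test_df = train_df.copy(), test_df.copy()

# Fit the scale on the training split only, then apply it to both splits.
mins, maxs = train_df[numeric].min(), train_df[numeric].max()
train_df[numeric] = (train_df[numeric] - mins) / (maxs - mins)
test_df[numeric] = (test_df[numeric] - mins) / (maxs - mins)
```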

Parameters

More epochs and higher learning rates can improve learning, but they can also destabilise it.
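A toy gradient-descent loop makes the trade-off concrete; this minimises f(w) = (w - 3)^2 and is not anything the studio actually trains:

```python
def train(lr: float, epochs: int) -> float:
    """Gradient descent on f(w) = (w - 3)^2, starting from w = 0."""
    w = 0.0
    for _ in range(epochs):
        grad = 2 * (w - 3)  # derivative of (w - 3)^2
        w -= lr * grad
    return w

print(train(lr=0.1, epochs=50))  # converges towards 3.0
print(train(lr=1.1, epochs=50))  # overshoots every step and diverges
```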

Evaluation

Evaluate on a holdout set. Watch for overfitting: great train metrics and weak test metrics.
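The check itself is one comparison. A sketch with scikit-learn, reusing the splits from above; logistic regression is an illustrative model choice, and a large gap between the two scores is the warning sign:

```python
from sklearn.linear_model import LogisticRegression

features = ["feature_a", "feature_b"]
model = LogisticRegression().fit(train_df[features], train_df["target"])

# A big gap between these two numbers suggests overfitting.
print("train accuracy:", model.score(train_df[features], train_df["target"]))
print("test accuracy: ", model.score(test_df[features], test_df["target"]))
```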

Export and next steps

Export

Export a small JSON summary of your run for notes, review, or a model card draft. This is not a deployment artifact.
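The summary is plain JSON you can write anywhere; the field names and values below are illustrative placeholders, not the studio's actual schema:

```python
import json

summary = {
    "dataset": "sample customer churn",
    "task": "classification",
    "train_fraction": 0.80,
    "normalisation": "min-max",
    "metrics": {"train_accuracy": 0.92, "test_accuracy": 0.83},  # placeholders
}

with open("run_summary.json", "w") as f:
    json.dump(summary, f, indent=2)
```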

What to do next in real work

  • Define success with stakeholders before you pick a metric.
  • Confirm data lineage and label quality.
  • Run a bias review with domain experts, not only with dashboards.
  • Document limitations and decide when not to use the model.