After Training
You are now close to deployment: this is where training output stops being just numbers and becomes a model that actually works in your real workflow.
1) Find Your Training Output (weights + config + names)
When training ends, the log/status messages show where outputs are written.
In AugeLab Studio, training outputs are typically created in a folder named:
XXX_config
right next to your dataset folder (XXX is your dataset folder name).
Typical structure:
XXX_config/
    XXX.names
    XXX.cfg
    backup/
        XXX_last.weights (if available)
        XXX_best.weights (if available)

At minimum, you should keep these together:

Weights file: .weights (sometimes there is also a best vs last style file)
Config file: .cfg
Class names file: .names
Do not rename/reorder classes in your .names file after training unless you also remap label IDs. Class order must match label IDs.
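To see why reordering is dangerous, here is a minimal pure-Python sketch of how label IDs map to a `.names` file. The class names and file contents are illustrative, not from your project:

```python
# Sketch: how class IDs map to names in a Darknet-style .names file.
# The names below are made up for illustration.
names_text = "scratch\ndent\ncrack\n"  # contents of a hypothetical XXX.names

class_names = [line.strip() for line in names_text.splitlines() if line.strip()]

# A label ID in your annotations is simply an index into this list:
label_id = 1
print(class_names[label_id])  # "dent"

# Reordering the file silently changes what every existing label ID means:
reordered = ["crack", "scratch", "dent"]
print(reordered[label_id])  # now "scratch" - same ID, different class
```

The annotations never store class names, only indices, which is why the file order and the label IDs must stay in sync.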
Which weights should I use: best vs last?
If your training reports mAP during training, many YOLO/Darknet workflows keep a “best so far” checkpoint.
Use best when mAP improved and then later dropped (overfitting).
Use last if training ended while mAP was still improving and stable.
If you don’t have a “best” file, pick the final weights first, then validate.
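The best-vs-last rule above can be sketched as a simple check on the mAP history. The mAP values here are invented for illustration; in practice they come from your training log:

```python
# Sketch: choosing between "best" and "last" weights from a mAP history.
# The mAP values below are made up for illustration.
map_history = [0.42, 0.55, 0.61, 0.58, 0.53]  # mAP recorded during training

best_iter = max(range(len(map_history)), key=lambda i: map_history[i])
last_iter = len(map_history) - 1

if best_iter < last_iter:
    choice = "best"   # mAP peaked earlier, then dropped -> likely overfitting
else:
    choice = "last"   # mAP was still improving at the end
print(choice)  # "best" for this history: the peak was at iteration 2, not the end
```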
2) Validate the Model Before You Deploy
Before you wire the model into production logic, do a fast validation pass.
Recommended validation sets:
Validation set: 30–100 images that represent real life (good + bad lighting, blur, clutter, edge cases)
Real footage: a short video or image sequence captured from the actual camera (especially if you will deploy on a fixed camera)
What you are looking for:
The model detects the right object consistently
The boxes are “good enough” for your logic (not necessarily perfect)
False positives are acceptable (or can be filtered)
Rare-but-important cases are detected
High mAP can still fail in production if your validation split was too small or too clean. The validation set / real footage check is what prevents that.
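One way to check "boxes are good enough" objectively is a quick IoU (intersection-over-union) comparison against a few hand-checked reference boxes. This is a minimal sketch with invented box coordinates, not a tool built into the product:

```python
# Sketch: a minimal IoU check for judging whether predicted boxes are
# "good enough" against hand-checked reference boxes.
# Boxes are (x, y, w, h); the numbers are illustrative.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

pred = (10, 10, 100, 100)
truth = (12, 8, 100, 100)
print(iou(pred, truth) > 0.5)  # True: overlap is high enough for most logic
```

An IoU above roughly 0.5 is usually "good enough" for counting and presence logic, even when the box is not pixel-perfect.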
3) Load the Model Into a Studio (Inference)
In AugeLab Studio, the usual next step is to build (or update) a .pmod scenario that runs inference.
A) Use “Object Detection - Custom” (recommended)
Use this node when you want to run your own YOLO/Darknet-trained model inside a workflow.
Workflow:
Add Object Detection - Custom to your graph (AI Applications category).
In the block UI:
Click Open Weight File and select your .weights
Click Open Config File and select your .cfg
Click Open Class File and select your .names
Select which classes you want to detect (checkbox list).
Set Confidence Threshold (start around 0.5–0.8 and tune).
Connect an image source to the block input and preview the output image.
Outputs you can use in your logic:
Output image with drawn detections
Object Count
Object Locations / Sizes
Object Classes
Rectangles
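To make these outputs concrete, here is a pure-Python sketch of how count, classes, locations, sizes, and rectangles can be derived from raw detections. The detections, class list, and threshold are synthetic; in a real scenario the block produces them for you at inference time:

```python
# Sketch: deriving count / classes / locations / sizes / rectangles
# from raw detections. All values below are synthetic examples.
detections = [
    {"class_id": 0, "conf": 0.92, "box": (40, 30, 120, 80)},
    {"class_id": 1, "conf": 0.35, "box": (200, 50, 60, 60)},   # low confidence
    {"class_id": 0, "conf": 0.71, "box": (300, 90, 90, 70)},
]
class_names = ["bottle", "cap"]  # illustrative .names content
conf_threshold = 0.5

kept = [d for d in detections if d["conf"] >= conf_threshold]

object_count = len(kept)
object_classes = [class_names[d["class_id"]] for d in kept]
rectangles = [d["box"] for d in kept]
# Locations as box centers, sizes as (w, h):
object_locations = [(x + w // 2, y + h // 2) for (x, y, w, h) in rectangles]
object_sizes = [(w, h) for (_, _, w, h) in rectangles]

print(object_count, object_classes)  # 2 ['bottle', 'bottle']
```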
If “Object Detection - Custom” is not available, your build may not have CUDA/OpenCV DNN support enabled. Try the CPU block below, or install the required modules from the Module Downloader (see ai-training.md).
B) Use “Object Detection - Custom (CPU)” (fallback)
Use this block when you want the same workflow but without GPU acceleration.
It uses CPU inference, so it will be slower.
The setup is the same: weights + cfg + names.
4) Tune Thresholds (what actually matters)
Most “deployment quality” improvements come from threshold tuning, not from running training longer.
Start with these practical steps:
Increase confidence threshold if you see too many false positives.
Decrease confidence threshold if you miss objects.
Evaluate on your validation set and at least one real camera clip.
Do not tune on a single image. Always tune on a small set. Otherwise you will “overfit your threshold” to one scene.
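The "tune on a set, not one image" advice can be sketched as a simple threshold sweep. The per-image confidences here are synthetic; in practice they come from running the model on your validation set:

```python
# Sketch: sweeping the confidence threshold over a small set of images
# instead of tuning on a single frame. Confidences are synthetic.
per_image_confidences = [
    [0.91, 0.45, 0.30],   # image 1: one strong detection, two weak ones
    [0.80, 0.62],         # image 2
    [0.20],               # image 3: only a weak (likely false) detection
]

kept_per_threshold = {
    t: sum(sum(c >= t for c in confs) for confs in per_image_confidences)
    for t in (0.3, 0.5, 0.7)
}
for t, n in kept_per_threshold.items():
    print(f"threshold={t}: {n} detections kept across the set")
```

Picking the threshold where weak, likely-false detections drop out across the whole set is far more robust than making one scene look clean.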
5) Package for Sharing / Reproducibility
If you want the model to be usable later (or by someone else), package it intentionally.
Recommended folder layout (keep the weights, config, names, and a README together), for example:

model_package/
    XXX.weights
    XXX.cfg
    XXX.names
    README.md
What to write in the README:
What dataset the model was trained on (version/date)
What classes mean (if ambiguous)
Recommended confidence threshold range
Known failure cases (glare, tiny objects, extreme occlusion)
If your .pmod scenario references these resources, consider keeping them as relative project resources so the scenario remains portable. See also: headless-studio (missing-resource load behavior).
6) If It Fails in Production (what to do next)
When a model fails after deployment, the fix is usually one of these (in this order):
Collect the failures (save frames that show the miss/false-positive)
Label them correctly
Retrain or fine-tune with the new data
This is how models get robust.
Common failure modes and the fastest fix
False positives on background texture → add negatives from that exact environment
Misses on small objects → increase input size (if GPU allows) and collect more small-object examples
Misses under glare/blur → add those cases intentionally to the dataset (do not rely only on augmentation)
Boxes are consistently too loose/tight → fix annotation style consistency, then retrain