After Training
You are now close to deploying your model. This is the point where training numbers turn into a model that actually works in your real workflow.
1) Find Your Training Output (weights + config + names)
When training ends, the log/status messages show where outputs are written.
In AugeLab Studio, training outputs are typically created in a folder named:
XXX_config
right next to your dataset folder (XXX is your dataset folder name).
Typical structure:
XXX_config/
  XXX.names
  XXX.cfg
  backup/
    XXX_last.weights (if available)
    XXX_best.weights (if available)

At minimum, you should keep these files together:
Weights file: .weights (there may also be a "best" and a "last" variant)
Config file: .cfg
Class names file: .names
Do not rename/reorder classes in your .names file after training unless you also remap label IDs. Class order must match label IDs.
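If you want to double-check this, a minimal sketch along the following lines (file paths are illustrative) prints the label-ID-to-name mapping and compares the .names count against the classes= entries in the .cfg:

```python
from pathlib import Path

names_path = Path("XXX_config/XXX.names")  # illustrative path
cfg_path = Path("XXX_config/XXX.cfg")      # illustrative path

# Label IDs in YOLO/Darknet annotations are simply line indices in the .names file.
names = [line.strip() for line in names_path.read_text().splitlines() if line.strip()]
for class_id, name in enumerate(names):
    print(f"label id {class_id} -> {name}")

# Each [yolo] layer in the .cfg declares "classes=" and it should match the .names count.
cfg_classes = [
    int(line.split("=")[1])
    for line in cfg_path.read_text().splitlines()
    if line.strip().replace(" ", "").startswith("classes=")
]
assert all(c == len(names) for c in cfg_classes), (
    f".names has {len(names)} classes but cfg declares {cfg_classes}"
)
```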
2) Validate the Model Before You Deploy
Before you wire the model into production logic, do a fast validation pass.
Recommended validation sets:
Validation set: 30–100 images that represent real conditions (good and bad lighting, blur, clutter, edge cases)
Real footage: a short video clip from the actual camera (if you will deploy on a fixed camera)
What you are looking for:
The model detects the right object consistently
The boxes are “good enough” for your logic (not necessarily perfect)
False positives are acceptable (or can be filtered)
Rare-but-important cases are detected
A model with high mAP can still fail in production if your validation split was too small or too clean. Checking against a realistic validation set and real footage is what prevents that.
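If you prefer to script this quick pass outside Studio, a rough sketch using OpenCV's Darknet loader could look like the following (the file paths, the validation folder, and the 416x416 input size are assumptions; match the size to your .cfg):

```python
import glob

import cv2
import numpy as np

# Load the trained Darknet model (illustrative paths).
net = cv2.dnn.readNetFromDarknet("XXX_config/XXX.cfg", "XXX_config/backup/XXX_best.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)  # size must match your .cfg

names = [n.strip() for n in open("XXX_config/XXX.names") if n.strip()]

# Run inference over the validation images and print what was detected in each.
for path in sorted(glob.glob("validation_images/*.jpg")):  # illustrative folder
    image = cv2.imread(path)
    class_ids, scores, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)
    labels = [
        f"{names[int(c)]}: {float(s):.2f}"
        for c, s in zip(np.asarray(class_ids).flatten(), np.asarray(scores).flatten())
    ]
    print(path, labels if labels else "no detections")
```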
3) Load the Model Into a Studio (Inference)
In AugeLab Studio, the usual next step is to build (or update) a .pmod scenario that runs inference.
A) Use “Object Detection - Custom” (recommended)
Use this node when you want to run your own YOLO/Darknet-trained model inside a workflow.
Workflow:
Add Object Detection - Custom to your graph (AI Applications category).
In the block UI:
Click Open Weight File and select your .weights
Click Open Config File and select your .cfg
Click Open Class File and select your .names
Select which classes you want to detect (checkbox list).
Set Confidence Threshold (start around 0.5–0.8 and tune).
Connect an image source to the block input and preview the output image.
Outputs you can use in your logic:
Output image with drawn detections
Object Count
Object Locations / Sizes
Object Classes
Rectangles
B) Use “Object Detection - Custom (CPU)” (fallback)
Use this block when you want the same workflow but without GPU acceleration.
It uses CPU inference, so it will be slower.
The setup is the same: weights + cfg + names.
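If you ever test the same model outside Studio with OpenCV, a rough analogue of this CPU/GPU choice is the backend/target selection. This is only a sketch; the CUDA path requires an OpenCV build with CUDA support, otherwise use the CPU target:

```python
import cv2

net = cv2.dnn.readNetFromDarknet("XXX_config/XXX.cfg", "XXX_config/backup/XXX_best.weights")

use_gpu = False  # flip only if your OpenCV build has CUDA support
if use_gpu:
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
else:
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
```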
4) Tune Thresholds (what actually matters)
Most “deployment quality” improvements come from threshold tuning, not from running training longer.
Start with these practical steps:
Increase confidence threshold if you see too many false positives.
Decrease confidence threshold if you miss objects.
Evaluate on your validation set and at least one real camera clip.
Do not tune on a single image. Always tune on a small set. Otherwise you will “overfit your threshold” to one scene.
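As a sketch of what tuning on a set can look like (reusing the illustrative OpenCV setup from the validation snippet above), sweep a few thresholds over the same small image set and compare detection counts:

```python
import glob

import cv2

# Same illustrative model setup as in the validation sketch above.
net = cv2.dnn.readNetFromDarknet("XXX_config/XXX.cfg", "XXX_config/backup/XXX_best.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

images = [cv2.imread(p) for p in sorted(glob.glob("validation_images/*.jpg"))]  # illustrative folder

# Count detections across the whole set at several thresholds, never on a single image.
for threshold in (0.3, 0.5, 0.7, 0.8):
    total = 0
    for image in images:
        class_ids, _, _ = model.detect(image, confThreshold=threshold, nmsThreshold=0.4)
        total += len(class_ids)
    print(f"confidence >= {threshold}: {total} detections across {len(images)} images")
```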
5) Package for Sharing / Reproducibility
If you want the model to be usable later (or by someone else), package it intentionally.
Recommended folder layout:
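One possible layout (folder and file names are illustrative) keeps everything a future user needs side by side:

my_model_v1/
  XXX.cfg
  XXX.names
  XXX_best.weights
  README.md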
What to write in the README:
What dataset the model was trained on (version/date)
What classes mean (if ambiguous)
Recommended confidence threshold range
Known failure cases (glare, tiny objects, extreme occlusion)
6) If It Fails in Production (what to do next)
When a model fails after deployment, the fix usually follows these steps, in this order:
Collect the failures (save frames that show the miss/false-positive)
Label them correctly
Retrain or fine-tune with the new data
This is how models get robust.
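As one way to start step 1, here is a sketch that saves candidate failure frames from a clip for later review and labeling. The stream source, the output folder, and the "no detection" criterion are all assumptions; adapt the criterion to whatever counts as a failure in your workflow:

```python
from pathlib import Path

import cv2

# Same illustrative model setup as in the earlier sketches.
net = cv2.dnn.readNetFromDarknet("XXX_config/XXX.cfg", "XXX_config/backup/XXX_best.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

Path("failures").mkdir(exist_ok=True)
capture = cv2.VideoCapture("camera_clip.mp4")  # or a camera index / RTSP URL

saved = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    class_ids, scores, _ = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    # Example criterion: keep frames where the model found nothing, so they can be
    # reviewed, labeled, and folded into the next training run.
    if len(class_ids) == 0:
        cv2.imwrite(f"failures/frame_{saved:05d}.jpg", frame)
        saved += 1

capture.release()
print(f"saved {saved} candidate failure frames")
```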