After Training

You are now close to deploying your model. This is the point where you turn raw training outputs into a model that actually works in your real workflow.


1) Find Your Training Output (weights + config + names)

When training ends, the log/status messages show where outputs are written.

In AugeLab Studio, training outputs are typically created in a folder named:

XXX_config

right next to your dataset folder (XXX is your dataset folder name).

Typical structure:

XXX_config/
    XXX.names
    XXX.cfg
    backup/
        XXX_last.weights  (if available)
        XXX_best.weights  (if available)

At minimum, you should keep these together:

  • Weights file: .weights (sometimes there are separate best and last checkpoints)

  • Config file: .cfg

  • Class names file: .names

Which weights should I use: best or last?

If mAP is reported during training, many YOLO/Darknet workflows keep a “best so far” checkpoint.

  • Use best when mAP improved and then later dropped (overfitting).

  • Use last if training ended while mAP was still improving and stable.

If you don’t have a “best” file, pick the final weights first, then validate.


2) Validate the Model Before You Deploy

Before you wire the model into production logic, do a fast validation pass.

Recommended validation sets:

  • Validation set: 30–100 images that represent real life (good + bad lighting, blur, clutter, edge cases)

  • Real camera footage: a short video from the actual camera (if you will deploy on a fixed camera)

What you are looking for:

  • The model detects the right object consistently

  • The boxes are “good enough” for your logic (not necessarily perfect)

  • False positives are acceptable (or can be filtered)

  • Rare-but-important cases are detected
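
If you want to sanity-check the files outside Studio first, OpenCV’s DNN module can load the same cfg + weights + names trio. A minimal sketch, assuming OpenCV with DNN support is installed; the paths, the 416x416 input size, and the validation_set folder are placeholders for your own values:

import glob
import numpy as np
import cv2

CFG = "XXX_config/XXX.cfg"
WEIGHTS = "XXX_config/backup/XXX_best.weights"  # or XXX_last.weights
NAMES = "XXX_config/XXX.names"

with open(NAMES) as f:
    classes = [line.strip() for line in f if line.strip()]

# Load the Darknet model; the input size should match the width/height in your .cfg
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255, swapRB=True)

for path in sorted(glob.glob("validation_set/*.jpg")):
    img = cv2.imread(path)
    class_ids, scores, _boxes = model.detect(img, confThreshold=0.5, nmsThreshold=0.4)
    found = [(classes[int(i)], round(float(s), 2))
             for i, s in zip(np.array(class_ids).reshape(-1), np.array(scores).reshape(-1))]
    print(f"{path}: {len(found)} detection(s) {found}")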


3) Load the Model Into Studio (Inference)

In AugeLab Studio, the usual next step is to build (or update) a .pmod scenario that runs inference.

A) Use “Object Detection - Custom” (recommended)

Use this node when you want to run your own YOLO/Darknet-trained model inside a workflow.

Workflow:

  1. Add Object Detection - Custom to your graph (AI Applications category).

  2. In the block UI:

    • Click Open Weight File and select your .weights

    • Click Open Config File and select your .cfg

    • Click Open Class File and select your .names

  3. Select which classes you want to detect (checkbox list).

  4. Set Confidence Threshold (start around 0.5–0.8 and tune).

  5. Connect an image source to the block input and preview the output image.

Outputs you can use in your logic:

  • Output image with drawn detections

  • Object Count

  • Object Locations / Sizes

  • Object Classes

  • Rectangles


If “Object Detection - Custom” is not available, your build may not have CUDA/OpenCV DNN support enabled. Try the CPU block below, or install the required modules from the Module Downloader (see ai-training.md).

B) Use “Object Detection - Custom (CPU)” (fallback)

Use this block when you want the same workflow but without GPU acceleration.

  • It uses CPU inference, so it will be slower.

  • The setup is the same: weights + cfg + names.
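
Outside Studio, the closest equivalent is the backend/target setting in OpenCV’s DNN API. This is only an approximation of what the blocks do internally, and the paths are placeholders:

import cv2

net = cv2.dnn.readNetFromDarknet("XXX_config/XXX.cfg",
                                 "XXX_config/backup/XXX_best.weights")

# CPU inference: works everywhere, slower
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

# GPU inference: only if your OpenCV build has CUDA support
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)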


4) Tune Thresholds (what actually matters)

Most “deployment quality” improvements come from threshold tuning, not from running training longer.

Start with these practical steps:

  • Increase confidence threshold if you see too many false positives.

  • Decrease confidence threshold if you miss objects.

  • Evaluate on your validation set and at least one real camera clip.
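
One way to pick a starting value is to run the model once at a deliberately low threshold and then count how many detections would survive each candidate threshold. A rough sketch, assuming OpenCV and the same placeholder paths and folders as in step 2:

import glob
import numpy as np
import cv2

net = cv2.dnn.readNetFromDarknet("XXX_config/XXX.cfg",
                                 "XXX_config/backup/XXX_best.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255, swapRB=True)

# Collect raw confidence scores once, at a deliberately low threshold
all_scores = []
for path in glob.glob("validation_set/*.jpg"):
    _ids, scores, _boxes = model.detect(cv2.imread(path), confThreshold=0.1, nmsThreshold=0.4)
    all_scores.extend(np.array(scores).reshape(-1).tolist())

# Count how many detections would survive each candidate threshold
for t in (0.3, 0.5, 0.7, 0.8):
    kept = sum(1 for s in all_scores if s >= t)
    print(f"confidence >= {t:.1f}: {kept} detections kept across the set")

The counts alone do not tell you which detections are correct, so spot-check a few images at the threshold you settle on.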


5) Package for Sharing / Reproducibility

If you want the model to be usable later (or by someone else), package it intentionally.

Recommended folder layout:
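
For example (the folder name my_model_v1 and the exact file names are placeholders; keep whatever names your training run produced):

my_model_v1/
    XXX.cfg
    XXX.names
    XXX_best.weights   (or XXX_last.weights)
    README.md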

What to write in the README:

  • What dataset the model was trained on (version/date)

  • What classes mean (if ambiguous)

  • Recommended confidence threshold range

  • Known failure cases (glare, tiny objects, extreme occlusion)


If your .pmod scenario references these resources, consider keeping them as relative project resources so the scenario remains portable. See also: headless-studio (missing-resource load behavior).


6) If It Fails in Production (what to do next)

When a model fails after deployment, the fix usually comes down to these steps (in this order):

  1. Collect the failures (save frames that show the miss/false-positive)

  2. Label them correctly

  3. Retrain or fine-tune with the new data

This is how models get robust.
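
For step 1 on a fixed camera, a simple approach is to save every frame where the model finds nothing confident and label those frames later. A sketch, assuming OpenCV, the same placeholder paths as above, and camera index 0; it catches misses, while false positives still need manual review:

import os
import time
import numpy as np
import cv2

os.makedirs("failures", exist_ok=True)

net = cv2.dnn.readNetFromDarknet("XXX_config/XXX.cfg",
                                 "XXX_config/backup/XXX_best.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255, swapRB=True)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    _ids, scores, _boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    # Save frames with no confident detection so they can be labeled later
    if np.array(scores).reshape(-1).size == 0:
        cv2.imwrite(f"failures/{int(time.time() * 1000)}.jpg", frame)
cap.release()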

Common failure modes and the fastest fix
  • False positives on background texture → add negatives from that exact environment

  • Misses on small objects → increase input size (if GPU allows) and collect more small-object examples

  • Misses under glare/blur → add those cases intentionally to the dataset (do not rely only on augmentation)

  • Boxes are consistently too loose/tight → fix annotation style consistency, then retrain
