# Find Reference

This function block locates a single instance of a reference image inside a target image, estimates its position and orientation, and returns visual and geometric results that help you inspect or further process the detected object.

## 📥 Inputs <a href="#inputs" id="inputs"></a>

`Object Image` The image in which the reference should be located (target image).

`Reference Image` The (typically smaller) reference image to search for inside the object image.

`Reference Image ROI` Optional shape input to restrict matching to a region inside the reference image (useful to ignore irrelevant parts).

## 📤 Outputs <a href="#outputs" id="outputs"></a>

`Result Image` The object image annotated with the detected reference bounding polygon.

`Detected Object Image` A perspective-corrected crop of the detected object, transformed to match reference orientation/size.

`Keypoints Image` A visualization of feature/keypoint detections on the object image to help verify matches.

`Bounding Box` The axis-aligned rectangle that encloses the detected reference in the object image.

`Corner Coordinates` The four polygon corner coordinates of the detected reference area.

`Center Position` The center point (x, y) of the detected reference.
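The geometric outputs are closely related: from the four `Corner Coordinates`, both the `Bounding Box` and the `Center Position` can be derived. A minimal sketch of that relationship (the corner values below are made up for illustration):

```python
import numpy as np

# Hypothetical corner coordinates of a detected, slightly rotated reference
# (top-left, top-right, bottom-right, bottom-left in object-image pixels).
corners = np.array([[120, 80], [300, 95], [290, 240], [110, 225]], dtype=float)

# Bounding Box: the axis-aligned rectangle enclosing all four corners.
x_min, y_min = corners.min(axis=0)
x_max, y_max = corners.max(axis=0)
bounding_box = (x_min, y_min, x_max - x_min, y_max - y_min)  # (x, y, w, h)

# Center Position: the mean of the four corners.
center = corners.mean(axis=0)

print(bounding_box)  # (110.0, 80.0, 190.0, 160.0)
print(center)        # [205. 160.]
```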

## 🕹️ Controls <a href="#controls" id="controls"></a>

This block does not expose additional widget controls beyond its inputs. Use the `Reference Image ROI` input to limit the reference area when needed.

## 🎨 Features <a href="#features" id="features"></a>

* Visual detection result with polygon overlay for quick verification.
* Returns both a perspective-corrected crop and geometric outputs (corners, bounding box, center) for downstream processing.
* Accepts an optional ROI on the reference image to focus matching on a sub-region.
* Produces a keypoints visualization to help debug matching quality.

## 📝 Usage Instructions <a href="#usage" id="usage"></a>

1. Provide the source image to `Object Image` and the template/reference to `Reference Image`.
2. Optionally connect a shape to `Reference Image ROI` to target a specific area inside the reference.
3. Run the scenario. The block attempts to locate the reference and outputs annotated visuals plus geometry.
4. Use the outputs to draw overlays, crop or measure the detected area, or feed downstream decision logic.

## 📊 How it runs <a href="#evaluation" id="evaluation"></a>

When executed, the block compares distinctive visual features between the reference and object images, estimates the position and orientation of the best match, and returns a visual annotation plus transformed and numeric geometry outputs that represent the found object.
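The block's matching algorithm is internal, but the locate-and-report pattern it describes can be illustrated with a much simpler technique: brute-force normalized cross-correlation, which scans the object image for the window most similar to the reference. This is a sketch of the idea, not the block's actual implementation:

```python
import numpy as np

def find_reference(obj_img, ref_img):
    """Return the (x, y) top-left position of the best match of ref_img
    inside obj_img, scored by normalized cross-correlation."""
    H, W = obj_img.shape
    h, w = ref_img.shape
    ref = ref_img - ref_img.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = obj_img[y:y + h, x:x + w]
            win = win - win.mean()
            denom = np.sqrt((win ** 2).sum() * (ref ** 2).sum())
            score = (win * ref).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

# Synthetic check: paste a textured patch at a known position and find it.
rng = np.random.default_rng(0)
obj = rng.random((60, 80))
ref = rng.random((12, 16))
obj[20:32, 30:46] = ref  # reference placed at (x=30, y=20)

pos, score = find_reference(obj, ref)
print(pos)  # (30, 20)
```

A feature-based matcher (as the description above suggests) additionally recovers orientation and perspective, which is what makes the corner coordinates and the perspective-corrected crop possible.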

## 💡 Tips and Tricks <a href="#tips-and-tricks" id="tips-and-tricks"></a>

* To focus detection on a specific area of the input image, crop beforehand with `Image ROI Select` or `Image ROI` and feed the cropped image into `Object Image`.
* If scale or orientation differs significantly between reference and object, try resizing the inputs with `Image Resize` before matching.
* Improve robustness under varying lighting by preprocessing with `Adjust Colors`, `Contrast Optimization`, or `Blur` to reduce noise.
* Use `Show Image` to preview `Result Image` or `Detected Object Image` interactively while tuning inputs.
* Combine the block output with `Draw Detections` or `Draw Result On Image` to overlay textual or rectangular annotations on live images.
* If you need a perspective matrix for other measurements, consider following up with `Perspective Transform` using the detected corner coordinates.
* When the match returns many false positives, try limiting the reference features using `Reference Image ROI` or apply `Image Threshold` on inputs to simplify textures.

(hint) For region-based workflows, use `Get ROI` after detection to extract and forward the detected area to other processing blocks.
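Expanding on the `Perspective Transform` tip above: a 3×3 homography that maps the detected quadrilateral back to an upright reference rectangle can be estimated from the four corner correspondences with a direct linear transform. The sketch below hand-rolls that solve for illustration; the corner values are hypothetical, and a real pipeline would normally use a library routine instead:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve for the 3x3 homography H (h33 fixed to 1) such that
    dst ~ H @ src, from exactly four point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply homography H to a 2D point and dehomogenize."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical detected corners (a tilted quad in the object image),
# mapped to an upright 200x150 reference rectangle.
detected = [(120, 80), (300, 95), (290, 240), (110, 225)]
upright = [(0, 0), (200, 0), (200, 150), (0, 150)]
H = homography_from_corners(detected, upright)
```

Warping the object image through `H` yields a perspective-corrected crop analogous to `Detected Object Image`.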

## 🛠️ Troubleshooting <a href="#troubleshooting" id="troubleshooting"></a>

* No detection found: ensure the reference is actually visible in the object image and has sufficient texture; if the reference appears at a very different scale in the object image, rescale one of the inputs with `Image Resize`.
* Poor or unstable matches: improve lighting and contrast with `Adjust Colors` or `Contrast Optimization`; reduce noise with `Blur`.
* Detection located but warped output looks wrong: validate the corner coordinates using `Show Image` on the `Keypoints Image` and consider restricting the `Reference Image ROI`.
* Slow processing on large images: downscale with `Image Resize` before feeding into this block, then map results back to original coordinates if needed.
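For the downscaling tip above, mapping results back to the original resolution only requires dividing the reported coordinates by the scale factor you used. A small sketch with illustrative values:

```python
# Illustrative: detection ran on an image downscaled by this factor,
# e.g. a 4000x3000 source resized to 1000x750.
scale = 0.25

# Corner coordinates as reported in the downscaled image.
corners_small = [(110.0, 80.0), (300.0, 95.0), (290.0, 240.0), (110.0, 225.0)]

# Map each point back to original-image pixel coordinates.
corners_full = [(x / scale, y / scale) for x, y in corners_small]

print(corners_full[0])  # (440.0, 320.0)
```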
