Object Detection

The Detect action is the foundation of most security workflows. It uses artificial intelligence (AI) to continuously analyze video streams and identify specific objects.


Overview

Object Detection is like having a tireless security guard who watches your cameras 24/7 and knows exactly what to look for. Whether you need to detect unauthorized people, vehicles in restricted areas, or workers without safety equipment, this action handles it all.


What It Does

1. Connects to Cameras

The action connects to the video streams (cameras) you've selected in your configuration. It supports multiple cameras simultaneously, allowing you to monitor several areas at once.

2. Analyzes Every Frame

Using AI models trained to recognize specific objects, the action examines each frame of video. The AI has been trained on millions of images and can recognize:

  • People (with various poses and clothing)
  • Vehicles (cars, trucks, motorcycles)
  • Safety Equipment (hard hats, safety vests)
  • Animals
  • Custom Objects (based on your training data)
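As a rough illustration, the per-frame step reduces to running the model and keeping only detections that match your selected labels and clear the confidence threshold. This is a sketch, not the product's actual API; the `Detection` shape and label strings are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "vehicle", "hard_hat" (illustrative names)
    confidence: float   # model score in the range 0-1

def filter_detections(raw, wanted_labels, threshold=0.3):
    """Keep only detections whose label was selected in Model Labels and
    whose confidence clears the configured threshold (default 0.3)."""
    return [d for d in raw
            if d.label in wanted_labels and d.confidence >= threshold]

raw = [Detection("person", 0.91), Detection("vehicle", 0.25), Detection("dog", 0.80)]
kept = filter_detections(raw, {"person", "vehicle"})
# Only the high-confidence person survives: the vehicle is below the 0.3
# threshold, and "dog" was not among the selected labels.
```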

3. Applies Zone Filtering

Not every part of a camera view is relevant. The action respects the Detection Zones you've configured, only analyzing activity within those defined areas. This prevents false alarms from:

  • Public sidewalks visible through a window
  • Trees moving in the wind
  • Areas where routine activity is expected and should not trigger alerts
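Under the hood, zone filtering typically comes down to a point-in-polygon test on each detection's position. A minimal sketch, assuming zones are polygons in pixel coordinates (the function name is illustrative):

```python
def point_in_zone(x, y, polygon):
    """Ray-casting point-in-polygon test.
    polygon is a list of (x, y) vertices in pixel coordinates."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges that a horizontal ray from (x, y) crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(0, 0), (100, 0), (100, 100), (0, 100)]  # a square detection zone
print(point_in_zone(50, 50, zone))    # center of the zone -> True
print(point_in_zone(150, 50, zone))   # outside the zone -> False
```

A detection whose reference point (often the bottom-center of its bounding box) falls outside every configured zone is simply discarded.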

4. Tracks Movement (Optional)

When Tracking and Zone Dwell is enabled, the action doesn't just detect objects; it follows them over time and records zone dwell events to the database. This enables advanced behaviors through separate workflow branches:

  • Loitering Detection: Use Zone Dwell Reader to find records exceeding a duration threshold, then create violations
  • Path Analysis: Understand how people move through a space
  • Dwell Time: Measure how long someone spends in a specific zone
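Dwell time itself is simple arithmetic over a track's zone entry and exit times. A minimal sketch:

```python
from datetime import datetime, timedelta

def dwell_seconds(entered_at, exited_at):
    """Dwell time is exit time minus entry time for one tracked
    object inside one zone."""
    return (exited_at - entered_at).total_seconds()

entered = datetime(2024, 1, 1, 22, 0, 0)
exited = entered + timedelta(minutes=6)
seconds = dwell_seconds(entered, exited)  # 360 s, over a 300 s loitering threshold
```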

5. Captures Evidence

When a detection occurs, the action automatically:

  • Saves a high-resolution image of the moment
  • Records a short video clip (typically 10 seconds)
  • Draws bounding boxes around detected objects
  • Labels each detection with its type and confidence score

This evidence is attached to the workflow and can be included in violation reports.
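Two small details of the annotation step can be sketched in plain Python (both helper names are hypothetical): composing the overlay text for each detection, and clamping a bounding box to the frame so drawing never overflows the image.

```python
def annotate_label(label, confidence):
    """Build the overlay text drawn next to each bounding box."""
    return f"{label} {confidence:.0%}"

def clip_box(box, width, height):
    """Clamp an (x1, y1, x2, y2) box to the frame dimensions."""
    x1, y1, x2, y2 = box
    return (max(0, x1), max(0, y1), min(width, x2), min(height, y2))

print(annotate_label("person", 0.87))             # "person 87%"
print(clip_box((-10, 5, 1300, 600), 1280, 720))   # (0, 5, 1280, 600)
```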


Configuration Options

When setting up the Object Detection action, you'll configure the following:

Required Settings

| Setting | Description | Tips |
| --- | --- | --- |
| Streams | Which cameras to monitor | Select cameras that cover the areas of interest. Multiple cameras can be selected. |
| Model Labels | What objects to detect | Choose from the available object types (Person, Vehicle, Hard Hat, etc.). |
| Stream Zones | Where to look in the camera view | Only Detection Zones (Type 1) will appear. Make sure zones are configured in the Zone Editor first. |

Optional Settings

| Setting | Default | Description | Tips |
| --- | --- | --- | --- |
| Resolution | 720 | Processing resolution in pixels | Higher = more accurate but slower. 720p is usually optimal. |
| Confidence Threshold | 0.3 | How certain the AI must be (0-1) | Start at 0.3; increase if there are too many false alarms, decrease if events are missed. |
| Max Runtime | 900 seconds | How long to analyze before stopping | For continuous monitoring, use scheduled rules that restart the action. |
| Video Output | 1 | Output video mode | 1 = Standard, 2 = High quality with annotations |
| Enable Tracking | Off | Track objects over time | Enable for loitering detection or path analysis. |
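Taken together, a configuration for after-hours person detection might look like the following sketch. The key names and camera identifiers are illustrative, not the product's actual schema.

```python
# Hypothetical configuration payload for the Object Detection action.
# Key names are illustrative, not a documented schema.
detect_config = {
    "streams": ["warehouse-cam-1", "warehouse-cam-2"],  # required: cameras to monitor
    "model_labels": ["Person"],                         # required: objects to detect
    "stream_zones": [1, 4],                             # required: Detection Zone (Type 1) IDs
    "resolution": 720,                                  # optional, default 720
    "confidence_threshold": 0.3,                        # optional, default 0.3
    "max_runtime": 900,                                 # optional, seconds
    "video_output": 1,                                  # 1 = standard, 2 = annotated
    "enable_tracking": False,                           # turn on for loitering/dwell
}
```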

Understanding Results

After running, the action will report one of these results:

| Result | Meaning | What Happens Next |
| --- | --- | --- |
| detected | One or more objects were found matching your criteria | The workflow continues to the next action (usually Create Violation or further analysis) |
| no_detection | Nothing was found during the entire runtime | The workflow typically ends or takes an alternate path |
| stream_error | Could not connect to one or more cameras | Check camera connectivity and RTSP credentials |
| processing_error | An unexpected error occurred during analysis | Check system logs for details |
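Branching on these result codes can be sketched as follows; the handler names are hypothetical, standing in for whatever actions your workflow wires up next.

```python
def route_result(result):
    """Map an Object Detection result code to the next workflow step.
    Handler names are illustrative, not built-in action names."""
    if result == "detected":
        return "create_violation"
    if result == "no_detection":
        return "end_workflow"
    if result in ("stream_error", "processing_error"):
        return "alert_operator"
    raise ValueError(f"unknown result: {result}")

print(route_result("detected"))       # create_violation
print(route_result("stream_error"))   # alert_operator
```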

Common Use Cases

After-Hours Intrusion Detection

  • Configuration: Select warehouse cameras, detect "Person", zones covering entrances
  • Schedule: Run from 10 PM to 6 AM
  • Workflow: If detected → Create Violation → Alert Security

Safety Equipment Compliance

  • Configuration: Select construction site cameras, detect "Person" and "Hard Hat"
  • Logic: If Person detected but no Hard Hat → Create Violation
  • Use: Ensure workers are wearing required PPE
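The compliance check above reduces to simple set logic over the labels detected in a frame; a minimal sketch (label strings follow the configured Model Labels):

```python
def ppe_violation(labels_in_frame):
    """Flag a violation when a person is present without a hard hat.
    Hypothetical helper, not a built-in workflow function."""
    return "Person" in labels_in_frame and "Hard Hat" not in labels_in_frame

print(ppe_violation({"Person"}))               # True  -> create violation
print(ppe_violation({"Person", "Hard Hat"}))   # False -> worker is compliant
print(ppe_violation(set()))                    # False -> nobody in frame
```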

Vehicle Monitoring

  • Configuration: Select parking lot camera, detect "Vehicle", zone covering reserved spots
  • Schedule: Run during business hours
  • Workflow: If detected → Run ANPR (license plate reading) → Log entry

Loitering Detection

Loitering detection requires a multi-step workflow:

  1. Enable Tracking: First, enable "Tracking and Zone Dwell" in the Object Detection action. This records zone dwell events to the database as objects are tracked.

  2. Create a Loitering Branch Rule: Set up a separate workflow branch that:

    • Uses Zone Dwell Reader to query records that exceeded a specified duration (e.g., duration_seconds > 300 for 5 minutes)
    • Passes matching records to Zone Dwell Violation Snapshot to prepare annotated evidence
    • Sends the evidence to Create Violation to record the incident and trigger alerts
Example use cases:

  • Retail loss prevention - detect people loitering in aisles
  • Restricted area monitoring - alert when someone stays too long in sensitive zones
  • Abandoned object detection - detect items like a milk bottle left in an area for 5+ minutes
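The filtering step of the loitering branch can be sketched as follows. The record fields (object_id, zone, duration_seconds) are assumptions about what Zone Dwell Reader returns, not a documented schema.

```python
# Dwell events previously recorded to the database by the tracking step.
dwell_records = [
    {"object_id": 7,  "zone": "aisle-3",  "duration_seconds": 412},
    {"object_id": 9,  "zone": "aisle-3",  "duration_seconds": 45},
    {"object_id": 12, "zone": "entrance", "duration_seconds": 330},
]

LOITER_THRESHOLD = 300  # 5 minutes, as in the example rule above

# Records exceeding the threshold would be passed to Zone Dwell
# Violation Snapshot and then Create Violation.
violations = [r for r in dwell_records
              if r["duration_seconds"] > LOITER_THRESHOLD]
```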

Troubleshooting

No Detections When There Should Be

  1. Check Zone Configuration: Ensure your detection zones cover the areas where activity occurs. View the camera in the Zone Editor to verify.

  2. Lower the Confidence Threshold: Try reducing from 0.3 to 0.2 or even 0.15. The AI might be detecting objects but not confident enough to report them.

  3. Check Camera Quality: Blurry, dark, or low-resolution footage makes detection difficult. Ensure cameras are properly focused and well-lit.

  4. Verify Model Labels: Make sure you've selected the correct object types. If you want to detect vehicles but only selected "Person", vehicles will be ignored.

  5. Check Camera Connectivity: If the stream is offline, detection cannot occur. Verify the camera is accessible via its RTSP URL.

Too Many False Detections

  1. Increase the Confidence Threshold: Raise from 0.3 to 0.4 or 0.5. This makes the AI more selective.

  2. Refine Your Zones: Draw tighter zones that exclude areas with frequent movement that shouldn't trigger alerts (public sidewalks, wind-blown trees, etc.).

  3. Check for Reflections: Glass windows and mirrors can create phantom detections. Adjust zones to exclude reflective surfaces.

  4. Review the Evidence: Look at the captured images to understand what's triggering false alarms, then adjust accordingly.

Stream Connection Errors

  1. Verify RTSP URL: Ensure the camera's stream URL is correct and accessible.

  2. Check Network: Confirm the ResEngine server can reach the camera network.

  3. Test Credentials: If the stream requires authentication, verify the username and password are correct.

  4. Camera Reboot: Some cameras need periodic restarts to maintain stable streams.

Tracking Not Working

  1. Verify Tracking is Enabled: Check that "Enable Tracking and Zone Dwell" is turned on in the configuration.

  2. Check Zone Types: Ensure you have Detection Zones (Type 1) configured, not just Crop Zones (Type 2).

  3. Adjust Frame Thresholds: If tracks are lost too quickly, increase the "Max Missed Frames" setting.