Object Detection
The Detect action is the foundation of most security workflows. It uses artificial intelligence (AI) to continuously analyze video streams and identify specific objects.
Overview
Object Detection is like having a tireless security guard who watches your cameras 24/7 and knows exactly what to look for. Whether you need to detect unauthorized people, vehicles in restricted areas, or workers without safety equipment, this action handles it all.
What It Does
1. Connects to Cameras
The action connects to the video streams (cameras) you've selected in your configuration. It supports multiple cameras simultaneously, allowing you to monitor several areas at once.
2. Analyzes Every Frame
Using AI models trained to recognize specific objects, the action examines each frame of video. The AI has been trained on millions of images and can recognize:
- People (with various poses and clothing)
- Vehicles (cars, trucks, motorcycles)
- Safety Equipment (hard hats, safety vests)
- Animals
- Custom Objects (based on your training data)
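As a rough illustration of this step, the sketch below filters per-frame detections down to the object types you selected. The detection format (label, confidence, box) is illustrative, not the product's actual API:

```python
# Minimal sketch: keep only detections whose label was selected in the
# action's Model Labels configuration. Field names here are assumptions.

def filter_detections(detections, selected_labels):
    """Keep only detections whose label was selected in Model Labels."""
    return [d for d in detections if d["label"] in selected_labels]

frame_detections = [
    {"label": "Person", "confidence": 0.91, "box": (120, 40, 260, 380)},
    {"label": "Vehicle", "confidence": 0.78, "box": (400, 200, 760, 430)},
    {"label": "Dog", "confidence": 0.55, "box": (50, 300, 140, 400)},
]

kept = filter_detections(frame_detections, {"Person", "Vehicle"})
# The "Dog" detection is dropped because that label was not selected.
```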
3. Applies Zone Filtering
Not every part of a camera view is relevant. The action respects the Detection Zones you've configured, only analyzing activity within those defined areas. This prevents false alarms from:
- Public sidewalks visible through a window
- Trees moving in the wind
- Areas that are supposed to be active
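Zone filtering boils down to a point-in-polygon test: a detection counts only if it falls inside a configured zone. The sketch below uses the standard ray-casting test on the centre of a bounding box; the zone and box formats are assumptions for illustration:

```python
# Illustrative zone filter: a detection is kept only when the centre of
# its bounding box falls inside a detection-zone polygon.

def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(100, 100), (500, 100), (500, 400), (100, 400)]  # rectangular zone
box = (180, 150, 260, 350)  # detection bounding box (x1, y1, x2, y2)
cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
print(point_in_polygon(cx, cy, zone))  # True: detection centre is in the zone
```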
4. Tracks Movement (Optional)
When Tracking and Zone Dwell is enabled, the action doesn't just detect objects—it follows them over time and records zone dwell events to the database. This enables advanced behaviors through separate workflow branches:
- Loitering Detection: Use Zone Dwell Reader to find records exceeding a duration threshold, then create violations
- Path Analysis: Understand how people move through a space
- Dwell Time: Measure how long someone spends in a specific zone
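Conceptually, dwell time is just accumulated in-zone time per tracked object. The sketch below shows the idea under a simplifying assumption of one observation per frame at a fixed interval; the names are illustrative:

```python
# Sketch: accumulate per-track dwell time from per-frame (track_id, in_zone)
# observations, assuming a fixed interval between frames.

from collections import defaultdict

def accumulate_dwell(observations, frame_interval=1.0):
    """Sum in-zone seconds for each track across a sequence of frames."""
    dwell = defaultdict(float)
    for track_id, in_zone in observations:
        if in_zone:
            dwell[track_id] += frame_interval
    return dict(dwell)

# Two tracked people observed over five frames, one second apart.
obs = [(1, True), (2, False), (1, True), (2, True), (1, True)]
print(accumulate_dwell(obs))  # {1: 3.0, 2: 1.0}
```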
5. Captures Evidence
When a detection occurs, the action automatically:
- Saves a high-resolution image of the moment
- Records a short video clip (typically 10 seconds)
- Draws bounding boxes around detected objects
- Labels each detection with its type and confidence score
This evidence is attached to the workflow and can be included in violation reports.
Configuration Options
When setting up the Object Detection action, you'll configure the following:
Required Settings
| Setting | Description | Tips |
|---|---|---|
| Streams | Which cameras to monitor | Select cameras that cover the areas of interest. Multiple cameras can be selected. |
| Model Labels | What objects to detect | Choose from the available object types (Person, Vehicle, Hard Hat, etc.) |
| Stream Zones | Where to look in the camera view | Only Detection Zones (Type 1) will appear. Make sure zones are configured in the Zone Editor first. |
Optional Settings
| Setting | Default | Description | Tips |
|---|---|---|---|
| Resolution | 720 | Processing resolution in pixels | Higher = more accurate but slower. 720p is usually optimal. |
| Confidence Threshold | 0.3 | How certain the AI must be (0-1) | Start at 0.3, increase if too many false alarms, decrease if missing events. |
| Max Runtime | 900 seconds | How long to analyze before stopping | For continuous monitoring, use scheduled rules that restart the action. |
| Video Output | 1 | Output video mode | 1 = Standard, 2 = High quality with annotations |
| Enable Tracking | Off | Track objects over time | Enable for loitering detection or path analysis |
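To see how the settings above fit together, here is a hypothetical configuration shown as a Python dict. The field names and format are purely illustrative; the product's actual configuration schema may differ:

```python
# Hypothetical Object Detection configuration, for illustration only.

detect_config = {
    "streams": ["warehouse-cam-1", "warehouse-cam-2"],  # required: cameras to monitor
    "model_labels": ["Person"],                         # required: objects to detect
    "stream_zones": ["entrance-zone"],                  # required: Detection Zones (Type 1)
    "resolution": 720,            # default processing resolution
    "confidence_threshold": 0.3,  # raise to cut false alarms, lower to catch more
    "max_runtime": 900,           # seconds before the action stops
    "video_output": 1,            # 1 = standard, 2 = high quality with annotations
    "enable_tracking": False,     # turn on for loitering / path analysis
}
```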
Understanding Results
After running, the action will report one of these results:
| Result | Meaning | What Happens Next |
|---|---|---|
| detected | One or more objects were found matching your criteria | The workflow continues to the next action (usually Create Violation or further analysis) |
| no_detection | Nothing was found during the entire runtime | The workflow typically ends or takes an alternate path |
| stream_error | Could not connect to one or more cameras | Check camera connectivity and RTSP credentials |
| processing_error | An unexpected error occurred during analysis | Check system logs for details |
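A workflow branch on these result codes might look like the sketch below. The result strings match the table above; the handler names are placeholders, not product APIs:

```python
# Sketch: route the action's result code to the next workflow step.

def route_result(result):
    if result == "detected":
        return "create_violation"   # continue to the next action
    if result == "no_detection":
        return "end_workflow"       # or take an alternate path
    if result in ("stream_error", "processing_error"):
        return "notify_operator"    # surface the error for investigation
    raise ValueError(f"unknown result: {result}")

print(route_result("detected"))      # create_violation
print(route_result("stream_error"))  # notify_operator
```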
Common Use Cases
After-Hours Intrusion Detection
- Configuration: Select warehouse cameras, detect "Person", zones covering entrances
- Schedule: Run from 10 PM to 6 AM
- Workflow: If detected → Create Violation → Alert Security
Safety Equipment Compliance
- Configuration: Select construction site cameras, detect "Person" and "Hard Hat"
- Logic: If Person detected but no Hard Hat → Create Violation
- Use: Ensure workers are wearing required PPE
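The "Person but no Hard Hat" logic can be sketched as a box-overlap check: a person is considered compliant when a hard-hat detection overlaps their bounding box. Both the overlap heuristic and the detection format are assumptions for illustration:

```python
# Sketch: flag persons with no overlapping hard-hat detection.

def boxes_overlap(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def ppe_violations(persons, hard_hats):
    """Return person boxes with no overlapping hard-hat detection."""
    return [p for p in persons if not any(boxes_overlap(p, h) for h in hard_hats)]

persons = [(100, 50, 180, 300), (300, 60, 380, 310)]
hats = [(110, 40, 170, 90)]  # overlaps the first person only
print(ppe_violations(persons, hats))  # [(300, 60, 380, 310)]
```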
Vehicle Monitoring
- Configuration: Select parking lot camera, detect "Vehicle", zone covering reserved spots
- Schedule: Run during business hours
- Workflow: If detected → Run ANPR (license plate reading) → Log entry
Loitering Detection
Loitering detection requires a multi-step workflow:
1. Enable Tracking: First, enable "Tracking and Zone Dwell" in the Object Detection action. This records zone dwell events to the database as objects are tracked.
2. Create a Loitering Branch Rule: Set up a separate workflow branch that:
   - Uses Zone Dwell Reader to query records that exceeded a specified duration (e.g., `duration_seconds > 300` for 5 minutes)
   - Passes matching records to Zone Dwell Violation Snapshot to prepare annotated evidence
   - Sends the evidence to Create Violation to record the incident and trigger alerts
3. Example Use Cases:
- Retail loss prevention - detect people loitering in aisles
- Restricted area monitoring - alert when someone stays too long in sensitive zones
- Abandoned object detection - detect items like a milk bottle left in an area for 5+ minutes
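The Zone Dwell Reader step in the branch above amounts to filtering dwell records against a duration threshold. A minimal sketch, with illustrative record fields:

```python
# Sketch: select dwell records whose duration exceeds the loitering threshold.

LOITER_THRESHOLD = 300  # seconds (5 minutes)

records = [
    {"track_id": 7, "zone": "aisle-3", "duration_seconds": 412},
    {"track_id": 9, "zone": "aisle-3", "duration_seconds": 45},
    {"track_id": 12, "zone": "entrance", "duration_seconds": 301},
]

loiterers = [r for r in records if r["duration_seconds"] > LOITER_THRESHOLD]
for r in loiterers:
    print(r["track_id"], r["zone"])  # these would go on to Create Violation
```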
Troubleshooting
No Detections When There Should Be
1. Check Zone Configuration: Ensure your detection zones cover the areas where activity occurs. View the camera in the Zone Editor to verify.
2. Lower the Confidence Threshold: Try reducing from 0.3 to 0.2 or even 0.15. The AI might be detecting objects but not confident enough to report them.
3. Check Camera Quality: Blurry, dark, or low-resolution footage makes detection difficult. Ensure cameras are properly focused and well-lit.
4. Verify Model Labels: Make sure you've selected the correct object types. If you want to detect vehicles but only selected "Person", vehicles will be ignored.
5. Check Camera Connectivity: If the stream is offline, detection cannot occur. Verify the camera is accessible via its RTSP URL.
Too Many False Detections
1. Increase the Confidence Threshold: Raise from 0.3 to 0.4 or 0.5. This makes the AI more selective.
2. Refine Your Zones: Draw tighter zones that exclude areas with frequent movement that shouldn't trigger alerts (public sidewalks, wind-blown trees, etc.).
3. Check for Reflections: Glass windows and mirrors can create phantom detections. Adjust zones to exclude reflective surfaces.
4. Review the Evidence: Look at the captured images to understand what's triggering false alarms, then adjust accordingly.
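To see why raising the threshold helps, the toy example below filters the same set of detections at two thresholds. The confidence values are made up:

```python
# Quick illustration: higher confidence thresholds keep fewer detections.

confidences = [0.32, 0.35, 0.41, 0.52, 0.87]

at_030 = [c for c in confidences if c >= 0.30]
at_050 = [c for c in confidences if c >= 0.50]

print(len(at_030))  # 5 — every detection passes at the default threshold
print(len(at_050))  # 2 — only the most confident detections remain
```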
Stream Connection Errors
1. Verify RTSP URL: Ensure the camera's stream URL is correct and accessible.
2. Check Network: Confirm the ResEngine server can reach the camera network.
3. Test Credentials: If the stream requires authentication, verify the username and password are correct.
4. Camera Reboot: Some cameras need periodic restarts to maintain stable streams.
Tracking Not Working
1. Verify Tracking is Enabled: Check that "Enable Tracking and Zone Dwell" is turned on in the configuration.
2. Check Zone Types: Ensure you have Detection Zones (Type 1) configured, not just Crop Zones (Type 2).
3. Adjust Frame Thresholds: If tracks are lost too quickly, increase the "Max Missed Frames" setting.
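The "Max Missed Frames" setting can be pictured with the small sketch below: a track survives a few frames without a matching detection before it is dropped. Names and data shapes are illustrative, not the product's internals:

```python
# Sketch of track aging: tracks unmatched for more than max_missed_frames
# consecutive frames are dropped; matched tracks reset their missed count.

def age_tracks(tracks, matched_ids, max_missed_frames=5):
    """Update each track's missed-frame count; drop tracks missed too long."""
    survivors = {}
    for track_id, missed in tracks.items():
        missed = 0 if track_id in matched_ids else missed + 1
        if missed <= max_missed_frames:
            survivors[track_id] = missed
    return survivors

tracks = {1: 0, 2: 5}  # track 2 has already gone unmatched for 5 frames
tracks = age_tracks(tracks, matched_ids={1})
print(tracks)  # {1: 0} — track 2 exceeded max_missed_frames and was dropped
```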