Mid-Body Color Detection
The Mid-Body Color Detection action analyzes a person's upper body (torso) to identify the dominant color of their clothing. This is useful for uniform compliance checks, staff identification, and event-based tracking.
Overview
In retail, hospitality, and industrial environments, staff often wear specific uniform colors. This action enables automated verification of uniform compliance by:
- Taking an image of a person from a previous detection
- Using pose estimation to isolate the torso area
- Analyzing the colors present
- Determining the dominant color
The action specifically includes enhanced detection for the color red, as this is commonly used for staff uniforms and safety vests.
What It Does
1. Receives Input Image
This action requires an image from a previous detection action in the workflow. It cannot analyze live video directly—it works with captured images of detected persons.
2. Performs Pose Estimation
Using AI-based pose estimation, the action identifies key body points (shoulders, hips) to accurately locate the torso region. This ensures the color analysis focuses on clothing, not background or other body parts.
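As a rough sketch, the torso region can be derived from shoulder and hip keypoints along these lines (the keypoint names and coordinates here are hypothetical; real pose models use their own labeling):

```python
# Sketch: derive a torso bounding box from four pose keypoints.
# Keypoint names and values are hypothetical, for illustration only.

def torso_box(keypoints):
    """Return (x0, y0, x1, y1) spanning the shoulders and hips."""
    points = [keypoints[name] for name in
              ("left_shoulder", "right_shoulder", "left_hip", "right_hip")]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

kp = {"left_shoulder": (40, 60), "right_shoulder": (120, 62),
      "left_hip": (50, 180), "right_hip": (110, 178)}
print(torso_box(kp))  # (40, 60, 120, 180)
```

Cropping to this box is what keeps background pixels and skin tones out of the color analysis.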
3. Clusters Colors
The action analyzes the pixels in the torso region and groups similar colors together. It then identifies which color group is dominant (covers the most area).
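The grouping step can be sketched as a tiny K-means over the pixel values (a simplified illustration with deterministic initialization, not the product's actual implementation):

```python
import numpy as np

def dominant_color(patch_bgr, k=3, iters=10):
    """Tiny k-means over BGR pixels; returns the centroid of the
    largest cluster as the dominant color. Illustrative only."""
    pts = patch_bgr.reshape(-1, 3).astype(float)
    # Deterministic init: k pixels spread evenly through the patch.
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)]
    for _ in range(iters):
        dists = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pts[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return tuple(int(round(c)) for c in centers[counts.argmax()])

# A mostly-red patch with a blue stripe (BGR order).
patch = np.zeros((10, 10, 3), np.uint8)
patch[:] = (0, 0, 255)   # red fills the patch
patch[:2] = (255, 0, 0)  # blue stripe on top
print(dominant_color(patch))  # (0, 0, 255): red wins
```

Clustering, rather than a plain pixel histogram, is what makes the result robust to small shade variations within one garment.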
4. Special Red Detection
Because red is commonly used for uniforms and safety gear, the action includes specific logic to:
- Identify if the dominant color is red
- Return a clear "Is Red" indicator
- Handle various shades of red (bright red, dark red, etc.)
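In HSV terms, red is awkward because hue wraps around zero, so "red" spans both ends of the hue scale. A hedged sketch of such a check using Python's standard colorsys module (the thresholds are illustrative, not the action's actual values):

```python
import colorsys

def is_red(bgr, hue_tolerance=0.04):
    """Heuristic red check in HSV space. Red sits at hue 0, and hue
    wraps around, so values near 0 or near 1 both count, provided the
    pixel is saturated and bright enough. Thresholds are illustrative."""
    b, g, r = bgr
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    near_red = h <= hue_tolerance or h >= 1 - hue_tolerance
    return near_red and s > 0.5 and v > 0.2

print(is_red((0, 0, 255)))    # True: bright red
print(is_red((40, 40, 140)))  # True: a darker red
print(is_red((255, 0, 0)))    # False: blue
```

Widening hue_tolerance has roughly the effect the Red Sensitivity parameter describes: more orange-leaning and pink-leaning shades count as red.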
How It Works in a Workflow
This action must be placed after a detection action that captures person images. A typical workflow:
1. Object Detection: Detect a person in the area of interest
2. Mid-Body Color Detection: Analyze the detected person's clothing
3. Decision Branch:
   - If expected color (e.g., red) → person is staff, end workflow
   - If unexpected color → potential unauthorized person, create a violation
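The branch logic above can be sketched as a small routing function over the action's result (the branch names here are placeholders for whatever the workflow actually defines):

```python
def route(action_result):
    """Map the color-detection result to a workflow branch.
    Branch names are illustrative placeholders."""
    if action_result.get("result") != "color_detected":
        return "needs_review"      # no input image or pose estimation failed
    if action_result.get("is_red"):
        return "end_workflow"      # expected uniform color: person is staff
    return "create_violation"      # unexpected color: flag for follow-up

print(route({"result": "color_detected", "is_red": True}))   # end_workflow
print(route({"result": "pose_estimation_failed"}))           # needs_review
```

Handling the failure results explicitly, rather than treating them as "not red", avoids raising violations just because pose estimation failed.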
Understanding Results
The action returns detailed color information:
| Output | Description | Example |
|---|---|---|
| Dominant Color (BGR) | The primary color as a Blue-Green-Red tuple | (0, 0, 255) for red |
| Dominant Color (Hex) | The primary color as a hexadecimal code | #FF0000 for red |
| Is Red | Boolean indicating if the dominant color is red | True or False |
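The two color outputs encode the same value; converting the BGR tuple to the hex code is just a byte reordering:

```python
def bgr_to_hex(bgr):
    """Format a BGR tuple as an RGB hex code (note the reversed order)."""
    b, g, r = bgr
    return f"#{r:02X}{g:02X}{b:02X}"

print(bgr_to_hex((0, 0, 255)))  # #FF0000 (red)
print(bgr_to_hex((255, 0, 0)))  # #0000FF (blue)
```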
Results in Workflow
| Result | Meaning |
|---|---|
| color_detected | Successfully analyzed the image and determined dominant color |
| no_previous_action | No input image available (action not preceded by detection) |
| pose_estimation_failed | Could not identify body pose in the image |
Configuration
This action has minimal configuration as it works automatically with images from previous actions:
| Parameter | Type | Description |
|---|---|---|
| Red Sensitivity | Number | Adjusts how broadly "red" is defined (optional) |
Common Use Cases
Staff Identification
- Scenario: Identify staff members by their red uniform shirts
- Setup: Run after person detection in staff-only areas
- Logic: If red → Staff (allowed), If not red → Visitor (may need escort)
Safety Vest Compliance
- Scenario: Ensure workers wear high-visibility vests
- Setup: Camera at construction site entry, detect person then check color
- Logic: If high-vis color detected → Compliant, Otherwise → Alert supervisor
Retail Loss Prevention
- Scenario: Identify non-staff in restricted areas
- Setup: Camera in stockroom, detect person then check for employee uniform color
- Logic: If uniform color → Allow, If non-uniform → Create alert
Event Staff Tracking
- Scenario: Track staff movement at an event by their shirt color
- Setup: Cameras throughout venue, continuous person detection
- Use: Generate heatmaps of staff presence by color
Troubleshooting
"No Previous Action" Error
- Check Workflow Order: This action must come AFTER a detection action (like Object Detection or Face Detection).
- Verify Detection Success: Ensure the previous action successfully detected and captured a person image.
- Check Action Connections: In the workflow editor, verify the actions are properly connected.
Inaccurate Color Detection
- Lighting Conditions: Poor or inconsistent lighting significantly affects color perception.
  - Incandescent lights add a yellow/orange tint
  - Fluorescent lights can add a green tint
  - Natural daylight provides the most accurate colors
- Camera White Balance: Ensure the camera's automatic white balance is functioning or is properly calibrated.
- Clothing Patterns: Multi-colored or patterned clothing may confuse the dominant color detection. The system works best with solid-colored garments.
- Image Quality: Blurry images or excessive compression can distort colors.
Pose Estimation Failed
- Person Visibility: The person must be sufficiently visible in the frame. Partially obscured individuals may cause pose estimation to fail.
- Camera Angle: Extreme top-down angles make pose estimation difficult. Cameras positioned at chest-to-head level work best.
- Image Cropping: If the detection crop is too tight, shoulder points may be cut off. Ensure the Object Detection action captures sufficient body area.
Red Not Detected When Expected
- Lighting: Under certain lighting conditions, red can appear orange or brown. Improve lighting or adjust camera settings.
- Shade of Red: Very dark reds (maroon, burgundy) may not be classified as "red." Consider adjusting the Red Sensitivity parameter.
- Reflection/Glare: Shiny fabrics with glare may appear white or washed out rather than their actual color.
Wrong Body Area Analyzed
- Person Orientation: The person should face somewhat toward the camera for accurate pose estimation.
- Multiple People: If multiple people appear in the crop, the action may analyze the wrong person. Ensure Object Detection is configured to return individual person crops.
Best Practices
- Consistent Lighting: Use uniform, neutral (daylight-balanced) lighting for the most accurate color detection.
- Solid-Color Uniforms: Single-color uniforms work best. If patterns are required, ensure the dominant area is the target color.
- Camera Quality: Use cameras with good color reproduction; low-quality cameras may distort colors.
- Testing: Before deploying, test with actual uniforms under actual lighting conditions to verify detection accuracy.
- Combine with Other Checks: Don't rely solely on color; combine it with other identifiers (badges, face recognition) for critical access control.
Technical Notes
- Color Space: The action works in BGR (Blue-Green-Red) color space, the standard for OpenCV and most image-processing pipelines.
- Clustering Algorithm: K-means clustering identifies the dominant colors and handles variations in shade and lighting.
- Pose Model: Uses a lightweight pose-estimation model optimized for upper-body detection.