What Video Analytics (aka Video Content Analysis) does Camio perform?

For examples of the searches and alerts that are detected automatically, see this article.

The video analysis pipeline reduces false motion alerts with adaptive motion filters, ranks the importance of events with continuous machine learning, and labels events for real-time search using zone intersections, color blocking, direction of movement, and object classification. These analyses improve both bandwidth and storage efficiency while enabling searches and alerts like [people approaching loading dock between 2am and 6am].
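
To illustrate how labeled events make a query like [people approaching loading dock between 2am and 6am] answerable, here is a minimal sketch of label-and-time filtering. The event fields and label names are hypothetical placeholders, not Camio's actual schema or API.

```python
from datetime import time

# Hypothetical labeled events as produced by the analysis pipeline.
# Field and label names are illustrative only.
events = [
    {"labels": {"human", "approaching", "loading_dock"}, "time": time(3, 15)},
    {"labels": {"car", "departing", "driveway"},         "time": time(14, 2)},
    {"labels": {"human", "approaching", "loading_dock"}, "time": time(9, 40)},
]

def search(events, required_labels, start, end):
    """Return events that carry all required labels within a time window."""
    return [
        e for e in events
        if required_labels <= e["labels"] and start <= e["time"] <= end
    ]

# Roughly the query [people approaching loading dock between 2am and 6am].
matches = search(events, {"human", "approaching", "loading_dock"}, time(2), time(6))
print(matches)  # only the 3:15am event matches
```
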

Each video stream has its own neural nets that isolate the significant moving objects in the scene, as distinct from the general pixel-motion detection included in the camera's video encoding. The importance of those moving objects is then determined by another layer of AI that considers which zones were intersected, the direction of movement, the type of object, and its relative importance as learned from comparison with similar events in the past. The salient portions of the event are then highlighted with representative thumbnail images. The final stage of analysis applies Convolutional Neural Net classification to the most salient moving objects. The labels from each stage of the analysis are indexed for fast search and alerts.
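
The sketch below traces one event through those stages in simplified form: object isolation, importance scoring from zones, direction, and a learned prior, classification of the salient objects, and label indexing. All class and function names, weights, and thresholds are hypothetical stand-ins for illustration, not Camio's internal implementation.

```python
from dataclasses import dataclass

@dataclass
class MovingObject:
    track: list             # pixel coordinates over time
    zones_hit: set          # zone names the track intersected
    direction: str          # e.g. "approaching", "departing"
    label: str = "unknown"  # filled in by the classifier stage
    importance: float = 0.0

def isolate_objects(frames):
    """Stage 1: per-stream neural net separates significant moving objects
    from general pixel motion (placeholder returning a canned object)."""
    return [MovingObject(track=[(10, 20), (12, 24)],
                         zones_hit={"loading_dock"},
                         direction="approaching")]

def score_importance(obj, learned_prior=0.5):
    """Stage 2: weigh zones intersected, direction, and a prior learned from
    similar past events (weights here are arbitrary)."""
    score = learned_prior
    score += 0.3 if "loading_dock" in obj.zones_hit else 0.0
    score += 0.2 if obj.direction == "approaching" else 0.0
    obj.importance = min(score, 1.0)
    return obj

def classify(obj):
    """Stage 3: Convolutional Neural Net classification of the most salient
    objects (stubbed with a fixed label)."""
    obj.label = "human"
    return obj

def index_event(obj, index):
    """Stage 4: index the labels from every stage for fast search and alerts."""
    for key in obj.zones_hit | {obj.direction, obj.label}:
        index.setdefault(key, []).append(obj)
    return index

index = {}
for obj in isolate_objects(frames=[]):
    if score_importance(obj).importance > 0.6:
        index_event(classify(obj), index)
print(sorted(index.keys()))  # ['approaching', 'human', 'loading_dock']
```
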

The multi-stage video processing pipeline is also open and extensible, so it can include additional services like optical character recognition, face detection, and vehicle classification. You can also deploy custom classifiers; if you have other needs, just contact us.
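
As a rough sketch of that extensibility, the registry pattern below shows how extra analysis stages could attach additional labels to an event. The analyzer names and the registry itself are illustrative assumptions, not Camio's extension interface.

```python
# Registry of optional analysis stages; each stage maps an event to extra labels.
analyzers = []

def register(analyzer):
    """Add an analysis stage to the pipeline."""
    analyzers.append(analyzer)
    return analyzer

@register
def license_plate_ocr(event):
    # Placeholder: a real stage would run OCR on the event's frames.
    return {"plate:UNKNOWN"} if "vehicle" in event["labels"] else set()

@register
def vehicle_classifier(event):
    # Placeholder: a real stage would classify the vehicle type.
    return {"truck"} if "vehicle" in event["labels"] else set()

def analyze(event):
    """Run every registered stage and merge its labels into the event."""
    for analyzer in analyzers:
        event["labels"] |= analyzer(event)
    return event

print(analyze({"labels": {"vehicle", "loading_dock"}})["labels"])
```
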
