Monitoring Human Activity
A project of the Artificial Intelligence, Robotics and Vision Laboratory, University of Minnesota, Department of Computer Science and Engineering
Detection of Abandoned Objects
Automatic detection of abandoned objects is of great importance in security and surveillance applications. In this project we attempt to detect such objects based on several criteria. Our approach is based on a combination of short-term and long-term blob logic, together with the analysis of connected components. It is robust to many disturbances that may occur in the scene, such as the presence of moving objects and occlusions.
N. Bird, S. Atev, N. Caramelli, R. Martin, O. Masoud, N. Papanikolopoulos, "Real Time, Online Detection of Abandoned Objects in Public Areas", IEEE International Conference on Robotics and Automation (ICRA 2006), pp. 3775-3780, May 2006.
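As an illustration of the short-term/long-term idea, the hedged sketch below keeps two background models with different adaptation rates and runs connected-component analysis on pixels that the slow model still flags as foreground after the fast model has absorbed them. OpenCV is assumed; the model histories, area threshold, persistence count, and input file name are illustrative values, not the ones used in the paper.

```python
# Sketch: dual-background heuristic for abandoned-object detection.
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")       # hypothetical input video
long_term = cv2.createBackgroundSubtractorMOG2(history=3000, detectShadows=False)
short_term = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

persistence = None                                # per-pixel count of static foreground
MIN_AREA, MIN_FRAMES = 500, 150                   # assumed thresholds

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_long = long_term.apply(frame)              # moving + recently stopped objects
    fg_short = short_term.apply(frame)            # moving objects only
    # Static foreground: still foreground for the slow model, already absorbed by the fast one.
    static = cv2.bitwise_and(fg_long, cv2.bitwise_not(fg_short))
    if persistence is None:
        persistence = np.zeros(static.shape, np.int32)
    persistence = np.where(static > 0, persistence + 1, 0)
    # Connected-component analysis on pixels that stayed static long enough.
    candidate = (persistence > MIN_FRAMES).astype(np.uint8) * 255
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidate)
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if area >= MIN_AREA:
            cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 0, 255), 2)
    cv2.imshow("abandoned objects", frame)
    if cv2.waitKey(1) == 27:
        break
```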
Detection of Thrown Objects
The problem of automatically detecting thrown objects has important applications in both security/surveillance and sports video analysis. This method uses a sophisticated inter-frame differencing implementation and Expectation Maximization in order to identify the parabolic trajectories of thrown objects in the scene. It successfully distinguishes between thrown objects and other fast motion.
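The core of the trajectory test can be illustrated with a least-squares parabola fit. The hedged sketch below checks whether a candidate point track (obtained, for example, from inter-frame differencing) is consistent with ballistic motion; the actual system associates points with trajectories using Expectation Maximization, which is not shown, and the residual threshold is an assumed value.

```python
# Sketch: test whether a candidate point track is ballistic (parabolic).
import numpy as np

def is_parabolic(track, max_residual=2.0):
    """track: list of (t, x, y) image observations of one candidate object."""
    t = np.array([p[0] for p in track], float)
    x = np.array([p[1] for p in track], float)
    y = np.array([p[2] for p in track], float)
    # A thrown object moves roughly linearly in x and quadratically in y
    # (image rows grow downward, so gravity appears as a positive quadratic term).
    cx = np.polyfit(t, x, 1)
    cy = np.polyfit(t, y, 2)
    rx = x - np.polyval(cx, t)
    ry = y - np.polyval(cy, t)
    good_fit = rx.std() < max_residual and ry.std() < max_residual
    return good_fit and cy[0] > 0
```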
Detection of Unusual Crowd Activity
This module successfully distinguishes between normal and abnormal crowd activities. Specifically, it automatically detects events such as a crowd running in one direction or a crowd of people dispersing from a central point. It uses a Hidden Markov Model (HMM) to classify crowd activity based on features extracted from the optical flow.
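The sketch below shows one plausible set of per-frame optical-flow features (mean flow magnitude plus a flow-weighted direction histogram) and how an HMM trained on normal footage could score them. OpenCV and the third-party hmmlearn package are assumed; the feature set, number of HMM states, and likelihood threshold are illustrative and not necessarily those used in this module.

```python
# Sketch: optical-flow features for HMM-based crowd-activity classification.
import cv2
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed third-party dependency

def flow_features(prev_gray, gray, bins=8):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-6)
    # Mean speed plus a direction histogram: crowds running one way raise the mean
    # and concentrate the histogram; dispersal from a point spreads the histogram.
    return np.concatenate(([mag.mean()], hist))

# Offline training on normal footage (X_normal: one feature row per frame):
#   model = GaussianHMM(n_components=3).fit(X_normal)
# At run time, score a sliding window of recent feature rows:
#   abnormal = model.score(window) / len(window) < LOG_LIKELIHOOD_THRESHOLD
```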
Camera Tampering Detection
In general, any event that drastically changes the appearance of the scene seen by the camera is considered camera tampering. Examples include a person holding a hand or other object in front of the camera, shining a light or laser pointer into the lens, or a person reaching up and turning the camera so that it points in a different direction. However, some events change the appearance of the scene but should not be considered camera tampering. These include normal, expected events such as large vehicles, trains, or crowds of people moving through the scene, or gradual changes in illumination. This method uses several measures of image dissimilarity to robustly compare older frames of video to more recent frames in order to determine whether camera tampering has occurred (a simplified version of this comparison is sketched below).
E. Ribnick, S. Atev, O. Masoud, N. Papanikolopoulos, R. Voyles, "Real-Time Detection of Camera Tampering", IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2006), Sydney, Australia, Nov. 2006.
Perimeter Breach Detection
Here the objective is to detect instances of people crossing a virtual, user-defined perimeter in the scene. This is accomplished using mixture-of-Gaussians background segmentation, followed by blob extraction and analysis (see the sketch below). The system is also capable of detecting directional perimeter breach, where the goal is to detect only people who cross the virtual perimeter in a certain direction. Both of these detection capabilities have applications in security, surveillance, and crowd flow control and monitoring.
Detection of Motion in Restricted Areas
This module detects any motion in a user-defined restricted area. It is useful for security/surveillance applications in settings where access is restricted and human presence is not expected to occur frequently.
Detection of Loitering Individuals
This module was originally developed to automatically detect individuals loitering at bus stops, and it has since been extended for use in other applications. This detection capability is important because prolonged loitering (hanging around a bus stop much longer than is necessary to catch a bus) can be indicative of drug dealing or other anomalous behavior. Using a stationary camera view of a bus stop, pedestrians are segmented and tracked throughout the scene. The system takes snapshots of individuals whenever a clean, unobstructed view of a pedestrian is found. The snapshots are then classified into a database of individuals using an appearance-based method. The features used to match images of individuals are based on short-term biometrics, which are changeable but stay valid for short periods of time; this system uses clothing color. A linear discriminant method is applied to the color information to enhance the differences and minimize the similarities between individuals in the feature space. To determine whether a given individual is loitering, the timestamps collected with the snapshots in that individual's database class are used to judge how long the person has been present. An experiment was performed using a 30-minute video of a busy bus stop with six individuals loitering around it. Our results show that the system successfully classifies images of all six individuals as loitering.
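For the camera-tampering module above, the hedged sketch below compares a pool of older frames with a pool of recent frames using two simple dissimilarity measures (histogram correlation and edge density). These particular measures, the pool sizes, and the thresholds are illustrative stand-ins for those described in the AVSS 2006 paper.

```python
# Sketch: compare older frames against recent frames to flag camera tampering.
import cv2
import numpy as np
from collections import deque

# Grayscale frames are appended to new_pool each frame and periodically
# promoted to old_pool; then tampered() is called on the two pools.
old_pool, new_pool = deque(maxlen=60), deque(maxlen=15)   # assumed pool sizes

def gray_hist(img):
    h = cv2.calcHist([img], [0], None, [64], [0, 256])
    return cv2.normalize(h, h).flatten()

def tampered(old_pool, new_pool, hist_thresh=0.5, edge_ratio=0.3):
    old = np.median(np.stack(old_pool), axis=0).astype(np.uint8)
    new = np.median(np.stack(new_pool), axis=0).astype(np.uint8)
    # Measure 1: histogram correlation drops when the view is covered, moved, or flooded with light.
    corr = cv2.compareHist(gray_hist(old), gray_hist(new), cv2.HISTCMP_CORREL)
    # Measure 2: edge density collapses when the lens is blocked or defocused.
    e_old = cv2.Canny(old, 50, 150).mean()
    e_new = cv2.Canny(new, 50, 150).mean()
    return corr < hist_thresh or e_new < edge_ratio * e_old
```

For the perimeter-breach module, the following sketch shows mixture-of-Gaussians segmentation, blob extraction, and a directional line-crossing test on blob centroids. Associating blobs across frames (tracking) is assumed to be handled elsewhere, and the perimeter endpoints, direction convention, and area threshold are made-up values.

```python
# Sketch: mixture-of-Gaussians segmentation plus a directional line-crossing test.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
P1, P2 = np.array([100.0, 400.0]), np.array([500.0, 380.0])   # user-defined virtual perimeter

def blob_centroids(frame, min_area=800):
    fg = subtractor.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)     # drop MOG2 shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, _, stats, centroids = cv2.connectedComponentsWithStats(fg)
    return [centroids[i] for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def side(pt):
    # Sign of the 2-D cross product gives which side of the perimeter the point is on.
    d = P2 - P1
    v = np.asarray(pt, float) - P1
    return np.sign(d[0] * v[1] - d[1] * v[0])

def directional_breach(prev_pt, curr_pt):
    # A directional breach is a negative-to-positive side change of a tracked blob;
    # dropping the direction test detects crossings in either direction.
    return side(prev_pt) < 0 <= side(curr_pt)
```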