International Journal of Image and Graphics
We present a novel technique for transcribing crowds in video scenes, which allows the positions of moving objects to be extracted from video frames. The technique can be used as a more precise alternative to image-processing methods such as background removal or automated pedestrian detection based on feature extraction and classification. By manually projecting pedestrian actors onto a two-dimensional plane and translating screen coordinates to absolute real-world positions using the cross ratio, we provide highly accurate and complete results at the cost of increased processing time. We are able to avoid most errors found in other automated annotation techniques, which arise from sources such as noise, occlusion, shadows, viewing angle, or pedestrian density. It is further possible to process scenes that are difficult or impossible to transcribe with automated image-processing methods, such as low-contrast or low-light environments. We validate our model by comparing it to the results of both background removal and feature extraction and classification in a variety of scenes.
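The cross ratio mentioned above is a projective invariant: for four collinear points, the ratio ((c−a)(d−b))/((c−b)(d−a)) of their 1-D coordinates is preserved under perspective projection, so three reference points with known real-world positions are enough to recover the world coordinate of a fourth point on the same line from its screen coordinate. The sketch below is an illustrative reconstruction of this idea, not the authors' implementation; the function names and the 1-D coordinate convention are assumptions.

```python
def cross_ratio(a, b, c, d):
    # Cross ratio of four collinear points, given as 1-D coordinates
    # measured along their common line. Invariant under projection.
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def world_position(screen_p, screen_refs, world_refs):
    # Recover the real-world 1-D coordinate of a query point from its
    # screen coordinate, using three collinear reference points whose
    # screen and world coordinates are both known.
    # (Hypothetical helper; not from the paper.)
    a, b, c = screen_refs   # screen coordinates of references A, B, C
    A, B, C = world_refs    # world coordinates of references A, B, C
    k = cross_ratio(a, b, c, screen_p)  # same value in world space
    # Solve k = ((C - A)(P - B)) / ((C - B)(P - A)) for the world
    # coordinate P of the query point.
    return (k * (C - B) * A - (C - A) * B) / (k * (C - B) - (C - A))
```

For example, if reference markers lie at world positions 0, 1, and 2 along a line and appear at screen positions 0, 0.5, and 2/3 under some perspective view, a point seen at screen position 0.75 maps back to world position 3 under the same view.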
Fuchsberger, Alexander; Ricks, Brian; and Chen, Zhicheng, "A Semi-Automated Technique for Transcribing Accurate Crowd Motions" (2020). Computer Science Faculty Publications. 74.
This work is licensed under a Creative Commons Attribution 4.0 License.
Funded by the University of Nebraska at Omaha Open Access Fund