
FORENSOR - FOREnsic evidence gathering autonomous seNSOR

Law Enforcement Agencies (LEAs) are still using conventional, manpower-based techniques to gather forensic evidence. Concealed surveillance devices can provide irrefutable evidence, but current video surveillance systems are usually bulky and complicated, are often used as simple video recorders, and require complex, expensive infrastructure to supply power, bandwidth, storage and illumination.

Recent years have seen significant advances in the surveillance industry, but these have rarely been targeted at forensic applications. The imaging community is focused on cameras for mobile phones, where the figures of merit are resolution, image quality, and a low profile. A mobile phone with its camera on would drain its battery in under two hours. Industrial surveillance cameras are even more power-hungry, while intelligent algorithms such as face detection often require extremely high processing power, such as back-end server farms, and are not available in conventional surveillance systems.

The FORENSOR project aims to develop an ultra-low-power, miniaturized, low-cost, wireless, autonomous sensor for evidence gathering, able to operate for up to two months without external infrastructure. Its ultra-sensitive camera and built-in intelligence will allow it to operate at remote locations, automatically identify pre-defined criminal events, and alert LEAs in real time while providing and storing the relevant video, location and timing evidence. It will be manageable remotely, preserve the availability and integrity of the collected evidence, and comply with all legal and ethical standards, in particular those related to privacy and personal data protection. The combination of built-in intelligence with ultra-low power consumption could help LEAs take the next step in fighting severe crimes.


The contribution of TeV to the FORENSOR project consists of developing a software tool that simulates the functionalities of the ultra-low-power surveillance sensor proposed by FORENSOR for crime fighting. This work aims to support and guide the hardware implementation of the sensor, and it is carried out mainly in collaboration with the FBK Research Unit IRIS.



Duration: September 2015 to August 2018 (3 years)


Partners

Funding: European Project H2020-FCT-2014

Related publications

Y. Zou, M. Gottardi, M. Lecca and M. Perenzoni. A Low-Power VGA Vision Sensor With Embedded Event Detection for Outdoor Edge Applications. IEEE Journal of Solid-State Circuits, 2020

Y. Zou, M. Lecca, M. Gottardi, G. Urlini, N. Vretos, L. Gymnopoulos and P. Daras. A battery powered vision sensor for forensic evidence gathering. 13th International Conference on Distributed Smart Cameras - ICDSC, pp. 15:1--15:6, 2019

Some results

The video panel provides an example of the frame-by-frame processing embedded on the vision sensor developed in this project. The sensor captures a gray-level VGA image, sub-samples it to QQVGA, and detects motion pixels with a dynamic background subtraction algorithm. The latter compares the intensity of each pixel against two thresholds that model the background and are dynamically updated; a pixel whose intensity falls outside this interval is marked as motion. The motion map is then denoised by a morphological filter; the horizontal and vertical projections of the motion map are then computed, thresholded and binarized to remove residual noise, and finally used to define the condition for the alarm generation. Only the frames containing motion pixels are delivered to the processor, together with their corresponding motion maps.
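The per-frame pipeline described above can be sketched in Python roughly as follows. This is an illustrative simulation, not the sensor's actual implementation: the function names, the update step, the 3x3 erosion kernel and the projection threshold are assumptions chosen for clarity.

```python
import numpy as np

def erode3x3(mask):
    """3x3 binary erosion: the morphological denoising of the motion map."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def detect_motion(frame, bg_low, bg_high, step=1, proj_thresh=4):
    """One frame of the two-threshold dynamic background subtraction (sketch).

    frame: QQVGA (120x160) gray-level image.
    bg_low, bg_high: per-pixel thresholds bracketing the background intensity.
    A pixel is flagged as motion when its intensity falls outside
    [bg_low, bg_high]. Returns the alarm flag, the denoised motion map
    and the updated thresholds.
    """
    f = frame.astype(np.int16)
    motion = (f < bg_low) | (f > bg_high)

    # Each threshold drifts by one step toward the current intensity,
    # so the background interval slowly adapts to gradual scene changes.
    new_low = bg_low + np.sign(f - bg_low) * step
    new_high = bg_high + np.sign(f - bg_high) * step

    denoised = erode3x3(motion)

    # Horizontal and vertical projections of the motion map,
    # thresholded and binarized to remove residual noise.
    rows = denoised.sum(axis=1) > proj_thresh
    cols = denoised.sum(axis=0) > proj_thresh

    # Alarm condition: motion supported by both projections.
    alarm = bool(rows.any() and cols.any())
    return alarm, denoised, new_low, new_high
```

In this sketch a static scene generates no alarm, while a sufficiently large patch of pixels leaving the background interval triggers one, mimicking the behaviour shown in the video panel.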

The video shows the QQVGA input frame (top, left), the motion map before and after de-noising along with the horizontal and vertical projections (top, right), a spot indicating when the alarm is generated (bottom, left) and the data delivered to the processor (bottom, right).


The videos (01 02 03 04 05) show a comparison of the proposed motion detection algorithm (FORENSOR) with a frame-difference approach (FD) and a background-subtraction approach (BS). All the algorithms take as input a VGA video (INPUT), sub-sample it to QQVGA, and process it to detect the motion pixels. Specifically, FD computes the absolute difference D between pairs of subsequent frames, smooths D, and denoises the result with an erosion filter; the resulting map is binarized and scaled up to VGA. BS implements the technique presented in Z. Zivkovic and F. van der Heijden, "Efficient adaptive density estimation per image pixel for the task of background subtraction", Pattern Recognition Letters 27(7):773-780, 2006; here we use its OpenCV C++ implementation. As with FD, the binary motion map produced by BS is scaled back up to VGA.
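The FD baseline can be sketched with NumPy as follows. The kernel sizes, the threshold value and the nearest-neighbour upscaling are illustrative assumptions, not the settings used in the videos.

```python
import numpy as np

def fd_motion(prev, curr, thresh=15):
    """Frame-difference (FD) baseline on QQVGA (120x160) gray frames (sketch)."""
    # Absolute difference D between two subsequent frames.
    d = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.float32)
    h, w = d.shape

    # Smooth D with a 3x3 box filter.
    p = np.pad(d, 1, mode='edge')
    smooth = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    # Denoise with a 3x3 grayscale erosion (minimum filter),
    # which suppresses isolated difference responses.
    q = np.pad(smooth, 1, mode='edge')
    eroded = smooth.copy()
    for i in range(3):
        for j in range(3):
            eroded = np.minimum(eroded, q[i:i + h, j:j + w])

    # Binarize, then scale the motion map back up to VGA (4x replication).
    mask = eroded > thresh
    return mask.repeat(4, axis=0).repeat(4, axis=1)
```

A static pair of frames yields an empty VGA motion map, while a patch that changes intensity between the two frames survives the smoothing and erosion and appears in the upscaled map.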

Contact

Massimo Gottardi (FBK-IRIS) and Michela Lecca (FBK-TeV)