Usually the interesting part of a video scene is not the background but the objects in the foreground. These objects of interest can be anything: humans, cars, animals and so on. Foreground detection, also called background subtraction, is a method for separating these objects of interest from the background of a video.
If the background of a scene remained unchanged, detecting foreground objects would be easy: take a picture of the empty scene at the beginning and then compare every future frame to that first picture. The first picture is called the background model.
On its own, this method is not very useful in real life. In almost every scene the background changes, or at the very least there is video noise, so the comparison needs a threshold: a pixel counts as foreground only if it differs from the background model by more than the threshold.
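To see what the threshold buys, here is a toy example of thresholding the absolute difference between two frames. The array values are made up for illustration: one pixel changes only slightly (noise) and stays background, one changes a lot and becomes foreground.

```python
import numpy as np

# Two tiny synthetic grayscale "frames" (values are illustrative):
# one pixel changed slightly (noise), one changed a lot (an object)
background = np.array([[100, 100], [100, 100]], dtype=np.uint8)
current = np.array([[100, 103], [100, 200]], dtype=np.uint8)

# Cast to int before subtracting to avoid uint8 wrap-around
diff = np.abs(current.astype(int) - background.astype(int))

# Only pixels that differ by more than the threshold count as foreground
threshold = 10
mask = np.where(diff > threshold, 255, 0).astype(np.uint8)
print(mask)  # the noisy pixel (difference 3) stays 0, the changed pixel becomes 255
```

This is exactly what `cv2.absdiff` followed by `cv2.threshold` does in the scripts below, pixel by pixel over the whole frame.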
You can test this non-adaptive background subtraction with a threshold using the script below, written in Python (2.7.x) and OpenCV (2.4.x).
# Non-adaptive background subtraction with a threshold
# Kim Salmi, kim.salmi(at)iki(dot)fi
# https://tunn.us/arduino/
# License: GPLv3
import sys
import cv2

threshold = 100

camera = cv2.VideoCapture(0)

# The first frame of the scene becomes the background model
_, backgroundFrame = camera.read()
backgroundFrame = cv2.cvtColor(backgroundFrame, cv2.COLOR_BGR2GRAY)

while 1:
    _, currentFrame = camera.read()
    currentFrame = cv2.cvtColor(currentFrame, cv2.COLOR_BGR2GRAY)

    # A pixel is foreground if it differs from the model by more than the threshold
    foreground = cv2.absdiff(backgroundFrame, currentFrame)
    foreground = cv2.threshold(foreground, threshold, 255, cv2.THRESH_BINARY)[1]

    cv2.imshow("backgroundFrame", backgroundFrame)
    cv2.imshow("foreground", foreground)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        cv2.destroyAllWindows()
        camera.release()
        sys.exit()
As soon as the background changes, e.g. someone opens a curtain in the room, this method fails. That is why one can use an adaptive background model, where the model adapts to changes in the environment. Here is a variation of such a model, in which the background is maintained as a running average of the frames seen so far.
# A variation of an adaptive background model
# Kim Salmi, kim.salmi(at)iki(dot)fi
# https://tunn.us/arduino/
# License: GPLv3
import sys
import cv2

threshold = 10

camera = cv2.VideoCapture(0)

_, backgroundFrame = camera.read()
backgroundFrame = cv2.cvtColor(backgroundFrame, cv2.COLOR_BGR2GRAY)

i = 1
while 1:
    _, currentFrame = camera.read()
    currentFrame = cv2.cvtColor(currentFrame, cv2.COLOR_BGR2GRAY)

    foreground = cv2.absdiff(backgroundFrame, currentFrame)
    foreground = cv2.threshold(foreground, threshold, 255, cv2.THRESH_BINARY)[1]
    cv2.imshow("foreground", foreground)

    # Update the model: with alpha = 1/i the model is the running average
    # of all frames seen so far, so it slowly adapts to background changes
    alpha = (1.0/i)
    backgroundFrame = cv2.addWeighted(currentFrame, alpha, backgroundFrame, 1.0-alpha, 0)
    cv2.imshow("backgroundFrame", backgroundFrame)
    i += 1

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        cv2.destroyAllWindows()
        camera.release()
        sys.exit()
This is the basic idea of background subtraction. You can read more about video analysis in my thesis (still a work in progress). If you want to look into modern backgrounding methods, a good starting point is the Gaussian mixture model; for further reading, see Xu et al. (2016), Background modeling methods in video analysis: A review and comparative evaluation.