Python + OpenCV + Raspberry Pi
Fall detector with an overhead webcam (work in progress)

Check out the second part here

This is my final prototype for Tero Karvinen's prototyping course at Haaga-Helia UAS. However, I will be writing my thesis on this subject, so I will update this page frequently over the coming months (14 March 2016). The project pitch deck in Finnish is available here.

The main idea is to track humans and detect falls with an overhead webcam and a Raspberry Pi. I used Python (2.7.11) + OpenCV (3.1.0), a library mainly aimed at real-time computer vision. There are a lot of OpenCV tutorials out there, but I really liked the tutorial series from sentdex.

In my code I basically take the first frame and compare later frames to it. If there is a difference, I draw a box around it. In the video below you can see the tracking in progress. There are still some problems, e.g. my shadow is detected.



firstFrame (bottom right) - the first captured frame, slightly blurred to minimize minor noise
frameDelta (top left) - the difference between the current frame and firstFrame
thresh (bottom left) - everything brighter than thresholdLimit is white and everything else is black; the white areas are then dilated (expanded) for dilationPixels iterations
Feed (top right) - a rectangle is drawn over every white area larger than minArea
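
Here is a condensed sketch of that pipeline for reference. The variable names match the full listing at the end of this post, but packing it into a standalone function is my own framing (the three-value findContours return is OpenCV 3.x):

import cv2

def detectChanges(firstFrame, gray, frame, thresholdLimit=20, dilationPixels=20, minArea=30*30):
  # frameDelta: pixel-wise difference between the current frame and firstFrame
  frameDelta = cv2.absdiff(firstFrame, gray)
  # thresh: pixels brighter than thresholdLimit turn white, the rest black;
  # the white blobs are then dilated to close small gaps
  thresh = cv2.threshold(frameDelta, thresholdLimit, 255, cv2.THRESH_BINARY)[1]
  thresh = cv2.dilate(thresh, None, iterations=dilationPixels)
  # Feed: a bounding box around every blob larger than minArea
  _, contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
  for contour in contours:
    if cv2.contourArea(contour) >= minArea:
      x, y, w, h = cv2.boundingRect(contour)
      cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
  return frameDelta, thresh, frame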

Currently you always have to tweak the detection parameters by hand to get good recognition in different light conditions. This is one of the things that should definitely be automated in future versions.
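
One possible direction, sketched here as an idea rather than something the prototype does: let Otsu's method pick the threshold from each frameDelta instead of a hand-tuned thresholdLimit.

import cv2

def autoThreshold(frameDelta):
  # Otsu's method chooses the threshold from the image histogram,
  # so thresholdLimit no longer needs manual tuning per light condition.
  otsuValue, thresh = cv2.threshold(frameDelta, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
  return otsuValue, thresh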

Detecting falls
Falling is "detected" if the object being tracked is 40% wider than it was in the previous frame. This should catch falls that take 0.4-0.8 seconds, while not triggering when someone squats or picks something up from the floor. Currently there is a problem when there are multiple objects in the frame; this is something that should be improved in the next versions.
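
One sketch of how the multi-object case could be handled: match each detection to the nearest centroid from the previous frame and compare widths per object, instead of relying on detection order as the code below does. The matchObjects helper and the maxJump distance are my assumptions, not part of the prototype.

import math

def matchObjects(tracked, detections, maxJump=100):
  # tracked: [(cx, cy, w), ...] centroids and widths from the previous frame
  # detections: [(x, y, w, h), ...] bounding boxes from the current frame
  # Returns (previousWidth, detection) pairs; previousWidth is None for
  # objects seen for the first time.
  pairs = []
  unused = list(tracked)
  for (x, y, w, h) in detections:
    cx, cy = x + w / 2.0, y + h / 2.0
    best = None
    for old in unused:
      dist = math.hypot(cx - old[0], cy - old[1])
      if dist < maxJump and (best is None or dist < best[0]):
        best = (dist, old)
    if best:
      unused.remove(best[1])
      pairs.append((best[1][2], (x, y, w, h)))
    else:
      pairs.append((None, (x, y, w, h)))
  return pairs

# Per-object fall check: over 40% wider than the same object last frame.
# for prevW, (x, y, w, h) in matchObjects(tracked, detections):
#   if prevW and w > prevW * 1.40:
#     print("Alarm")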

Detecting dogs
Okay, the main point of the video below is not to detect dogs but to demonstrate the use of a wide-angle or fisheye lens to get a wider capture area at the same mounting height.
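
If the fisheye distortion ever needs to be straightened out before tracking, OpenCV 3 ships a cv2.fisheye module. A minimal sketch, assuming a camera matrix K and distortion coefficients D from a real calibration (the numbers below are placeholders, not measured values):

import cv2
import numpy as np

# Placeholder intrinsics; real values come from cv2.fisheye.calibrate()
# run on checkerboard images taken with the actual lens.
K = np.array([[350.0, 0.0, 375.0],
              [0.0, 350.0, 280.0],
              [0.0, 0.0, 1.0]])
D = np.zeros((4, 1)) # k1..k4 fisheye distortion coefficients

def undistortFrame(frame):
  # Remaps the fisheye image so straight lines stay straight.
  return cv2.fisheye.undistortImage(frame, K, D, Knew=K)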



# Fall detector
# Kim Salmi, kim.salmi(at)iki(dot)fi
# https://tunn.us/arduino/falldetector
# License: GPLv3

import cv2
import time

debug = 1


def convertFrame(frame):
  # Resize to 750 px wide, convert to grayscale and (optionally) blur
  # to suppress minor noise.
  r = 750.0 / frame.shape[1]
  dim = (750, int(frame.shape[0] * r))
  frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
  gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
  if useGaussian:
    gray = cv2.GaussianBlur(gray, (gaussianPixels, gaussianPixels), 0)
  return frame, gray


# Video or camera
camera = cv2.VideoCapture(1)
# camera = cv2.VideoCapture("file.mov")
time.sleep(1.0)

firstFrame = None
start = time.time()
i = 0
lastH = [0]*100 # per-object height history, indexed by detection order
lastW = [0]*100 # per-object width history


# Detect parameters
minArea = 30*30
thresholdLimit = 20
dilationPixels = 20 # 10
useGaussian = 1
gaussianPixels = 31

# loop over each frame of the video
while True:
  detectStatus = "Empty"
  grabbed, frame = camera.read()

  # eof / camera read failure
  if not grabbed:
    break

  frame, gray = convertFrame(frame)

  # firstFrame (this should be updated every time the light conditions change)
  if firstFrame is None:
    time.sleep(1.0) # let camera autofocus + auto-exposure settle
    grabbed, frame = camera.read()
    frame, gray = convertFrame(frame)
    firstFrame = gray
    continue

  # difference between the current frame and firstFrame
  frameDelta = cv2.absdiff(firstFrame, gray)
  thresh = cv2.threshold(frameDelta, thresholdLimit, 255, cv2.THRESH_BINARY)[1]
  thresh = cv2.dilate(thresh, None, iterations=dilationPixels) # dilate thresh
  _, contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # find contours (OpenCV 3.x returns image, contours, hierarchy)
 
  for contour in contours:
    if cv2.contourArea(contour) < minArea:
      continue

    # Drawing rect over contour
    (x, y, w, h) = cv2.boundingRect(contour)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    # Fall alarm: the object is over 40% wider than it was in the previous
    # frame. Skip an object's first frame, when lastW[i] is still 0.
    if lastW[i] > 0 and w > lastW[i] * 1.40:
      print("Alarm: {}".format(time.time()))
#   if lastH < h*1.20:
#     print "Alarm!"
    lastW[i] = w
    lastH[i] = h
#   cv2.putText(frame, "{}".format(cv2.contourArea(contour)), (x, y+h+20), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 1)
    cv2.putText(frame, "{}".format(i), (x, y+20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 140, 255), 1)
    detectStatus = "Ok"
    i += 1
  # Hud + fps
  if debug:
    end = time.time()
    seconds = end - start
    fps = round((1 / seconds), 1)
    start = time.time()

    cv2.putText(frame, "Detect: {}".format(detectStatus), (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 140, 255), 1)
    cv2.putText(frame, "FPS: {}".format(fps), (400, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 140, 255), 1)
    cv2.imshow("frameDelta", frameDelta)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("firstFrame", firstFrame)

  cv2.imshow("Feed", frame)
  
  i = 0 # reset per-frame object index
  

  key = cv2.waitKey(1) & 0xFF
  if key == ord("q"): # quit
    break
  if key == ord("n"): # grab a new firstFrame (reset the background)
    firstFrame = None
    

# Release and destroy
camera.release()
cv2.destroyAllWindows()

Check out the second part here

Kim Salmi


tunn.us