How to Detect Motion in Python OpenCV

Salman Mehmood Feb 02, 2024

In this article, we will learn how we can create a motion detection project with the help of OpenCV and Python.

Create Motion Detection Project Using OpenCV and Python

First of all, let's talk about the requirements of this project. The first requirement, obviously, is to have Python installed; we also need one external package, OpenCV.

We need to open the command prompt and run `pip install opencv-python` to install this package on your PC. Then let's jump to our editor and start writing our code.

The first thing we will import is our required library, cv2. Next, we will take the data from our webcam using OpenCV's VideoCapture() method.

Let's create an object called Video, and pass 0 to VideoCapture() because the built-in webcam sits at camera index 0.

import cv2

Video = cv2.VideoCapture(0)
First_Frame = None

Now we will create a while True (infinite) loop, because a video is a continuous sequence of image frames, and we want to keep processing frames for as long as the capture runs.

Now we will place several instructions inside the while loop. In the first line, we unpack two variables, Check and frame, from the data returned by the read() method: Check is a boolean that tells us whether a frame was read successfully, and frame is the captured image. In the next instruction, we will convert this extracted image into grayscale.

But why are we converting it to grayscale? Frame differencing only needs pixel intensities, so working with a single channel makes the comparison simpler and less sensitive to color noise.

We use the cvtColor() method to convert to grayscale, and it takes two parameters. First is the frame or image that we want to convert, and second is the conversion code cv2.COLOR_BGR2GRAY, which converts a BGR image into gray.

Now we will blur (smooth) the image so that small pixel-level noise is not mistaken for motion, which makes detecting the actual moving object much easier. We use the GaussianBlur() method to apply the smoothing and pass it the grayscale image, the kernel size, and the sigma value.

while True:
    Check, frame = Video.read()
    if not Check:  # stop if the frame could not be read
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
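To build intuition for why blurring helps, here is an illustrative pure-Python sketch (a hypothetical helper, not OpenCV's implementation) of a simple 1-D box blur damping a single noisy pixel:

```python
def box_blur(row, radius=1):
    """Average each pixel with its neighbors (simple 1-D box blur)."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) // len(window))
    return out

row = [10, 10, 10, 200, 10, 10, 10]  # one noisy spike at 200
print(box_blur(row))  # [10, 10, 73, 73, 73, 10, 10]
```

After blurring, the spike drops from 200 to 73, so a later threshold of, say, 100 no longer flags it as motion.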

We will create an if statement that checks whether we already have a reference frame, because we want to store the first captured frame in First_Frame and use it as our reference.

Let's take a look at what physics says about motion: motion is always identified relative to a reference point. We can explain this with an example.

Say you are sitting on a train. To you, the trees outside appear to move, yet they are still; it is you who are moving relative to them. In that example, the trees are the reference point; in our program, the first frame is the reference.

We fix First_Frame as our reference frame; if anything changes relative to it, we can say that motion has occurred.

Now we will add the condition: if the First_Frame variable is None, which is true only on the first iteration, we set First_Frame to the grayscale image stored in the gray variable and continue to the next frame.

if First_Frame is None:
    First_Frame = gray
    continue

We will use the absdiff() method to find the per-pixel difference between the frames. Let's create a delta_frame variable and pass the two frames to the absdiff() method for comparison.
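Conceptually, absdiff() computes the per-pixel absolute difference between the two frames; for one row of made-up grayscale pixels, a pure-Python sketch looks like this:

```python
first_row = [12, 40, 200, 90]   # row from the reference frame
current_row = [14, 38, 60, 95]  # same row from the current frame
delta_row = [abs(a - b) for a, b in zip(first_row, current_row)]
print(delta_row)  # [2, 2, 140, 5] -- large only where the scene changed
```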

We need to set a threshold, a limit below which a change is ignored, because we do not want noise to be detected as motion.

To do this, we use the threshold() method. Its parameters are the delta_frame, the threshold intensity (50), the value assigned to pixels above the threshold (255, i.e., white), and the threshold type cv2.THRESH_BINARY. The method returns a tuple of (threshold value, thresholded image), so we select the second element with [1].
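As a rough pure-Python picture of what THRESH_BINARY does to each pixel (illustrative only; the real cv2.threshold() works on whole images and returns the tuple described above):

```python
delta_row = [2, 2, 140, 5, 60]  # hypothetical per-pixel differences
threshold_row = [255 if d > 50 else 0 for d in delta_row]
print(threshold_row)  # [0, 0, 255, 0, 255]
```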

We also need one more cleanup step in the next instruction, using the dilate() function. Despite appearances, this is not a blur: it is a morphological operation that grows the white regions, filling small holes and gaps. It accepts three parameters: the first is the thresholded image, the second is the kernel (None uses a default 3x3 kernel), and the third is the number of iterations.

The iterations parameter controls how far the white regions grow; increasing it merges nearby motion regions into one, but too high a value will also inflate noise into detectable blobs.
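To see what the iterations do, here is an illustrative 1-D sketch of dilation (a hypothetical helper, not OpenCV's implementation): each pass turns a pixel white if it or a neighbor is white, so repeated passes merge nearby white blobs.

```python
def dilate_1d(mask, iterations=1):
    """Grow white (255) regions by one pixel per iteration (1-D dilation)."""
    for _ in range(iterations):
        mask = [
            255 if any(mask[max(0, i - 1): i + 2]) else 0
            for i in range(len(mask))
        ]
    return mask

mask = [255, 0, 0, 0, 0, 255, 0, 0, 0]  # two separate white blobs
print(dilate_1d(mask, iterations=1))  # [255, 255, 0, 0, 255, 255, 255, 0, 0]
print(dilate_1d(mask, iterations=2))  # blobs grow further and merge
```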

This time we will find the contours, so what are contours? Contours are the boundaries of the white regions in the thresholded image, i.e., the outlines of the areas where motion is happening.

If the background is still and only a hand is moving, the outline of the hand is a contour.

The findContours() method helps to find these contours, and it accepts three parameters: first is the image (we pass a copy made with copy() so the original thresholded frame is not modified), second is the retrieval mode cv2.RETR_EXTERNAL (only the outermost contours), and third is the approximation method cv2.CHAIN_APPROX_SIMPLE.

delta_frame = cv2.absdiff(First_Frame, gray)
Threshold_frame = cv2.threshold(delta_frame, 50, 255, cv2.THRESH_BINARY)[1]
Threshold_frame = cv2.dilate(Threshold_frame, None, iterations=2)
(cntr, _) = cv2.findContours(
    Threshold_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
)

Now we will iterate over the contours and require a minimum area before treating a contour as motion. If we do not filter by area, we will get very noisy motion detection.

First of all, we check the contour area: if it is less than a thousand pixels, we do not consider it a motion area and continue the iteration; if it is greater, we will draw a rectangle around it.

for contour in cntr:
    if cv2.contourArea(contour) < 1000:
        continue

The boundingRect() method gives four values, (x, y, width, height), describing the upright rectangle that bounds the contour. Now we will draw that rectangle with the help of the rectangle() method.

The first parameter is the frame or image on which we want to draw the rectangle. Next come the two opposite corner points, (x, y) and (x + w, y + h), then the color of the rectangle, and finally the line thickness (the size of the pen used to draw it).

(x, y, w, h) = cv2.boundingRect(contour)
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)

Complete source code:

import cv2

Video = cv2.VideoCapture(0)
First_Frame = None

while True:
    Check, frame = Video.read()
    if not Check:  # stop if the frame could not be read
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if First_Frame is None:
        First_Frame = gray
        continue
    delta_frame = cv2.absdiff(First_Frame, gray)
    Threshold_frame = cv2.threshold(delta_frame, 50, 255, cv2.THRESH_BINARY)[1]
    Threshold_frame = cv2.dilate(Threshold_frame, None, iterations=2)
    (cntr, _) = cv2.findContours(
        Threshold_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    for contour in cntr:
        if cv2.contourArea(contour) < 1000:
            continue
        (x, y, w, h) = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
    cv2.imshow("Frame", frame)
    Key = cv2.waitKey(1)
    if Key == ord("q"):
        break

Video.release()
cv2.destroyAllWindows()

Now we can see that motion detection happens when the hand is moving.
