Nagesh Singh Chauhan

Implementing Face Detection using Python and OpenCV

Learn how to build a face detection model

Computer vision is all the rage in the machine learning and deep learning community these days. And one of the most popular applications of this domain is face detection.

That is what we will learn to build in this detailed article. But before we start with that, let’s have a look at two real-world use cases:


1. Making Cars Safer

Car manufacturers around the world, such as Mercedes, Tesla, and BMW, are increasingly focusing on making cars more personal and safer to drive. In their attempts to build smarter car features, it makes sense for manufacturers to use AI/ML to better understand human emotions. Using facial detection, smart cars can alert the driver when they are feeling drowsy or lethargic.

The US Department of Transportation has reported that driving-related errors cause around 95% of fatal road accidents. Facial emotion detection can find subtle changes in the facial micro-expressions that precede drowsiness and send personalized alerts to the driver, asking them to stop the car for a coffee break, change the music, adjust the temperature, and so on.


2. Facial Emotion Detection in Interviews

A candidate-interviewer interaction is subject to many categories of judgment and some degree of misinterpretation. Such judgments make it hard to determine whether the candidate is actually fit for the job. Identifying what a candidate is really trying to convey is out of the interviewer's hands because of the multiple layers of language interpretation, cognitive biases, and context that lie in between. That's where AI comes in: it can measure the candidate's facial expressions to capture their mood and further assess their personality traits.


Employee morale can also be gauged with this technology by recording interactions on the job. As an HR tool, it can help not only in devising recruiting strategies but also in designing HR policies that bring out the best performance from employees.


Having seen how facial detection technology can help in making better decisions, let us dig deeper and understand what exactly face detection is and how we can create a simple model that can detect faces.


What is Face detection?


Face detection is the ability of computer technology to identify people’s faces within digital images. Face detection applications employ algorithms focused on detecting human faces within larger images that may contain landscapes, objects etc.


In order to work, face detection applications use machine learning algorithms to detect human faces within images of any size. The larger images might contain numerous objects that aren't faces, such as landscapes, animals, buildings, and other parts of humans (e.g. legs, shoulders, and arms).


Facial detection/recognition technology was previously associated only with the security sector, but today it is actively expanding into other industries including retail, marketing, healthcare, etc.


How does face detection work?


While the process is somewhat complex, face detection algorithms often begin by searching for human eyes. Eyes constitute what is known as a valley region and are one of the easiest features to detect. Once eyes are detected, the algorithm might then attempt to detect facial regions including eyebrows, the mouth, nose, nostrils, and the iris. Once the algorithm surmises that it has detected a facial region, it can then apply additional tests to validate whether it has, in fact, detected a face. — https://www.facefirst.com/blog/face-detection-vs-face-recognition/


OpenCV



OpenCV (Open Source Computer Vision Library) is an image and video processing library with bindings in C++, C, Python, and Java. OpenCV is used for all sorts of image and video analysis, like facial recognition and detection, license plate reading, photo editing, advanced robotic vision, optical character recognition, and a whole lot more.


OpenCV has three built-in face recognizers and thanks to its clean coding, you can use any of them just by changing a single line of code. Here are the names of those face recognizers and their OpenCV calls:


  • EigenFaces — cv2.face.createEigenFaceRecognizer()
  • FisherFaces — cv2.face.createFisherFaceRecognizer()
  • Local Binary Patterns Histograms (LBPH) — cv2.face.createLBPHFaceRecognizer()
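
A quick side note: in recent versions of the opencv-contrib-python package these factory functions are exposed as *_create() rather than create*Recognizer(). A minimal sketch of creating one of them, with the training data left as commented placeholders:

import cv2
import numpy as np

# cv2.face lives in the contrib modules, so this needs the opencv-contrib-python package.
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Placeholders: face_images would be a list of grayscale face crops, labels their integer IDs.
# recognizer.train(face_images, np.array(labels))
# label, confidence = recognizer.predict(test_face)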


How to find faces using OpenCV?


There are basically two primary ways to find faces using OpenCV:

  • Haar Classifier
  • LBP Cascade Classifier


Most developers use Haar because it is more accurate, but it is also much slower than LBP. I'm also going with the Haar Classifier for this tutorial. The OpenCV package actually has all the data you need to use Haar effectively. Basically, you just need an XML file with the right face data in it. You can create your own if you know what you are doing, or you can just use what comes with OpenCV. To know more about the Haar Classifier and the LBP Cascade Classifier, click on this and this.

So now we'll write a simple Python program that will take sample images as input and try to detect faces and eyes using OpenCV. You can download the Haar Classifier XML files for face detection and eye detection from here and here, and keep the XML files in your working directory.
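
Alternatively, if you are on a reasonably recent opencv-python build, bundled copies of these XML files can be loaded directly through cv2.data.haarcascades. A small sketch, assuming such a build:

import cv2

# The Haar cascade XML files that ship with the opencv-python wheel live in cv2.data.haarcascades.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')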


Dependencies:

pip install numpy
pip install matplotlib
pip install opencv-python


Input images:



Source Code:

import cv2
import numpy as np
from matplotlib import pyplot as plt
import glob

# Collect all .jpg files in the current working directory.
txtfiles = []
for file in glob.glob("*.jpg"):
    txtfiles.append(file)

for ix in txtfiles:
    # Read the image in color and make a copy so the original is not altered.
    img = cv2.imread(ix, cv2.IMREAD_COLOR)
    imgtest1 = img.copy()
    # The Haar face detector expects a grayscale image.
    imgtest = cv2.cvtColor(imgtest1, cv2.COLOR_BGR2GRAY)

    # Load the Haar cascades (downloaded XML files) for face and eye detection.
    facecascade = cv2.CascadeClassifier('D:\\KJ\\Nagesh\\Downloads\\face_recognition\\haarcascade_frontalface_default.xml')
    eye_cascade = cv2.CascadeClassifier('D:\\KJ\\Nagesh\\Downloads\\face_recognition\\haarcascade_eye.xml')

    # Detect faces in the grayscale image.
    faces = facecascade.detectMultiScale(imgtest, scaleFactor=1.2, minNeighbors=5)
    print('Total number of Faces found', len(faces))

    for (x, y, w, h) in faces:
        # Draw a rectangle around each detected face.
        face_detect = cv2.rectangle(imgtest, (x, y), (x+w, y+h), (255, 0, 255), 2)
        roi_gray = imgtest[y:y+h, x:x+w]
        roi_color = imgtest[y:y+h, x:x+w]
        plt.imshow(face_detect)

        # Detect eyes inside the face region and draw rectangles around them.
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            eye_detect = cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (255, 0, 255), 2)
            plt.imshow(eye_detect)

# Render the figures (needed when running as a plain script rather than a notebook).
plt.show()

Now, let's understand this simple program:


Step 1: Import numpy, matplotlib, OpenCV and glob

import numpy as np
from matplotlib import pyplot as plt
import cv2
import glob

Step 2: Using glob, loop through each of the .jpg files present in your current working directory and store them in a list 'txtfiles'. To know how to read files using glob, click here.

txtfiles = [] 
for file in glob.glob("*.jpg"):
    txtfiles.append(file)

Step 3: Read each of the .jpg files using cv2.imread(). The function cv2.imread() takes two arguments: the first is the path to the image itself and the second specifies the way the image should be read. We can use any one of the three flags below as our second argument.


  • cv2.IMREAD_COLOR — used to load a color image. It neglects the image transparency and is the default flag. It is for 8-bit images that don't have an alpha channel.
  • cv2.IMREAD_GRAYSCALE — loads the image in grayscale.
  • cv2.IMREAD_UNCHANGED — loads the image as is, including the alpha channel (RGBA).


When you load an image using OpenCV, it loads it into BGR color space by default.
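
Because of this, colors will look wrong if you hand a BGR image straight to matplotlib, which expects RGB. A small illustrative sketch ('photo.jpg' is just a placeholder filename):

import cv2
from matplotlib import pyplot as plt

# OpenCV loads images in BGR order; convert to RGB before displaying with matplotlib.
img_bgr = cv2.imread('photo.jpg', cv2.IMREAD_COLOR)
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)

plt.imshow(img_rgb)
plt.show()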

After that, make a copy of the image that was read so that the original image is not altered.

img = cv2.imread(ix, cv2.IMREAD_COLOR)
imgtest1 = img.copy()

Step 4: Convert the image to grayscale, as the OpenCV face detector expects grayscale images.

imgtest = cv2.cvtColor(imgtest1, cv2.COLOR_BGR2GRAY)

Step 5: Now, we have to load our Haar classifiers (the downloaded XML files) for face detection and eye detection. cv2.CascadeClassifier() takes as input the training file of the Haar classifier.

facecascade = cv2.CascadeClassifier('D:\\KJ\\Nagesh\\Downloads\\face_recognition\\haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('D:\\KJ\\Nagesh\\Downloads\\face_recognition\\haarcascade_eye.xml')
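
One small safeguard worth adding (it is not part of the original code): cv2.CascadeClassifier() does not raise an error when the XML path is wrong, it simply returns an empty classifier, so it is worth checking for that explicitly.

# Fail fast if either cascade file could not be loaded (e.g. a wrong path).
if facecascade.empty() or eye_cascade.empty():
    raise IOError('Could not load one of the Haar cascade XML files, check the paths')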

Step 6: Now, how do we detect a face from an image using the CascadeClassifier?

Well, again OpenCV's CascadeClassifier has made it simple for us with detectMultiScale(), which detects exactly what you need.

detectMultiScale(image, scaleFactor, minNeighbors)

Below are the arguments that should be passed to detectMultiScale().


This is a general function to detect objects; in this case, it'll detect faces since we called it on the face cascade. If it finds faces, it returns a list of their positions in the form (x, y, w, h); if not, it returns an empty sequence.

  • Image: The first input is the grayscale image.

  • scaleFactor: This parameter compensates for the false perception in size that occurs when one face appears bigger than another simply because it is closer to the camera; it specifies how much the image size is reduced at each image scale.

  • minNeighbors: The detection algorithm uses a moving window to detect objects; minNeighbors defines how many detections must be found near the current one before it can declare a face found. (A short tuning sketch follows this list.)
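
To get a feel for how these two parameters interact, you can sweep over a few values and compare the counts. A small sketch, reusing facecascade and imgtest from the program above:

for sf in (1.05, 1.2, 1.4):
    for mn in (3, 5, 8):
        # Smaller scaleFactor = finer (slower) search; larger minNeighbors = stricter filtering.
        found = facecascade.detectMultiScale(imgtest, scaleFactor=sf, minNeighbors=mn)
        print('scaleFactor=%.2f, minNeighbors=%d -> %d face(s)' % (sf, mn, len(found)))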


Step 7: Now print the number of faces found in each image:

print('Total number of Faces found', len(faces))

Step 8: Loop through the list of faces and draw rectangles on the image. Here we're basically locating each face, taking its position and size, and drawing a rectangle around it.

for (x, y, w, h) in faces:
    face_detect = cv2.rectangle(imgtest, (x, y), (x+w, y+h), (255, 0, 255), 2)
    roi_gray = imgtest[y:y+h, x:x+w]
    roi_color = imgtest[y:y+h, x:x+w]
    plt.imshow(face_detect)

Step 9: Next we'll perform eye detection, and the interesting part is that the algorithm probably wouldn't rely on finding an eyeball alone to detect an eye. Most eye detection algorithms also use the surrounding skin, eyelids, eyelashes, and eyebrows to make the detection.

eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex, ey, ew, eh) in eyes:
    eye_detect = cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (255, 0, 255), 2)
    plt.imshow(eye_detect)
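
If you run this as a plain script rather than in a notebook, remember that matplotlib only renders once plt.show() is called; alternatively you can write the annotated images to disk. A small sketch ('detected_' + ix is just an illustrative output filename):

# Save the annotated image next to the originals instead of (or in addition to) displaying it.
cv2.imwrite('detected_' + ix, face_detect)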

Finally, our output:


Conclusion


So we can see that with just a few lines of code we got started with face detection. From here, we can go ahead and create a face recognition model using OpenCV which will predict the name and other information related to that particular person. This blog is for beginners who want to build something cool using this amazing programming language, Python.


There are many available Haar classifiers for detecting various human body parts. Check it out here.


Well, that's all in this blog. Thanks for reading :)


Happy Learning!!!


You can reach out to me on LinkedIn.
