Android Face Detection

Published on August 4, 2022 by Anupam Chugh

With the release of Google Play services 7.8, Google introduced the Mobile Vision API, which lets you perform face detection, barcode detection, and text detection. In this tutorial, we’ll develop an Android face detection application that detects human faces in an image.

Android Face Detection

The Android Face Detection API tracks faces in photos and videos using landmarks such as the eyes, nose, ears, cheeks, and mouth. Rather than detecting individual features first, the API detects the face as a whole and then, if requested, detects the landmarks and classifications. The API can also detect faces at various angles.

Android Face Detection - Landmarks

A landmark is a point of interest within a face. The left eye, right eye, and nose base are all examples of landmarks. The API can currently find the following landmarks:

  1. left and right eye
  2. left and right ear
  3. left and right ear tip
  4. base of the nose
  5. left and right cheek
  6. left and right corner of the mouth
  7. base of the mouth

When ‘left’ and ‘right’ are used, they are relative to the subject. For example, the LEFT_EYE landmark is the subject’s left eye, not the eye that is on the left when viewing the image.
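As an illustration, here’s a minimal sketch of how landmark positions can be read from a detected Face (face here is a com.google.android.gms.vision.face.Face obtained from the detector, as shown later in this tutorial):

    // Locate the subject's left eye among the detected landmarks.
    for (Landmark landmark : face.getLandmarks()) {
        if (landmark.getType() == Landmark.LEFT_EYE) {
            PointF position = landmark.getPosition();
            // position.x and position.y are pixel coordinates within the frame
        }
    }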

Classification

Classification determines whether a certain facial characteristic is present. The Android Face API currently supports two classifications:

  • Eyes open: the getIsLeftEyeOpenProbability() and getIsRightEyeOpenProbability() methods are used.
  • Smiling: the getIsSmilingProbability() method is used.
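Each method returns a probability between 0 and 1. A minimal sketch of reading the smile classification from a detected Face:

    // Read the smile probability, guarding against the case where
    // it could not be computed (Face.UNCOMPUTED_PROBABILITY is -1).
    float smileProbability = face.getIsSmilingProbability();
    if (smileProbability != Face.UNCOMPUTED_PROBABILITY) {
        boolean isSmiling = smileProbability > 0.5f; // 0.5 is an arbitrary threshold
    }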

Face Orientation

The orientation of the face is determined using Euler angles, which describe the rotation of the face around the X, Y, and Z axes:

  • Euler Y tells us if the face is looking left or right.
  • Euler Z tells us if the face is rotated/slanted.
  • Euler X tells us if the face is looking up or down (currently not supported).
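A minimal sketch of reading the two supported angles from a detected Face:

    // Read the face orientation; the Mobile Vision API exposes only
    // the Y and Z Euler angles on the Face object.
    float eulerY = face.getEulerY(); // left/right head rotation, in degrees
    float eulerZ = face.getEulerZ(); // in-plane rotation, in degrees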
Note: If a probability can’t be computed, it’s set to -1.

Let’s jump into the business end of this tutorial. Our application will contain a few sample images along with the functionality to capture your own image.

Note: The API supports face detection only. Face recognition isn’t available with the current Mobile Vision API.

Android face detection example project structure

[Image: Android face detection API project structure]

Android face detection code

Add the following dependency inside the build.gradle file of your application.

compile 'com.google.android.gms:play-services-vision:11.0.4'
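Note: the compile configuration was current when this tutorial was written; on newer Gradle versions it is deprecated in favor of implementation:

implementation 'com.google.android.gms:play-services-vision:11.0.4'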

Add the following meta-data inside the application tag in the AndroidManifest.xml file as shown below.

<meta-data
            android:name="com.google.android.gms.vision.DEPENDENCIES"
            android:value="face"/>

This lets the Vision library know that you plan to detect faces within your application. Add the following inside the manifest tag in the AndroidManifest.xml to declare the camera feature and the storage permission.

<uses-feature
        android:name="android.hardware.camera"
        android:required="true"/>
    <uses-permission
        android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

The code for the activity_main.xml layout file is given below.

<?xml version="1.0" encoding="utf-8"?>

<ScrollView xmlns:android="https://schemas.android.com/apk/res/android"
    xmlns:app="https://schemas.android.com/apk/res-auto"
    xmlns:tools="https://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <android.support.constraint.ConstraintLayout xmlns:app="http://schemas.android.com/apk/res-auto"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        tools:context="com.journaldev.facedetectionapi.MainActivity">

        <ImageView
            android:id="@+id/imageView"
            android:layout_width="300dp"
            android:layout_height="300dp"
            android:layout_marginTop="8dp"
            android:src="@drawable/sample_1"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toTopOf="parent" />

        <Button
            android:id="@+id/btnProcessNext"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="8dp"
            android:text="PROCESS NEXT"
            app:layout_constraintHorizontal_bias="0.501"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/imageView" />

        <ImageView
            android:id="@+id/imgTakePic"
            android:layout_width="250dp"
            android:layout_height="250dp"
            android:layout_marginTop="8dp"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/txtSampleDescription"
            app:srcCompat="@android:drawable/ic_menu_camera" />

        <Button
            android:id="@+id/btnTakePicture"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="8dp"
            android:text="TAKE PICTURE"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/imgTakePic" />

        <TextView
            android:id="@+id/txtSampleDescription"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginBottom="8dp"
            android:layout_marginTop="8dp"
            android:gravity="center"
            app:layout_constraintBottom_toTopOf="@+id/txtTakePicture"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/btnProcessNext"
            app:layout_constraintVertical_bias="0.0" />

        <TextView
            android:id="@+id/txtTakePicture"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="8dp"
            android:gravity="center"
            app:layout_constraintLeft_toLeftOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintTop_toBottomOf="@+id/btnTakePicture" />

    </android.support.constraint.ConstraintLayout>

</ScrollView>

We’ve defined two ImageViews, two TextViews, and two Buttons. One set loops through the sample images and displays the results; the other is used to capture an image from the camera. The code for the MainActivity.java file is given below.

package com.journaldev.facedetectionapi;

import android.Manifest;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.net.Uri;
import android.os.Environment;
import android.provider.MediaStore;
import android.support.annotation.NonNull;
import android.support.v4.app.ActivityCompat;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.SparseArray;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

import java.io.File;
import java.io.FileNotFoundException;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {

    ImageView imageView, imgTakePicture;
    Button btnProcessNext, btnTakePicture;
    TextView txtSampleDesc, txtTakenPicDesc;
    private FaceDetector detector;
    Bitmap editedBitmap;
    int currentIndex = 0;
    int[] imageArray;
    private Uri imageUri;
    private static final int REQUEST_WRITE_PERMISSION = 200;
    private static final int CAMERA_REQUEST = 101;

    private static final String SAVED_INSTANCE_URI = "uri";
    private static final String SAVED_INSTANCE_BITMAP = "bitmap";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        imageArray = new int[]{R.drawable.sample_1, R.drawable.sample_2, R.drawable.sample_3};
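        // Build a detector for still images: tracking disabled, and all
        // landmarks and classifications (smile, eyes open) requested.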
        detector = new FaceDetector.Builder(getApplicationContext())
                .setTrackingEnabled(false)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
                .build();

        initViews();

    }

    private void initViews() {
        imageView = (ImageView) findViewById(R.id.imageView);
        imgTakePicture = (ImageView) findViewById(R.id.imgTakePic);
        btnProcessNext = (Button) findViewById(R.id.btnProcessNext);
        btnTakePicture = (Button) findViewById(R.id.btnTakePicture);
        txtSampleDesc = (TextView) findViewById(R.id.txtSampleDescription);
        txtTakenPicDesc = (TextView) findViewById(R.id.txtTakePicture);

        processImage(imageArray[currentIndex]);
        currentIndex++;

        btnProcessNext.setOnClickListener(this);
        btnTakePicture.setOnClickListener(this);
        imgTakePicture.setOnClickListener(this);
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.btnProcessNext:
                imageView.setImageResource(imageArray[currentIndex]);
                processImage(imageArray[currentIndex]);
                if (currentIndex == imageArray.length - 1)
                    currentIndex = 0;
                else
                    currentIndex++;

                break;

            case R.id.btnTakePicture:
                ActivityCompat.requestPermissions(MainActivity.this, new
                        String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_WRITE_PERMISSION);
                break;

            case R.id.imgTakePic:
                ActivityCompat.requestPermissions(MainActivity.this, new
                        String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_WRITE_PERMISSION);
                break;
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        switch (requestCode) {
            case REQUEST_WRITE_PERMISSION:
                if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                    startCamera();
                } else {
                    Toast.makeText(getApplicationContext(), "Permission Denied!", Toast.LENGTH_SHORT).show();
                }
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
            launchMediaScanIntent();
            try {
                processCameraPicture();
            } catch (Exception e) {
                Toast.makeText(getApplicationContext(), "Failed to load Image", Toast.LENGTH_SHORT).show();
            }
        }
    }

    private void launchMediaScanIntent() {
        Intent mediaScanIntent = new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE);
        mediaScanIntent.setData(imageUri);
        this.sendBroadcast(mediaScanIntent);
    }

    private void startCamera() {
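        // Note: Uri.fromFile() throws a FileUriExposedException on Android 7.0+
        // when targetSdkVersion >= 24; a FileProvider-based alternative is
        // sketched at the end of this tutorial.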
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        File photo = new File(Environment.getExternalStorageDirectory(), "photo.jpg");
        imageUri = Uri.fromFile(photo);
        intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
        startActivityForResult(intent, CAMERA_REQUEST);
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        if (imageUri != null) {
            outState.putParcelable(SAVED_INSTANCE_BITMAP, editedBitmap);
            outState.putString(SAVED_INSTANCE_URI, imageUri.toString());
        }
        super.onSaveInstanceState(outState);
    }


    private void processImage(int image) {

        Bitmap bitmap = decodeBitmapImage(image);
        if (detector.isOperational() && bitmap != null) {
            editedBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap
                    .getHeight(), bitmap.getConfig());
            float scale = getResources().getDisplayMetrics().density;
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.GREEN);
            paint.setTextSize((int) (16 * scale));
            paint.setShadowLayer(1f, 0f, 1f, Color.WHITE);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(6f);
            Canvas canvas = new Canvas(editedBitmap);
            canvas.drawBitmap(bitmap, 0, 0, paint);
            Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
            SparseArray<Face> faces = detector.detect(frame);
            txtSampleDesc.setText(null);

            for (int index = 0; index < faces.size(); ++index) {
                Face face = faces.valueAt(index);
                canvas.drawRect(
                        face.getPosition().x,
                        face.getPosition().y,
                        face.getPosition().x + face.getWidth(),
                        face.getPosition().y + face.getHeight(), paint);


                canvas.drawText("Face " + (index + 1), face.getPosition().x + face.getWidth(), face.getPosition().y + face.getHeight(), paint);

                txtSampleDesc.setText(txtSampleDesc.getText() + "FACE " + (index + 1) + "\n");
                txtSampleDesc.setText(txtSampleDesc.getText() + "Smile probability:" + " " + face.getIsSmilingProbability() + "\n");
                txtSampleDesc.setText(txtSampleDesc.getText() + "Left Eye Is Open Probability: " + " " + face.getIsLeftEyeOpenProbability() + "\n");
                txtSampleDesc.setText(txtSampleDesc.getText() + "Right Eye Is Open Probability: " + " " + face.getIsRightEyeOpenProbability() + "\n\n");

                for (Landmark landmark : face.getLandmarks()) {
                    int cx = (int) (landmark.getPosition().x);
                    int cy = (int) (landmark.getPosition().y);
                    canvas.drawCircle(cx, cy, 8, paint);
                }


            }

            if (faces.size() == 0) {
                txtSampleDesc.setText("Scan Failed: Found nothing to scan");
            } else {
                imageView.setImageBitmap(editedBitmap);
                txtSampleDesc.setText(txtSampleDesc.getText() + "No of Faces Detected: " + " " + String.valueOf(faces.size()));
            }
        } else {
            txtSampleDesc.setText("Could not set up the detector!");
        }
    }

    private Bitmap decodeBitmapImage(int image) {
        int targetW = 300;
        int targetH = 300;
        BitmapFactory.Options bmOptions = new BitmapFactory.Options();
        bmOptions.inJustDecodeBounds = true;

        BitmapFactory.decodeResource(getResources(), image,
                bmOptions);

        int photoW = bmOptions.outWidth;
        int photoH = bmOptions.outHeight;

        int scaleFactor = Math.min(photoW / targetW, photoH / targetH);
        bmOptions.inJustDecodeBounds = false;
        bmOptions.inSampleSize = scaleFactor;

        return BitmapFactory.decodeResource(getResources(), image,
                bmOptions);
    }

    private void processCameraPicture() throws Exception {
        Bitmap bitmap = decodeBitmapUri(this, imageUri);
        if (detector.isOperational() && bitmap != null) {
            editedBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap
                    .getHeight(), bitmap.getConfig());
            float scale = getResources().getDisplayMetrics().density;
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.GREEN);
            paint.setTextSize((int) (16 * scale));
            paint.setShadowLayer(1f, 0f, 1f, Color.WHITE);
            paint.setStyle(Paint.Style.STROKE);
            paint.setStrokeWidth(6f);
            Canvas canvas = new Canvas(editedBitmap);
            canvas.drawBitmap(bitmap, 0, 0, paint);
            Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
            SparseArray<Face> faces = detector.detect(frame);
            txtTakenPicDesc.setText(null);

            for (int index = 0; index < faces.size(); ++index) {
                Face face = faces.valueAt(index);
                canvas.drawRect(
                        face.getPosition().x,
                        face.getPosition().y,
                        face.getPosition().x + face.getWidth(),
                        face.getPosition().y + face.getHeight(), paint);


                canvas.drawText("Face " + (index + 1), face.getPosition().x + face.getWidth(), face.getPosition().y + face.getHeight(), paint);

                txtTakenPicDesc.setText("FACE " + (index + 1) + "\n");
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Smile probability:" + " " + face.getIsSmilingProbability() + "\n");
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Left Eye Is Open Probability: " + " " + face.getIsLeftEyeOpenProbability() + "\n");
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "Right Eye Is Open Probability: " + " " + face.getIsRightEyeOpenProbability() + "\n\n");

                for (Landmark landmark : face.getLandmarks()) {
                    int cx = (int) (landmark.getPosition().x);
                    int cy = (int) (landmark.getPosition().y);
                    canvas.drawCircle(cx, cy, 8, paint);
                }


            }

            if (faces.size() == 0) {
                txtTakenPicDesc.setText("Scan Failed: Found nothing to scan");
            } else {
                imgTakePicture.setImageBitmap(editedBitmap);
                txtTakenPicDesc.setText(txtTakenPicDesc.getText() + "No of Faces Detected: " + " " + String.valueOf(faces.size()));
            }
        } else {
            txtTakenPicDesc.setText("Could not set up the detector!");
        }
    }

    private Bitmap decodeBitmapUri(Context ctx, Uri uri) throws FileNotFoundException {
        int targetW = 300;
        int targetH = 300;
        BitmapFactory.Options bmOptions = new BitmapFactory.Options();
        bmOptions.inJustDecodeBounds = true;
        BitmapFactory.decodeStream(ctx.getContentResolver().openInputStream(uri), null, bmOptions);
        int photoW = bmOptions.outWidth;
        int photoH = bmOptions.outHeight;

        int scaleFactor = Math.min(photoW / targetW, photoH / targetH);
        bmOptions.inJustDecodeBounds = false;
        bmOptions.inSampleSize = scaleFactor;

        return BitmapFactory.decodeStream(ctx.getContentResolver()
                .openInputStream(uri), null, bmOptions);
    }


    @Override
    protected void onDestroy() {
        super.onDestroy();
        detector.release();
    }
}

A few inferences drawn from the above code:

  • imageArray holds the sample images that’ll be scanned for faces when the “PROCESS NEXT” button is clicked.

  • The detector is instantiated with the below code snippet:

    FaceDetector detector = new FaceDetector.Builder(getContext())
            .setTrackingEnabled(false)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setMode(FaceDetector.FAST_MODE)
            .build();
    

    Detecting landmarks adds to the computation time, so they need to be explicitly requested. The face detector can be set to FAST_MODE or ACCURATE_MODE as per our requirements. We’ve set tracking to false in the above code since we’re dealing with still images; it can be set to true for detecting faces in a video.

  • The processImage() and processCameraPicture() methods contain the code where we actually detect the faces and draw a rectangle over them.

  • detector.isOperational() is used to check whether the Google Play services library on the phone currently supports the Vision API (if it doesn’t, Google Play downloads the required native libraries to enable support).

  • The code snippet that actually does the work of face detection is:

    Frame frame = new Frame.Builder().setBitmap(editedBitmap).build();
    SparseArray<Face> faces = detector.detect(frame);
  • Once detected, we loop through the faces array to find the position and attributes of each face.

  • The attributes for each face are appended in the TextView beneath the button.

  • This works the same way when an image is captured with the camera, except that we need to request the required permission at runtime and save the URI and bitmap returned by the camera application (see the sketch below for a more robust way to hand the camera a file URI).
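As noted in the code, Uri.fromFile() triggers a FileUriExposedException on Android 7.0 (API 24) and above once your targetSdkVersion is 24 or higher. Below is a minimal sketch of a FileProvider-based startCamera(), under these assumptions: android.support.v4.content.FileProvider is on the classpath (the support library is already used by this project), and the authority string matches a <provider> entry and file_paths resource that you would add to your own manifest.

    private void startCamera() {
        // Store the photo in app-specific external storage instead of the
        // external storage root.
        File photo = new File(getExternalFilesDir(Environment.DIRECTORY_PICTURES), "photo.jpg");
        // "com.journaldev.facedetectionapi.fileprovider" is a hypothetical
        // authority; it must match a <provider> entry in AndroidManifest.xml.
        imageUri = FileProvider.getUriForFile(this,
                "com.journaldev.facedetectionapi.fileprovider", photo);
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
        // Grant the camera app write access to the content URI.
        intent.addFlags(Intent.FLAG_GRANT_WRITE_URI_PERMISSION);
        startActivityForResult(intent, CAMERA_REQUEST);
    }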

The output of the above application in action is given below.

[Image: Android face detection app in action]

Try capturing the photo of a dog and you’ll see that the Vision API doesn’t detect its face (the API detects human faces only). This brings an end to this tutorial. You can download the final Android Face Detection API project from the link below.

Download Android Face Detection Project

Reference: Official Documentation
