SoFunction
Updated on 2025-04-09

Sample code for dynamic face detection on Android (with detected face count)

Face detection

The face detection here is not face recognition: it only determines whether a face is present. Once a face is found, you can pass the preview frame to a face recognition service (the Face++ SDK is recommended for that). The demo in this article does not include recognition; interested readers can add it themselves.

Android's built-in face detection

Here we use the face detection class that Android provides, FaceDetector. This class offers convenient face detection, so we use it for dynamic detection. The principle is simple: the camera's PreviewCallback delivers each preview frame, and FaceDetector checks whether that frame contains a face. If you also want to draw the face region on a SurfaceView, lock its canvas, draw, and then unlock and post it.

Step 1

First, let's define a SurfaceView that covers the SurfaceView used by the camera; we will draw the face rectangle on it.

public class FindFaceView extends SurfaceView implements SurfaceHolder.Callback {

  private SurfaceHolder holder;
  private int mWidth;
  private int mHeight;
  private float eyesDistance;

  public FindFaceView(Context context, AttributeSet attrs) {
    super(context, attrs);
    holder = getHolder();
    holder.addCallback(this);
    // Make the overlay transparent and keep it above the camera preview
    holder.setFormat(PixelFormat.TRANSLUCENT);
    setZOrderOnTop(true);
  }

  @Override
  public void surfaceChanged(SurfaceHolder holder, int format, int width,
                int height) {
    mWidth = width;
    mHeight = height;
  }

  @Override
  public void surfaceCreated(SurfaceHolder holder) {

  }

  @Override
  public void surfaceDestroyed(SurfaceHolder holder) {

  }

  public void drawRect(FaceDetector.Face[] faces, int numberOfFaceDetected) {
    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
      // Clear whatever was drawn for the previous frame
      Paint clipPaint = new Paint();
      clipPaint.setAntiAlias(true);
      clipPaint.setStyle(Paint.Style.STROKE);
      clipPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
      canvas.drawPaint(clipPaint);
      canvas.drawColor(Color.TRANSPARENT);
      Paint paint = new Paint();
      paint.setAntiAlias(true);
      paint.setColor(Color.RED);
      paint.setStyle(Paint.Style.STROKE);
      paint.setStrokeWidth(5.0f);
      for (int i = 0; i < numberOfFaceDetected; i++) {
        FaceDetector.Face face = faces[i];
        PointF midPoint = new PointF();
        // Get the point midway between the eyes
        face.getMidPoint(midPoint);
        // Get the distance between the two eyes
        eyesDistance = face.eyesDistance();
        // Scale factors between the preview image and the on-screen display
        // area (500 x 600 is the assumed detection-bitmap size; use floats,
        // otherwise integer division truncates the scale to 0 or 1)
        float scale_x = mWidth / 500f;
        float scale_y = mHeight / 600f;
        Log.i("FindFaceView", "eyesDistance=" + eyesDistance);
        Log.i("FindFaceView", "midPoint.x=" + midPoint.x);
        Log.i("FindFaceView", "midPoint.y=" + midPoint.y);
        // The front-camera image is mirrored relative to what is displayed, so
        // the x coordinate obtained from the frame must be flipped around the
        // frame width (240 here) before drawing
        canvas.drawRect(
            (int) (240 - midPoint.x - eyesDistance) * scale_x,
            (int) midPoint.y * scale_y,
            (int) (240 - midPoint.x + eyesDistance) * scale_x,
            (int) (midPoint.y + 3 * eyesDistance) * scale_y, paint);
      }
      holder.unlockCanvasAndPost(canvas);
    }
  }
}

Key points

1. holder = getHolder(); obtains the SurfaceHolder, and Canvas canvas = holder.lockCanvas(); binds it to the canvas we draw the face rectangle on. After that we can draw freely, provided we have the face coordinates. Remember to call holder.unlockCanvasAndPost(canvas) when done.

2. Equally important: the SurfaceView that covers the camera preview must be made transparent and raised to the top of the view hierarchy:

 holder.setFormat(PixelFormat.TRANSLUCENT);
 setZOrderOnTop(true);
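The mirroring-and-scaling arithmetic in drawRect is easy to get wrong, so it helps to check it in isolation. Below is a pure-Java sketch of that mapping under the same assumptions as the code above (frame width 240, detection bitmap 500 x 600); the class and method names are mine, not part of the demo:

```java
// Maps a face midpoint from the (mirrored) detection bitmap into screen
// coordinates, reproducing the arithmetic used in drawRect above.
public class FaceRectMapper {

  /** Returns {left, top, right, bottom} in screen pixels. */
  static float[] mapFaceRect(float midX, float midY, float eyesDistance,
                             int viewWidth, int viewHeight,
                             float frameWidth, float refWidth, float refHeight) {
    float scaleX = viewWidth / refWidth;   // e.g. refWidth = 500
    float scaleY = viewHeight / refHeight; // e.g. refHeight = 600
    float mirroredX = frameWidth - midX;   // front-camera preview is mirrored
    return new float[] {
        (mirroredX - eyesDistance) * scaleX,
        midY * scaleY,
        (mirroredX + eyesDistance) * scaleX,
        (midY + 3 * eyesDistance) * scaleY
    };
  }

  public static void main(String[] args) {
    // Eye midpoint at (140, 100), eyes 40px apart, on a 1000 x 1200 view:
    // mirroredX = 240 - 140 = 100, scale = 2 in both axes
    float[] r = mapFaceRect(140f, 100f, 40f, 1000, 1200, 240f, 500f, 600f);
    System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]);
    // -> 120.0,200.0,280.0,440.0
  }
}
```

Note the box extends 3 * eyesDistance below the eye midpoint, a rough heuristic for covering the whole face.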

Step 2

Now we detect the faces; the prerequisite, of course, is obtaining the preview frames from the camera.
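For reference, the raw frame delivered by the preview callback is NV21 (YUV 4:2:0): a full-resolution luma plane plus interleaved chroma at quarter resolution, i.e. width * height * 3 / 2 bytes. A quick pure-Java check of that arithmetic (the class and method names are mine, for illustration):

```java
// Size in bytes of an NV21 (YUV 4:2:0) preview buffer: one luma byte per
// pixel, plus two interleaved chroma bytes per 2x2 pixel block.
public class Nv21Size {

  static int bufferSize(int width, int height) {
    int ySize = width * height;       // full-resolution Y plane
    int vuSize = width * height / 2;  // interleaved VU plane
    return ySize + vuSize;
  }

  public static void main(String[] args) {
    // The 176 x 144 preview used in this demo:
    System.out.println(bufferSize(176, 144)); // -> 38016
  }
}
```

Knowing this size matters if you ever allocate your own buffers with addCallbackBuffer.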

public class FaceRecognitionDemoActivity extends Activity implements
    OnClickListener {

  private SurfaceView preview;
  private Camera camera;
  private Camera.Parameters parameters;
  private int orientionOfCamera; // mounting angle of the front camera
  private int faceNumber; // number of faces detected
  private FaceDetector.Face[] faces;
  private FindFaceView mFindFaceView;
  private ImageView iv_photo;
  private Button bt_camera;
  TextView mTV;

  /**
   * Called when the activity is first created.
   */
  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main); // your layout resource
  }

  @Override
  protected void onStart() {
    super.onStart();
    iv_photo = (ImageView) findViewById(R.id.iv_photo);
    bt_camera = (Button) findViewById(R.id.bt_camera);
    mTV = (TextView) findViewById(R.id.show_count);
    bt_camera.setOnClickListener(this);

    mFindFaceView = (FindFaceView) findViewById(R.id.my_preview);

    preview = (SurfaceView) findViewById(R.id.preview);
    // Set the buffer type (essential on pre-3.0 devices)
    preview.getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    // Set the resolution of the surface
    preview.getHolder().setFixedSize(176, 144);
    // Keep the screen on (essential)
    preview.getHolder().setKeepScreenOn(true);

    preview.getHolder().addCallback(new SurfaceCallback());
  }

  private final class MyPictureCallback implements PictureCallback {

    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
      try {
        Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
        // Rotate the captured picture upright
        Matrix matrix = new Matrix();
        matrix.postRotate(-90);
        Bitmap bmp = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(),
            bitmap.getHeight(), matrix, true);
        bitmap.recycle();
        iv_photo.setImageBitmap(bmp);
        // Taking a picture stops the preview, so restart it
        camera.startPreview();
      } catch (Exception e) {
        e.printStackTrace();
      }
    }

  }

  private final class SurfaceCallback implements Callback {

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width,
                  int height) {
      if (camera != null) {
        parameters = camera.getParameters();
        parameters.setPictureFormat(ImageFormat.JPEG);
        // Set the size of the preview area
        parameters.setPreviewSize(width, height);
        // Set the number of preview frames per second
        parameters.setPreviewFrameRate(20);
        // Set the size of the captured picture
        parameters.setPictureSize(width, height);
        parameters.setJpegQuality(80);
        camera.setParameters(parameters);
      }
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
      int cameraCount = 0;
      Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
      cameraCount = Camera.getNumberOfCameras();
      // Find and open the front camera
      for (int i = 0; i < cameraCount; i++) {
        Camera.getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
          try {
            camera = Camera.open(i);
            camera.setPreviewDisplay(holder);
            setCameraDisplayOrientation(i, camera);
            // The most important step: register the preview-frame callback
            camera.setPreviewCallback(new MyPreviewCallback());
            camera.startPreview();
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
      // Remember to release the camera to avoid OOM and keeping it occupied
      if (camera != null) {
        camera.setPreviewCallback(null);
        camera.stopPreview();
        camera.release();
        camera = null;
      }
    }

  }

  private class MyPreviewCallback implements PreviewCallback {

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
      // Note that the callback data is not an RGB image but a YUV (NV21)
      // image, so it must be converted to a bitmap before face detection.
      // Also note that detection only works on RGB_565 bitmaps; other
      // formats are invalid.
      Camera.Size size = camera.getParameters().getPreviewSize();
      YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21,
          size.width, size.height, null);
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      yuvImage.compressToJpeg(new Rect(0, 0, size.width, size.height),
          80, baos);
      byte[] byteArray = baos.toByteArray();
      detectionFaces(byteArray);
    }
  }

  /**
   * Detect faces
   *
   * @param data JPEG-compressed preview frame
   */
  private void detectionFaces(byte[] data) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    Bitmap bitmap1 = BitmapFactory.decodeByteArray(data, 0, data.length,
        options);
    int width = bitmap1.getWidth();
    int height = bitmap1.getHeight();
    Matrix matrix = new Matrix();
    Bitmap bitmap2 = null;
    FaceDetector detector = null;
    // Rotate according to the camera mounting angle so detection works best
    switch (orientionOfCamera) {
      case 0:
        // Initialize the face detector (same below)
        detector = new FaceDetector(width, height, 10);
        matrix.postRotate(0.0f, width / 2, height / 2);
        // Create a mutable bitmap with the given size (the format must be
        // RGB_565, otherwise no face will be detected)
        bitmap2 = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
        break;
      case 90:
        detector = new FaceDetector(height, width, 10);
        matrix.postRotate(-270.0f, height / 2, width / 2);
        bitmap2 = Bitmap.createBitmap(height, width, Bitmap.Config.RGB_565);
        break;
      case 180:
        detector = new FaceDetector(width, height, 10);
        matrix.postRotate(-180.0f, width / 2, height / 2);
        bitmap2 = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
        break;
      case 270:
        detector = new FaceDetector(height, width, 10);
        matrix.postRotate(-90.0f, height / 2, width / 2);
        bitmap2 = Bitmap.createBitmap(height, width, Bitmap.Config.RGB_565);
        break;
    }
    // Maximum number of faces to detect; adjust as needed, but the array must
    // be at least as long as the maxFaces passed to the FaceDetector above,
    // otherwise findFaces throws an exception
    faces = new FaceDetector.Face[10];
    Paint paint = new Paint();
    paint.setAntiAlias(true);
    Canvas canvas = new Canvas();
    canvas.setBitmap(bitmap2);
    canvas.setMatrix(matrix);
    // Draw bitmap1 onto bitmap2 (the offsets here may need adjusting to your
    // actual situation)
    canvas.drawBitmap(bitmap1, 0, 0, paint);
    // Pass the converted RGB_565 bitmap and the faces array to findFaces,
    // which returns the number of faces found
    faceNumber = detector.findFaces(bitmap2, faces);
    Log.i("FaceDemo", "faceNumber----" + faceNumber);
    mTV.setText("faces: " + faceNumber);
    // Show the overlay and draw the detected face regions
    if (faceNumber != 0) {
      mFindFaceView.setVisibility(View.VISIBLE);
      mFindFaceView.drawRect(faces, faceNumber);
    } else {
      mFindFaceView.setVisibility(View.INVISIBLE);
    }
    bitmap1.recycle();
    bitmap2.recycle();
  }

  /**
   * Set the display orientation of the camera (it must be set this way,
   * otherwise faces will not be detected)
   *
   * @param cameraId camera id (0 is the rear camera, 1 is the front camera)
   * @param camera the Camera object
   */
  private void setCameraDisplayOrientation(int cameraId, Camera camera) {
    Camera.CameraInfo info = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, info);
    int rotation = getWindowManager().getDefaultDisplay().getRotation();
    int degree = 0;
    switch (rotation) {
      case Surface.ROTATION_0:
        degree = 0;
        break;
      case Surface.ROTATION_90:
        degree = 90;
        break;
      case Surface.ROTATION_180:
        degree = 180;
        break;
      case Surface.ROTATION_270:
        degree = 270;
        break;
    }

    orientionOfCamera = info.orientation;
    int result;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
      result = (info.orientation + degree) % 360;
      result = (360 - result) % 360; // compensate for the mirror
    } else {
      result = (info.orientation - degree + 360) % 360;
    }
    camera.setDisplayOrientation(result);
  }

  @Override
  public void onClick(View v) {
    switch (v.getId()) {
      case R.id.bt_camera:
        if (camera != null) {
          try {
            camera.takePicture(null, null, new MyPictureCallback());
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
        break;
    }
  }
}
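The orientation arithmetic in setCameraDisplayOrientation follows the formula documented for Camera.setDisplayOrientation, and it is pure integer math, so it can be verified in isolation. A minimal sketch (the helper class name is mine):

```java
// Computes the clockwise rotation to pass to Camera.setDisplayOrientation,
// following the formula from the Android SDK documentation.
public class OrientationHelper {

  static int displayOrientation(int cameraOrientation, int displayDegrees,
                                boolean frontFacing) {
    if (frontFacing) {
      int result = (cameraOrientation + displayDegrees) % 360;
      return (360 - result) % 360; // compensate for the mirror
    }
    return (cameraOrientation - displayDegrees + 360) % 360;
  }

  public static void main(String[] args) {
    // Typical front camera mounted at 270 degrees, portrait display (0):
    System.out.println(displayOrientation(270, 0, true));  // -> 90
    // Typical rear camera mounted at 90 degrees, portrait display (0):
    System.out.println(displayOrientation(90, 0, false));  // -> 90
  }
}
```

Both typical cases come out to 90 degrees, which is why portrait camera demos so often hard-code a 90-degree rotation and then break in landscape.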

At this point, our dynamic face detection is complete.

If you want to go deeper into face recognition, a good next step is to learn OpenCV.

That is all the content of this article. I hope it is helpful to your study, and I hope you will continue to support us.