React + TypeScript: Face detection with Tensorflow

Yuiko Koyanagi - May 25 '21 - Dev Community

Hello guys,

I have developed a face detection application that automatically applies a mask to your face in real time.


In this article, I will explain how to develop this application.

DEMO→https://mask-app-one.vercel.app/
GitHub→https://github.com/YuikoIto/mask-app

This application has no loading animation, so you may have to wait a few seconds on the first load.

Set up a React application and install react-webcam

$ npx create-react-app face-mask-app --template typescript
$ yarn add react-webcam @types/react-webcam

Then, try setting up the web camera.

// App.tsx

import { useRef } from "react";
import "./App.css";
import Webcam from "react-webcam";

const App = () => {
  const webcam = useRef<Webcam>(null);

  return (
    <div className="App">
      <header className="header">
        <div className="title">face mask App</div>
      </header>
      <Webcam
        audio={false}
        ref={webcam}
        style={{
          position: "absolute",
          margin: "auto",
          textAlign: "center",
          top: 100,
          left: 0,
          right: 0,
        }}
      />
    </div>
  );
};

export default App;
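If you want to control the resolution or which camera is used, react-webcam also accepts a videoConstraints prop. Here is a minimal sketch (the specific values are just examples):

// Optional: constrain the webcam stream (values are illustrative)
const videoConstraints = {
  width: 640,
  height: 480,
  facingMode: "user", // front-facing camera on mobile devices
};

// <Webcam audio={false} ref={webcam} videoConstraints={videoConstraints} />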

Run yarn start and access http://localhost:3000/.


Yay! The web camera is now available.

Try face detection using TensorFlow

Here, we are using this model: https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection

$ yarn add @tensorflow-models/face-landmarks-detection @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow/tfjs-backend-webgl
  • If you don't use TypeScript, you don't need to install all of them: install @tensorflow/tfjs instead of @tensorflow/tfjs-core, @tensorflow/tfjs-converter, and @tensorflow/tfjs-backend-webgl, as sketched below.
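With the all-in-one package, a single side-effect import replaces the three separate ones. A minimal sketch:

// Alternative setup with the all-in-one package
import "@tensorflow/tfjs"; // bundles tfjs-core, tfjs-converter, and the WebGL backend
import * as faceLandmarksDetection from "@tensorflow-models/face-landmarks-detection";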

Library versions:

    "@tensorflow-models/face-landmarks-detection": "^0.0.3",
    "@tensorflow/tfjs-backend-webgl": "^3.6.0",
    "@tensorflow/tfjs-converter": "^3.6.0",
    "@tensorflow/tfjs-core": "^3.6.0",
// App.tsx

import { useEffect, useRef } from "react"; // useEffect is needed now as well
import "@tensorflow/tfjs-core";
import "@tensorflow/tfjs-converter";
import "@tensorflow/tfjs-backend-webgl";
import * as faceLandmarksDetection from "@tensorflow-models/face-landmarks-detection";
import { MediaPipeFaceMesh } from "@tensorflow-models/face-landmarks-detection/dist/types";

const App = () => {
  const webcam = useRef<Webcam>(null);

  const runFaceDetect = async () => {
    const model = await faceLandmarksDetection.load(
      faceLandmarksDetection.SupportedPackages.mediapipeFacemesh
    );
  /*
     Please check your library version.
     The new version's API is a bit different from the previous one.
     In the new version, you should write it as follows.
     See https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection for more information.
     const model = faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh;
      const detectorConfig = {
        runtime: 'mediapipe', // or 'tfjs'
        solutionPath: 'https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh',
      }
      const detector = await faceLandmarksDetection.createDetector(model, detectorConfig);
    */
    detect(model);
  };

  const detect = async (model: MediaPipeFaceMesh) => {
    if (webcam.current) {
      const webcamCurrent = webcam.current as any;
      // proceed to the next step only when the video is fully loaded.
      if (webcamCurrent.video.readyState === 4) {
        const video = webcamCurrent.video;
        const predictions = await model.estimateFaces({
          input: video,
        });
        if (predictions.length) {
          console.log(predictions);
        }
      }
    }
  };

  useEffect(() => {
    runFaceDetect();
  // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [webcam.current?.video?.readyState]);

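As the comment in the code notes, the newer face-landmarks-detection API (v1+) loads the model through createDetector instead of load. A minimal sketch assuming the tfjs runtime; note that v1+ returns keypoints as { x, y, z } objects instead of [x, y, z] arrays, so the drawing code later in this article would also need adjusting:

// Newer API (v1+): a rough sketch, not used in the rest of this article
const runNewApi = async (video: HTMLVideoElement) => {
  const model = faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh;
  const detector = await faceLandmarksDetection.createDetector(model, {
    runtime: "tfjs", // or "mediapipe" together with a solutionPath
  });
  const faces = await detector.estimateFaces(video);
  console.log(faces); // each face has keypoints like { x, y, z, name? }
};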

Check the logs.
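Each prediction should look roughly like the following (field names are from the v0.0.3 README; the numbers are illustrative):

// Rough shape of a single prediction (v0.x API)
// {
//   faceInViewConfidence: 1,                     // probability that a face is present
//   boundingBox: {
//     topLeft: [232.28, 145.26],
//     bottomRight: [449.75, 308.36],
//   },
//   mesh: [[92.07, 119.49, -17.54], ...],        // 468 3D facial keypoints
//   scaledMesh: [[322.32, 297.58, -17.54], ...], // keypoints scaled to the video size
//   annotations: { silhouette: [...], ... }      // keypoints grouped by face region
// }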


OK, seems good.

Set up a canvas to overlay the mask on your face

Add <canvas> under <Webcam>.

// App.tsx
const App = () => {
  const webcam = useRef<Webcam>(null);
  const canvas = useRef<HTMLCanvasElement>(null);

  return (
    <div className="App">
      <header className="header">
        <div className="title">face mask App</div>
      </header>
      <Webcam
        audio={false}
        ref={webcam}
      />
      <canvas
        ref={canvas}
      />
    </div>
  );
};

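For the mask to sit on top of the video, the canvas needs the same absolute positioning as the Webcam component. A minimal sketch that reuses the style from the first snippet (extracting it into a shared object is my own choice, not the original code):

// Shared style so <Webcam> and <canvas> overlap exactly
const style = {
  position: "absolute" as const,
  margin: "auto",
  textAlign: "center" as const,
  top: 100,
  left: 0,
  right: 0,
};

// <Webcam audio={false} ref={webcam} style={style} />
// <canvas ref={canvas} style={style} />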

Match the size of the canvas with the video (this goes inside the detect function, once the video is ready).

    const videoWidth = webcamCurrent.video.videoWidth;
    const videoHeight = webcamCurrent.video.videoHeight;
    canvas.current.width = videoWidth;
    canvas.current.height = videoHeight;

Then, let's look at the face mesh keypoint map (included in the face-landmarks-detection repository) and check which area we should fill in.

According to this map, point No. 195 is around the nose, so we set this point as the fulcrum. We can draw a mask easily by connecting points between beginPath() and closePath().

// mask.ts

import { AnnotatedPrediction } from "@tensorflow-models/face-landmarks-detection/dist/mediapipe-facemesh";
import {
  Coord2D,
  Coords3D,
} from "@tensorflow-models/face-landmarks-detection/dist/mediapipe-facemesh/util";

const drawMask = (
  ctx: CanvasRenderingContext2D,
  keypoints: Coords3D,
  distance: number
) => {
  const points = [
    93,
    132,
    58,
    172,
    136,
    150,
    149,
    176,
    148,
    152,
    377,
    400,
    378,
    379,
    365,
    397,
    288,
    361,
    323,
  ];

  ctx.moveTo(keypoints[195][0], keypoints[195][1]);
  for (let i = 0; i < points.length; i++) {
    if (i < points.length / 2) {
      ctx.lineTo(
        keypoints[points[i]][0] - distance,
        keypoints[points[i]][1] + distance
      );
    } else {
      ctx.lineTo(
        keypoints[points[i]][0] + distance,
        keypoints[points[i]][1] + distance
      );
    }
  }
};

export const draw = (
  predictions: AnnotatedPrediction[],
  ctx: CanvasRenderingContext2D,
  width: number,
  height: number
) => {
  if (predictions.length > 0) {
    predictions.forEach((prediction: AnnotatedPrediction) => {
      const keypoints = prediction.scaledMesh;
      const boundingBox = prediction.boundingBox;
      const bottomRight = boundingBox.bottomRight as Coord2D;
      const topLeft = boundingBox.topLeft as Coord2D;
      // enlarge the mask a bit, based on the bounding box diagonal
      const distance =
        Math.sqrt(
          Math.pow(bottomRight[0] - topLeft[0], 2) +
            Math.pow(bottomRight[1] - topLeft[1], 2)
        ) * 0.02;
      ctx.clearRect(0, 0, width, height);
      ctx.fillStyle = "black";
      ctx.save();
      ctx.beginPath();
      drawMask(ctx, keypoints as Coords3D, distance);
      ctx.closePath();
      ctx.fill();
      ctx.restore();
    });
  }
};


Import this draw function in App.tsx and use it.


    const ctx = canvas.current.getContext("2d") as CanvasRenderingContext2D;
    requestAnimationFrame(() => {
      draw(predictions, ctx, videoWidth, videoHeight);
    });
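Putting the pieces together, detect ends up sizing the canvas, estimating faces, and drawing. Here is a minimal sketch of the assembled function; the recursive call that keeps the mask tracking in real time is one possible looping strategy, not necessarily the exact original code:

// Assembled detect: size the canvas, estimate faces, draw, then repeat
const detect = async (model: MediaPipeFaceMesh) => {
  if (!webcam.current || !canvas.current) return;
  const webcamCurrent = webcam.current as any;
  if (webcamCurrent.video.readyState === 4) {
    const video = webcamCurrent.video;
    const videoWidth = video.videoWidth;
    const videoHeight = video.videoHeight;
    canvas.current.width = videoWidth;
    canvas.current.height = videoHeight;
    const predictions = await model.estimateFaces({ input: video });
    const ctx = canvas.current.getContext("2d") as CanvasRenderingContext2D;
    requestAnimationFrame(() => {
      draw(predictions, ctx, videoWidth, videoHeight);
    });
    detect(model); // keep detecting so the mask follows your face
  } else {
    setTimeout(() => detect(model), 100); // retry until the video is ready
  }
};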

Finished!

Thanks for reading.
This was my first time using TensorFlow, but thanks to the good README in the official GitHub repository, I was able to build a small application easily. I will develop more things using TensorFlow 🐣

🍎🍎🍎🍎🍎🍎

Please send me a message if you have any questions.

🍎🍎🍎🍎🍎🍎
