Create a facial recognition attendance app in React Native

Introduction

In this tutorial, we’ll look at how to implement an app that uses facial recognition to verify that a student has actually attended a class.

Facial recognition technology has many applications. On mobile, it’s mostly used for unlocking the phone or authorizing payments with a selfie.

Prerequisites

Basic knowledge of React Native is required to follow this tutorial.

This tutorial also assumes you have prior experience working with Bluetooth peripherals from a React Native app. If you’re new to it, be sure to check out my tutorial on creating a realtime attendance app with React Native and BLE. Otherwise, you can replace the BLE integration with something like geolocation, or skip it entirely, since it’s only used to determine whether the user is physically present in a specific place.
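
For reference, here’s a rough sketch of what a geolocation-based presence check could look like. The classroom coordinates, the allowed radius, and the checkPresence helper are all made up for illustration; navigator.geolocation is available out of the box in React Native 0.59 (you’ll still need the location permission on Android):

    // A rough sketch of a geolocation-based presence check (all values are hypothetical).
    const CLASSROOM = { latitude: 14.5995, longitude: 120.9842 }; // hypothetical classroom coordinates
    const MAX_DISTANCE_METERS = 30; // hypothetical allowed radius

    // Haversine distance between two coordinates, in meters
    const distanceInMeters = (a, b) => {
      const toRad = (deg) => (deg * Math.PI) / 180;
      const R = 6371000; // earth radius in meters
      const dLat = toRad(b.latitude - a.latitude);
      const dLon = toRad(b.longitude - a.longitude);
      const h =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(a.latitude)) * Math.cos(toRad(b.latitude)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(h));
    };

    const checkPresence = (onPresent, onAbsent) => {
      navigator.geolocation.getCurrentPosition(
        (position) => {
          const distance = distanceInMeters(position.coords, CLASSROOM);
          if (distance <= MAX_DISTANCE_METERS) {
            onPresent();
          } else {
            onAbsent();
          }
        },
        (err) => console.log('geolocation error: ', err),
        { enableHighAccuracy: true, timeout: 15000 }
      );
    };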

The following versions will be used in this tutorial. If you encounter any issues, be sure to try switching to those versions:

  • Node 9.0.0 - required by the BLE peripheral.
  • Node 11.2.0 - used by React Native CLI.
  • Yarn 1.13.0 - used for installing React Native modules and server modules.
  • React Native CLI 2.0.1
  • React Native 0.59.9
  • React Native Camera 2.10.2

For implementing facial recognition, you’ll need a Microsoft Azure account. Simply search “Azure sign up” or go to this page to sign up.

Optionally, you’ll need the following if you want to integrate BLE:

  • BLE Peripheral - this can be any IoT device with Bluetooth, Wi-Fi, and Node.js support. For this tutorial, I’m using a Raspberry Pi 3 with Raspbian Stretch Lite installed.

App overview

We will be creating an attendance app with facial recognition features. It will have both server (NodeJS) and client-side (React Native) components.

The server is responsible for registering the faces with Microsoft Cognitive Services’ Face API as well as acting as a BLE peripheral. BLE integration is used to verify that the user is physically in the room; unlike a GPS location, physical proximity to the peripheral is much harder to spoof.

On the other hand, the app is responsible for the following:

  • Scanning and connecting to a BLE peripheral.
  • Asking for the user’s name.
  • Asking the user to take a selfie to check if their face is registered.

Here’s what the app will look like when you open it:

react-native-facial-recognition-img1

When you connect to a peripheral, it will ask for your full name:

react-native-facial-recognition-img2

After that, it will ask you to take a selfie. When you press the shutter button, the image is sent to Microsoft Cognitive Services to check if the face is similar to one that was previously registered. If it is, it responds with the following:

react-native-facial-recognition-img3

You can find the source code in this GitHub repo. The master branch contains all the latest code, and the starter branch contains the starter code for following this tutorial.

What is Cognitive Services?

Before we proceed, let's first quickly go over what Cognitive Services is. Cognitive Services is a collection of services that lets developers easily add machine learning features to their applications. These services are exposed via APIs grouped under the following categories:

  • Vision - for analyzing images and videos.
  • Speech - for converting speech to text and vice versa.
  • Language - for processing natural language.
  • Decision - for content moderation.
  • Search - for implementing search algorithms that are used on Bing.

Today we're only concerned with Vision, more specifically the Face API. This API is used for detecting faces in an image and finding similar faces.

Setting up Cognitive Services

In this section, we’ll be setting up Cognitive services in the Azure portal. This section assumes that you already have an Azure account.

First, go to the Azure portal and search for “Cognitive services”. Click on the first result under the Services section:

react-native-facial-recognition-img4

Once you’re there, click on the Add button. This will lead you to the page where you can search for the specific cognitive service you want to use:

react-native-facial-recognition-img5

Next, search for “face” and click on the first result:

react-native-facial-recognition-img6

On the page that follows, click on the Create button to add the service:

react-native-facial-recognition-img7

After that, it will ask for the details of the service you want to create. Enter the following details:

  • Name: attendance-app
  • Subscription: Pay-As-You-Go
  • Location: wherever the server nearest to you is
  • Pricing tier: F0 (this is within the free range so you won’t actually get charged)
  • Resource group: click on Create new
react-native-facial-recognition-img8

Enter the details of the resource group you want to add the service to. In this case, I simply put in the name then clicked OK:

react-native-facial-recognition-img9

Once the resource group is created, you can now add the cognitive service. Here’s what it looks like as it’s deploying:

react-native-facial-recognition-img10

Once it’s created, you’ll find it listed under the Cognitive Services:

react-native-facial-recognition-img11

If you click on it, you’ll see its overview page. Click on the Show access keys link to see the API keys that you can use to make requests to the API. At the bottom, you can also see the number of API calls that you have made and the total allotted to the pricing tier you chose:

react-native-facial-recognition-img12

Bootstrapping the app

We will only be implementing the face recognition feature in this tutorial so I’ve prepared a starter project which you can clone and start with:

    git clone https://github.com/anchetaWern/RNFaceAttendance
    cd RNFaceAttendance
    git checkout starter
    yarn
    react-native eject
    react-native link react-native-ble-manager
    react-native link react-native-camera
    react-native link react-native-vector-icons
    react-native link react-native-exit-app

Do the same for the server as well:

    cd server
    yarn

Next, update the android/app/build.gradle file and add the missingDimensionStrategy. This is necessary for React Native Camera to work:

    android {
      compileSdkVersion rootProject.ext.compileSdkVersion

      compileOptions {
        // ...
      }

      defaultConfig {
        applicationId "com.rnfaceattendance"
        minSdkVersion rootProject.ext.minSdkVersion
        targetSdkVersion rootProject.ext.targetSdkVersion
        versionCode 1
        versionName "1.0"
        missingDimensionStrategy 'react-native-camera', 'general' // add this
      }
    }

The starter project already includes the code for implementing the BLE peripheral and connecting to it.

Building the app

Now we’re ready to start building the app. We’ll start with the server component.

Server

The server is where we will add the code for registering the faces. We will create an Express server so we can simply access different routes to perform different actions. Start by importing all the modules we need:

    // server/server.js
    const express = require("express");
    const axios = require("axios");
    const bodyParser = require("body-parser");
    const fs = require("fs");

    const app = express();
    app.use(bodyParser.urlencoded({ extended: true }));
    app.use(bodyParser.json());

Next, create the base options object to be used for initializing axios instances. We will use this later on to make requests to the API. You need to supply a different URL based on your location. You can find the list of locations here. The API key (Ocp-Apim-Subscription-Key) is passed as a header value along with the Content-Type:

    const loc = 'southeastasia.api.cognitive.microsoft.com'; // replace with the server nearest to you
    const key = 'YOUR COGNITIVE SERVICES API KEY';
    const facelist_id = 'class-3e-facelist'; // the ID of the face list we'll be working with

    const base_instance_options = {
      baseURL: `https://${loc}/face/v1.0`,
      timeout: 1000,
      headers: {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': key
      }
    };

Next, add the route for creating a face list. This requires you to pass in the unique ID of the face list as a route segment. In this case, we’re setting it as class-3e-facelist. To describe the face list further, we’re also passing in the name:

    app.get("/create-facelist", async (req, res) => {
      try {
        const instance = axios.create(base_instance_options); // create an axios instance from the base options
        const response = await instance.put(
          `/facelists/${facelist_id}`,
          {
            name: "Classroom 3-E Facelist"
          }
        );

        console.log("created facelist: ", response.data);
        res.send('ok');

      } catch (err) {
        console.log("error creating facelist: ", err);
        res.send('not ok');
      }
    });

Once the face list is created, we can now proceed to adding faces to it. This time, the Content-Type should be application/octet-stream as opposed to application/json. This is because the specific API endpoint that we’re using requires a file to be passed in the request body:

    app.get("/add-face", async (req, res) => {
      try {
        // copy the base options but override the Content-Type, since this endpoint expects the raw image bytes
        const instance_options = {
          ...base_instance_options,
          headers: { ...base_instance_options.headers, 'Content-Type': 'application/octet-stream' }
        };
        const instance = axios.create(instance_options);

        const MY_FILE_PATH = './path/to/selfie.png';
        const file_contents = fs.readFileSync(MY_FILE_PATH); // read the contents of the file as a buffer

        const response = await instance.post(
          `/facelists/${facelist_id}/persistedFaces`,
          file_contents
        );

        console.log('added face: ', response.data);
        res.send('ok');

      } catch (err) {
        console.log("err: ", err);
        res.send('not ok');
      }
    });

The code above requires you to change the file name and refresh the page every time you register a new face. But you can also loop through the files in a specific directory and register them all in one go if you want. Just be aware that you might exceed the limits and get your requests throttled, since we selected the free tier earlier.
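
For example, here’s a rough sketch of a route that does that. The ./faces folder, the /add-faces route name, and the one-second pause between uploads are assumptions for illustration; the pause is just there to help stay under the free tier’s rate limit:

    // A sketch of registering every image in a local folder in one go (hypothetical route and folder).
    const path = require('path');

    app.get("/add-faces", async (req, res) => {
      try {
        // copy the base options but override the Content-Type, since this endpoint expects the raw image bytes
        const instance = axios.create({
          ...base_instance_options,
          headers: { ...base_instance_options.headers, 'Content-Type': 'application/octet-stream' }
        });

        const faces_dir = './faces'; // hypothetical folder containing one selfie per student
        const files = fs.readdirSync(faces_dir).filter(file => /\.(png|jpe?g)$/i.test(file));

        for (const file of files) {
          const file_contents = fs.readFileSync(path.join(faces_dir, file));
          const response = await instance.post(`/facelists/${facelist_id}/persistedFaces`, file_contents);
          console.log(`added ${file}: `, response.data);
          await new Promise(resolve => setTimeout(resolve, 1000)); // simple throttle between requests
        }

        res.send('ok');
      } catch (err) {
        console.log("err: ", err);
        res.send('not ok');
      }
    });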

Mobile app

Now we can proceed to coding the app. Start by importing the additional React Native modules that we need:

    // App.js
    import {
      Platform,
      StyleSheet,
      Text,
      View,
      SafeAreaView,
      PermissionsAndroid,
      NativeEventEmitter,
      NativeModules,
      Button,
      FlatList,
      Alert,
      ActivityIndicator,
      TouchableOpacity // add
    } from 'react-native';

    import { RNCamera } from 'react-native-camera'; // for taking selfies
    import base64ToArrayBuffer from 'base64-arraybuffer'; // for converting base64 images to an array buffer
    import MaterialIcons from 'react-native-vector-icons/MaterialIcons'; // for showing icons
    import axios from 'axios'; // for making requests to the Cognitive Services API

Next, add the default configuration for making requests with axios:

    const key = 'YOUR COGNITIVE SERVICES API KEY';
    const loc = 'southeastasia.api.cognitive.microsoft.com'; // replace with the server nearest to you

    const base_instance_options = {
      baseURL: `https://${loc}/face/v1.0`,
      timeout: 10000,
      headers: {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': key
      }
    };

Inside the component’s class definition, add the initial values for the camera’s visibility and the loading indicator:

    export default class App extends Component {

      state = {
        is_scanning: false,
        peripherals: null,
        connected_peripheral: null,
        user_id: '',
        fullname: '',

        // add these:
        show_camera: false,
        is_loading: false
      }

    }

When the user enters the room, that’s when we want to show the camera:

    enterRoom = (value) => {
      this.setState({
        user_id: RandomId(15),
        fullname: value,
        show_camera: true
      });
    }
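
The RandomId helper comes with the starter project and simply generates a random ID for the user. If you’re not using the starter, a minimal stand-in could look like this (a hypothetical implementation; the starter’s version may differ):

    // Hypothetical stand-in for the starter's RandomId helper: returns a random alphanumeric string.
    const RandomId = (length) =>
      Array.from({ length }, () =>
        'abcdefghijklmnopqrstuvwxyz0123456789'[Math.floor(Math.random() * 36)]
      ).join('');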

Next, update the render() method to look like the following:

    render() {
      const { connected_peripheral, is_scanning, peripherals, show_camera, is_loading } = this.state;

      return (
        <SafeAreaView style={{flex: 1}}>
          <View style={styles.container}>
            {
              !show_camera &&
              <View style={styles.header}>
                <View style={styles.app_title}>
                  <Text style={styles.header_text}>BLE Face Attendance</Text>
                </View>
                <View style={styles.header_button_container}>
                  {
                    !connected_peripheral &&
                    <Button
                      title="Scan"
                      color="#1491ee"
                      onPress={this.startScan} />
                  }
                </View>
              </View>
            }

            <View style={styles.body}>
              {
                !show_camera && is_scanning &&
                <ActivityIndicator size="large" color="#0000ff" />
              }

              {
                show_camera &&
                <View style={styles.camera_container}>
                  {
                    is_loading &&
                    <ActivityIndicator size="large" color="#0000ff" />
                  }

                  {
                    !is_loading &&
                    <View style={{flex: 1}}>
                      <RNCamera
                        ref={ref => {
                          this.camera = ref;
                        }}
                        style={styles.preview}
                        type={RNCamera.Constants.Type.front}
                        flashMode={RNCamera.Constants.FlashMode.on}
                        captureAudio={false}
                      />

                      <View style={styles.camer_button_container}>
                        <TouchableOpacity onPress={this.takePicture} style={styles.capture}>
                          <MaterialIcons name="camera" size={50} color="#e8e827" />
                        </TouchableOpacity>
                      </View>
                    </View>
                  }

                </View>
              }

              {
                !connected_peripheral && !show_camera &&
                <FlatList
                  data={peripherals}
                  keyExtractor={(item) => item.id.toString()}
                  renderItem={this.renderItem}
                />
              }

            </View>
          </View>
        </SafeAreaView>
      );
    }

In the code above, all we’re doing is adding the camera and selectively showing the different components based on whether the camera is visible. We only want to show the camera (and nothing else) if show_camera is true because it’s going to occupy the entire screen.

Let’s break down the code for the RNCamera a bit and then we’ll move on. First, we set this.camera to refer to this specific camera component, which allows us to use this.camera later on to perform different operations with the camera. The type is set to front because we’re primarily catering to users taking selfies for attendance. captureAudio is set to false because we don’t need to record audio (it defaults to true).

    <RNCamera
      ref={ref => {
        this.camera = ref;
      }}
      style={styles.preview}
      type={RNCamera.Constants.Type.front}
      flashMode={RNCamera.Constants.FlashMode.on}
      captureAudio={false}
    />

Next, let’s proceed to the code for taking pictures:

    takePicture = async () => {
      if (this.camera) { // check if the camera has been initialized
        this.setState({
          is_loading: true
        });

        const data = await this.camera.takePictureAsync({ quality: 0.25, base64: true });
        const selfie_ab = base64ToArrayBuffer.decode(data.base64);

        try {
          // copy the base options but override the Content-Type, since /detect expects the raw image bytes
          const facedetect_instance_options = {
            ...base_instance_options,
            headers: { ...base_instance_options.headers, 'Content-Type': 'application/octet-stream' }
          };
          const facedetect_instance = axios.create(facedetect_instance_options);

          const facedetect_res = await facedetect_instance.post(
            `/detect?returnFaceId=true&detectionModel=detection_02`,
            selfie_ab
          );

          console.log("face detect res: ", facedetect_res.data);

          if (facedetect_res.data.length) {

            const findsimilars_instance_options = {
              ...base_instance_options,
              headers: { ...base_instance_options.headers, 'Content-Type': 'application/json' }
            };
            const findsimilars_instance = axios.create(findsimilars_instance_options);
            const findsimilars_res = await findsimilars_instance.post(
              `/findsimilars`,
              {
                faceId: facedetect_res.data[0].faceId,
                faceListId: 'class-3e-facelist', // the ID of the face list we created on the server
                maxNumOfCandidatesReturned: 2,
                mode: 'matchPerson'
              }
            );

            console.log("find similars res: ", findsimilars_res.data);
            this.setState({
              is_loading: false
            });

            if (findsimilars_res.data.length) {
              Alert.alert("Found match!", "You've successfully attended!");
              this.attend();

            } else {
              Alert.alert("No match", "Sorry, you are not registered");
            }

          } else {
            this.setState({
              is_loading: false
            });
            Alert.alert("error", "Cannot find any face. Please make sure there is sufficient light when taking a selfie");
          }

        } catch (err) {
          console.log("err: ", err);
          this.setState({
            is_loading: false
          });
        }
      }
    }

Breaking down the code above, we first take a picture using this.camera.takePictureAsync(). This accepts an object containing the options for the picture to be taken. In this case, we’re setting the quality to 0.25 (25% of the maximum quality) so that the API won’t reject the image because of its size. Play with this value until the image passes the API’s size limit but still has enough quality for the API to recognize the faces clearly. base64 is set to true, which means data will contain the base64 representation of the image once the promise resolves. After that, we use the base64ToArrayBuffer library to convert the image to a format understandable by the API:

    const data = await this.camera.takePictureAsync({ quality: 0.25, base64: true });
    const selfie_ab = base64ToArrayBuffer.decode(data.base64);

Next, we make the request to the API. This is pretty much the same as what we did on the server earlier, only this time we’re sending it to the /detect endpoint. This endpoint detects faces in a picture and, depending on the detection model, can also return the position of the different face landmarks (eyes, nose, mouth).

We’re also passing in additional parameters: returnFaceId tells the API to return a unique ID for each detected face, while detectionModel is set to detection_02 because it’s better than the default option (detection_01) at detecting slightly side-facing and blurry faces. Do note that unlike the default option, this detection model won’t return the different landmarks (position of eyes, nose, mouth):

    const facedetect_instance_options = {
      ...base_instance_options,
      headers: { ...base_instance_options.headers, 'Content-Type': 'application/octet-stream' }
    };
    const facedetect_instance = axios.create(facedetect_instance_options);

    const facedetect_res = await facedetect_instance.post(
      `/detect?returnFaceId=true&detectionModel=detection_02`,
      selfie_ab
    );

If a face is detected, we make another request to the API. This time it’s for checking if the face detected earlier has a match within the face list we created on the server. Since we only need to send JSON data, the Content-Type is set to application/json. The endpoint is /findsimilars and it requires the faceId and faceListId to be passed in the request body. faceId is the unique ID assigned to the face detected earlier, and faceListId is the ID of the face list we created earlier on the server. maxNumOfCandidatesReturned and mode are optional:

    if (facedetect_res.data.length) {
      const findsimilars_instance_options = {
        ...base_instance_options,
        headers: { ...base_instance_options.headers, 'Content-Type': 'application/json' }
      };
      const findsimilars_instance = axios.create(findsimilars_instance_options);
      const findsimilars_res = await findsimilars_instance.post(
        `/findsimilars`,
        {
          faceId: facedetect_res.data[0].faceId,
          faceListId: 'class-3e-facelist', // the ID of the face list we created on the server
          maxNumOfCandidatesReturned: 2, // the maximum number of matches to return
          mode: 'matchPerson' // the default mode; only returns faces that likely belong to the same person, based on internal same-person thresholds
        }
      );

      // rest of the code..
    }

If the above request returns something, it means that the person who took the selfie has their face registered previously. Each match comes with a confidence level between 0 and 1: the higher the confidence, the more similar the faces are. There’s currently no way of specifying a threshold in the request itself (for example, only return matches with above an 80% confidence level), so we’re stuck with the defaults.
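
If you want a stricter cutoff, one option is to filter the matches yourself before treating the result as a hit. Here’s a small sketch, assuming each match in the response carries a confidence property; the 0.8 threshold is an arbitrary choice:

    // A sketch of an app-side confidence check (the 0.8 threshold is arbitrary).
    const MIN_CONFIDENCE = 0.8;
    const matches = findsimilars_res.data.filter(match => match.confidence >= MIN_CONFIDENCE);

    if (matches.length) {
      Alert.alert("Found match!", "You've successfully attended!");
      this.attend();
    } else {
      Alert.alert("No match", "Sorry, you are not registered");
    }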

Lastly, here are the additional styles for the camera component:

    camera_container: {
      flex: 1,
      flexDirection: 'column',
      backgroundColor: 'black'
    },
    preview: {
      flex: 1,
      justifyContent: 'flex-end',
      alignItems: 'center',
    },
    camer_button_container: {
      flex: 0,
      flexDirection: 'row',
      justifyContent: 'center',
      backgroundColor: '#333'
    }

Running the app

At this point you’re now ready to run the app:

    nodemon server/server.js
    react-native run-android
    react-native run-ios

Start by creating a face list (raspberrypi.local/create-facelist on mine), then add faces to it (raspberrypi.local/add-face). Once you’ve added the faces, you can now run the app and scan for peripherals. Connect to the peripheral that’s listed and it will ask you to enter your full name. After that, take a selfie and wait for the API to respond.

Conclusion

In this tutorial, you learned how to use Microsoft Cognitive Services to create an attendance app which uses facial recognition to identify people. Specifically, you learned how to use React Native Camera and convert its response to a format that can be understood by the API.

You can find the code in this GitHub repo.