Face Recognition with Blynk


Setting up a simple app on a phone that shows an alert message when a face is recognised using the ESP-WHO library.

Blynk is a cloud platform and mobile phone app that allows you to receive messages from IoT devices and microcontrollers and also control these devices. There’s a library for the Arduino IDE and it works with ESP devices. In this project I’m using an ESP-EYE camera board but any ESP32 and camera combination could be used.

A system like this could be used to replace a doorbell or as part of an access control system where only certain people have access.

Before following this tutorial you will need to have set up the ESP32 and camera as described here: https://robotzero.one/esp-who-recognition-with-names/ and to have captured some faces to be recognised.

Blynk Set-up

First create a (free) account with Blynk by downloading the app from Google Play or the Apple App Store (https://blynk.io/en/getting-started). After downloading the app and creating an account, follow the instructions below.

Click New Project
Name the project and choose ESP32 and Wi-Fi options
An Auth Token will be emailed to you
You now have an empty project. Click the ‘plus’ symbol to open the widget box
Choose Value Display, Video Streaming, Notification and Eventor
The project will look something like this

In the finished application the Eventor widget will be listening for a message from the ESP32. The Notification widget will sound an alarm when a message is received and the Value Display widget will show the name of the person recognised in the message. The Video Streaming widget will show the stream from the camera.
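
The glue between the Sketch and these widgets is a single virtual pin write. As a minimal sketch (using placeholder credentials and sending a test name rather than a recognised one), writing a value to virtual pin V0 is all that is needed for the Value Display widget to update and for the Eventor widget to fire its notification:

#define BLYNK_PRINT Serial

#include <WiFi.h>
#include <WiFiClient.h>
#include <BlynkSimpleEsp32.h>

char auth[] = "enter auth code from email here";
char ssid[] = "your SSID";
char pass[] = "your Wi-Fi password";

unsigned long last_test_millis = 0;

void setup()
{
  Serial.begin(115200);
  Blynk.begin(auth, ssid, pass);
}

void loop()
{
  Blynk.run();
  // Every 20 seconds send a test name to virtual pin V0. The Value Display
  // widget shows it and the Eventor widget reacts to it with a notification.
  if (millis() - last_test_millis > 20000) {
    Blynk.virtualWrite(V0, "Test Name");
    last_test_millis = millis();
  }
}

The full Sketch below does exactly this from inside the recognition loop, sending the recognised person's name instead of a test value.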

Touch each of the widgets in the new project and configure as below:

Eventor Settings (using Send Notifications)
Notification Settings
Value Display Settings (using Virtual Pin V0)
Video Streaming Settings (see the video streaming note below)

For the video stream to appear in the app you need to set up port forwarding on your router, forwarding to the port set in the Sketch for the stream (8087 below). It ‘should’ work but I can’t test it as I have a 4G-based internet connection which doesn’t allow port forwarding.
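
As a rough guide, the URL entered in the Video Streaming widget would look something like one of these, using the port and path set in the Sketch (the ESP32's local IP can be found from your router's client list or by printing WiFi.localIP() in setup()):

http://your-public-ip:8087/stream (from outside your network, via the port forwarding rule)
http://esp32-local-ip:8087/stream (from a phone on the same Wi-Fi network)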

Arduino Set-up

First install the Blynk library for the Arduino IDE via Tools > Manage Libraries, searching for ‘Blynk’.

Copy and paste the Sketch below and save it as a new Sketch. Then add these two files to the folder where the Sketch has been saved: camera_index.h and camera_pins.h. camera_index.h contains the HTML for the web interface and camera_pins.h contains the camera pin definitions.

Change the SSID, Wi-Fi password and the auth code (from the email) to your own values and upload to the ESP32.

#define BLYNK_PRINT Serial

#include "esp_http_server.h"
#include "esp_camera.h"
#include "camera_index.h"
#include "Arduino.h"
#include "fd_forward.h"
#include "fr_forward.h"
#include "fr_flash.h"

#include <WiFi.h>
#include <WiFiClient.h>
#include <BlynkSimpleEsp32.h>

char auth[] = "enter auth code from email here";
char ssid[] = "NSA";
char pass[] = "orange";

#define ENROLL_CONFIRM_TIMES 5
#define FACE_ID_SAVE_NUMBER 7

// Select camera model by uncommenting the line that matches your board
//#define CAMERA_MODEL_WROVER_KIT
#define CAMERA_MODEL_ESP_EYE
//#define CAMERA_MODEL_M5STACK_PSRAM
//#define CAMERA_MODEL_M5STACK_WIDE
//#define CAMERA_MODEL_AI_THINKER
#include "camera_pins.h"

camera_fb_t * fb = NULL;

long recognise_interval = 20000; // 20 secs gap between recognitions
long last_recognised_millis = 0;

void app_facenet_main();

typedef struct
{
  uint8_t *image;
  box_array_t *net_boxes;
  dl_matrix3d_t *face_id;
} http_img_process_result;


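// Face detection (MTMN) parameters: minimum face size in pixels, image pyramid
// scaling, and the score / non-maximum-suppression thresholds for each stage
// of the detection network.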
static inline mtmn_config_t app_mtmn_config()
{
  mtmn_config_t mtmn_config = {0};
  mtmn_config.type = FAST;
  mtmn_config.min_face = 80;
  mtmn_config.pyramid = 0.707;
  mtmn_config.pyramid_times = 4;
  mtmn_config.p_threshold.score = 0.6;
  mtmn_config.p_threshold.nms = 0.7;
  mtmn_config.p_threshold.candidate_number = 20;
  mtmn_config.r_threshold.score = 0.7;
  mtmn_config.r_threshold.nms = 0.7;
  mtmn_config.r_threshold.candidate_number = 10;
  mtmn_config.o_threshold.score = 0.7;
  mtmn_config.o_threshold.nms = 0.7;
  mtmn_config.o_threshold.candidate_number = 1;
  return mtmn_config;
}
mtmn_config_t mtmn_config = app_mtmn_config();

face_id_name_list st_face_list;
static dl_matrix3du_t *aligned_face = NULL;
static dl_matrix3du_t *image_matrix = NULL;


typedef struct
{
  char enroll_name[ENROLL_NAME_LEN];
} httpd_resp_value;

httpd_resp_value st_name;
http_img_process_result out_res = {0};


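// Boundary and headers for the multipart MJPEG stream served by the handler below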
#define PART_BOUNDARY "123456789000000000000987654321"
static const char* _STREAM_CONTENT_TYPE = "multipart/x-mixed-replace;boundary=" PART_BOUNDARY;
static const char* _STREAM_BOUNDARY = "\r\n--" PART_BOUNDARY "\r\n";
static const char* _STREAM_PART = "Content-Type: image/jpeg\r\nContent-Length: %u\r\n\r\n";
char part_buf[64];
httpd_handle_t stream_httpd = NULL;
esp_err_t res = ESP_OK;

void setup()
{
  Serial.begin(115200);
  Serial.setDebugOutput(true);
  Serial.println();

  camera_config_t config;
  config.ledc_channel = LEDC_CHANNEL_0;
  config.ledc_timer = LEDC_TIMER_0;
  config.pin_d0 = Y2_GPIO_NUM;
  config.pin_d1 = Y3_GPIO_NUM;
  config.pin_d2 = Y4_GPIO_NUM;
  config.pin_d3 = Y5_GPIO_NUM;
  config.pin_d4 = Y6_GPIO_NUM;
  config.pin_d5 = Y7_GPIO_NUM;
  config.pin_d6 = Y8_GPIO_NUM;
  config.pin_d7 = Y9_GPIO_NUM;
  config.pin_xclk = XCLK_GPIO_NUM;
  config.pin_pclk = PCLK_GPIO_NUM;
  config.pin_vsync = VSYNC_GPIO_NUM;
  config.pin_href = HREF_GPIO_NUM;
  config.pin_sscb_sda = SIOD_GPIO_NUM;
  config.pin_sscb_scl = SIOC_GPIO_NUM;
  config.pin_pwdn = PWDN_GPIO_NUM;
  config.pin_reset = RESET_GPIO_NUM;
  config.xclk_freq_hz = 20000000;
  config.pixel_format = PIXFORMAT_JPEG;
  //init with high specs to pre-allocate larger buffers
  if (psramFound()) {
    config.frame_size = FRAMESIZE_UXGA;
    config.jpeg_quality = 10;
    config.fb_count = 2;
  } else {
    config.frame_size = FRAMESIZE_SVGA;
    config.jpeg_quality = 12;
    config.fb_count = 1;
  }

#if defined(CAMERA_MODEL_ESP_EYE)
  pinMode(13, INPUT_PULLUP);
  pinMode(14, INPUT_PULLUP);
#endif

  // camera init
  esp_err_t err = esp_camera_init(&config);
  if (err != ESP_OK) {
    Serial.printf("Camera init failed with error 0x%x", err);
    return;
  }

  sensor_t * s = esp_camera_sensor_get();
  s->set_framesize(s, FRAMESIZE_QVGA);

#if defined(CAMERA_MODEL_M5STACK_WIDE)
  s->set_vflip(s, 1);
  s->set_hmirror(s, 1);
#endif

  app_facenet_main();
  Blynk.begin(auth, ssid, pass);
  startCameraServer();
}

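// Allocate the working buffers for face alignment and the RGB888 copy of each
// frame, and load any previously enrolled face IDs (with names) from flash.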
void app_facenet_main()
{
  face_id_name_init(&st_face_list, FACE_ID_SAVE_NUMBER, ENROLL_CONFIRM_TIMES);
  aligned_face = dl_matrix3du_alloc(1, FACE_WIDTH, FACE_HEIGHT, 3);
  read_face_id_from_flash_with_name(&st_face_list);

  image_matrix = dl_matrix3du_alloc(1, 320, 240, 3);
  out_res.image = image_matrix->item;
}

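// HTTP handler for /stream: grabs frames from the camera, runs face detection
// and recognition on each one, pushes a recognised name to Blynk on virtual
// pin V0 (at most once every recognise_interval) and streams the JPEG frames
// back to the client as a multipart response.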
static esp_err_t process_camera_feed(httpd_req_t *req)
{

  res = httpd_resp_set_type(req, _STREAM_CONTENT_TYPE);
  while (true) {
    fb = esp_camera_fb_get();
    if (!fb) {
      Serial.println("Camera capture failed");
      continue;
    }
    out_res.net_boxes = NULL;
    out_res.face_id = NULL;

    // convert the JPEG frame to RGB888 for the face detection network
    fmt2rgb888(fb->buf, fb->len, fb->format, out_res.image);

    out_res.net_boxes = face_detect(image_matrix, &mtmn_config);

    if (out_res.net_boxes)
    {
      if (align_face(out_res.net_boxes, image_matrix, aligned_face) == ESP_OK)
      {

        out_res.face_id = get_face_id(aligned_face);

        if (st_face_list.count > 0 && millis() - last_recognised_millis > recognise_interval)
        {
          face_id_node *f = recognize_face_with_name(&st_face_list, out_res.face_id);
          if (f)
          {
            char recognised_message[64];
            sprintf(recognised_message, "RECOGNISED %s", f->id_name);
            Serial.println(recognised_message);
            // the Value Display and Eventor widgets in the app are watching V0
            Blynk.virtualWrite(V0, f->id_name);
            last_recognised_millis = millis();
          }
          else
          {
            Serial.println("Unknown Face");
          }
        }
        // free the face descriptor whether or not a recognition was attempted
        dl_matrix3d_free(out_res.face_id);

      }

    }

    if (res == ESP_OK) {
      size_t hlen = snprintf((char *)part_buf, 64, _STREAM_PART, fb->len);
      res = httpd_resp_send_chunk(req, (const char *)part_buf, hlen);
    }
    if (res == ESP_OK) {
      res = httpd_resp_send_chunk(req, (const char *)fb->buf, fb->len);
    }
    if (res == ESP_OK) {
      res = httpd_resp_send_chunk(req, _STREAM_BOUNDARY, strlen(_STREAM_BOUNDARY));
    }

    esp_camera_fb_return(fb);
    fb = NULL;

    if (res != ESP_OK) {
      break; // stop streaming if the client has disconnected
    }
  }
  return res;
}

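// Start the HTTP server on port 8087 and register the /stream handler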
void startCameraServer()
{
  httpd_config_t config = HTTPD_DEFAULT_CONFIG();

  httpd_uri_t stream_uri = {
    .uri       = "/stream",
    .method    = HTTP_GET,
    .handler   = process_camera_feed,
    .user_ctx  = NULL
  };

  config.server_port = 8087;
  Serial.printf("Starting stream server on port: '%d'\n", config.server_port);
  if (httpd_start(&stream_httpd, &config) == ESP_OK) {
    httpd_register_uri_handler(stream_httpd, &stream_uri);
  }
}


void loop()
{
  Blynk.run();
}


Testing

In the Blynk app on your phone, press the play icon to start the application. Point the ESP camera at a previously saved face. You should hear the notification alarm and see a message. If the video is working you will also see a stream from the camera.
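
If you would rather not use the Eventor widget, the (legacy) Blynk library can also push the notification straight from the Sketch with Blynk.notify(). A minimal variation, assuming the Notification widget is still in the project, would be to send the message that process_camera_feed already builds alongside the virtual pin write:

if (f)
{
  char recognised_message[64];
  sprintf(recognised_message, "RECOGNISED %s", f->id_name);
  Serial.println(recognised_message);
  Blynk.virtualWrite(V0, f->id_name);
  Blynk.notify(recognised_message); // push the alert directly, no Eventor needed
  last_recognised_millis = millis();
}

The Blynk server rate-limits notifications, so keep the 20-second recognise_interval (or longer) to avoid messages being dropped.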

4 Replies to “Face Recognition with Blynk”

  1. Leon says:

    How to modify blynk server IP for my local server?

  2. Pchyy says:

    Hello, can I ask you something?
    With this code, should the IP address that I get bring me to the same webpage as in this topic: https://robotzero.one/esp-who-recognition-with-names/ ?
    Is STA the video streaming setting? I used that one but it said no stream was available.
    Thank you very much.

    1. WordBot says:

      Hi, I’m not sure what you mean but in this sketch the IP address is the ESP32 itself. STA is STAtion mode for the Wi-Fi.
