Hello wonderful people, this is my first post here. Let me begin by thanking everyone for all the support this forum has provided.
I recently built an emotion recognition system that detects the emotions of people from a live camera feed. I used a Raspberry Pi for this, together with a pre-trained model that recognizes a person's facial expression from a real-time video stream. The model was trained on the “FER2013” dataset using a VGG-like Convolutional Neural Network (CNN).
To implement expression recognition on the Raspberry Pi, we follow the three steps below.
Step-1: Detect the faces in the input video stream.
Step-2: Find the Region of Interest (ROI) of the faces.
Step-3: Apply the Facial Expression Recognition model to predict the expression of the person.
We are using six classes here: ‘Angry’, ‘Fear’, ‘Happy’, ‘Neutral’, ‘Sad’, and ‘Surprise’. So every prediction falls into one of these classes.
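The three steps can be sketched roughly as below. This is a minimal sketch, not my exact code: it assumes OpenCV's bundled Haar cascade for face detection and a hypothetical Keras model file `fer_model.h5` that takes 48x48 grayscale input and outputs six softmax probabilities in the class order listed above.

```python
# Minimal sketch of the three-step pipeline.
# Assumptions (not from the original post): Haar cascade face detector,
# a Keras model saved as "fer_model.h5" with 48x48 grayscale input.
import numpy as np

EMOTIONS = ["Angry", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def predict_label(probs):
    """Map the model's softmax output to one of the six class names."""
    return EMOTIONS[int(np.argmax(probs))]

def main():
    import cv2
    from tensorflow.keras.models import load_model

    model = load_model("fer_model.h5")  # hypothetical filename
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # Pi camera / USB webcam

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Step 1: detect the faces in the input video stream
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                              minNeighbors=5)
        for (x, y, w, h) in faces:
            # Step 2: crop the face ROI and resize to the model's input size
            roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
            roi = roi.reshape(1, 48, 48, 1)
            # Step 3: predict the expression and draw it on the frame
            label = predict_label(model.predict(roi, verbose=0)[0])
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        cv2.imshow("Emotion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

On a Pi, the `model.predict` call per face is the bottleneck, so it helps to process only the largest detected face per frame.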
I have already documented my learnings in the article linked below, so that others don't have to go through the problems I faced.
Emotion Recognition on Raspberry Pi using TensorFlow
Enjoy, and do let me know your feedback. Thanks!