This project grew out of my curriculum as part of USC's Center for AI in Society student branch (CAIS++). The goal was to build a CNN that could accurately identify and read emotions from an image of a face, since facial emotion recognition is an important task in the mental health and education fields. I aimed for a high-accuracy CNN so that facial expression recognition could become a more reliable and simpler process.
Key skills:
• Convolutional Neural Networks
• PyTorch and Python programming
• Data Analysis & Preprocessing
After completing my first semester in CAIS++, I was determined to showcase the expertise I had gained through the curriculum. As with any machine learning project, I knew my first step would be finding good data. I explored what Kaggle had to offer, and the dataset that stood out most was the facial expression recognition dataset. It stood out for two reasons: solving it would require a CNN, which I was excited to try, and no one had yet achieved high accuracy on it.

As I iterated on the CNN's architecture and training (a sketch of the kind of setup involved follows below), I realized that limitations in the dataset itself were holding accuracy back. Despite those challenges, I ended up producing an accuracy score of 70%. The process taught me the importance of data selection: mapping out the distribution of emotion classes showed that some emotions had little or no data representing them, and those were the same emotions the model consistently performed poorly on.
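To give a sense of the setup, here is a minimal sketch, assuming the Kaggle data is arranged in per-emotion folders of 48×48 grayscale face images (a common layout for this dataset). The folder path, transforms, and layer sizes are illustrative assumptions rather than my exact configuration.

```python
from collections import Counter

import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Assumed layout (hypothetical path): data/train/<emotion_name>/*.png
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((48, 48)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)

# Map out the class distribution -- this is the step that exposes the imbalance.
counts = Counter(train_set.targets)
for idx, name in enumerate(train_set.classes):
    print(f"{name:12s} {counts.get(idx, 0)} images")

# A small CNN for the emotion classes (illustrative architecture).
class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = EmotionCNN(num_classes=len(train_set.classes))
```

A per-class count like the one above makes the imbalance visible before any training happens, which is exactly the lesson about data selection that this project reinforced.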