😀Happiness detection using Keras😒

2021, Jan 02    

Hello folks! Are you happy, or are you not sure? Alright, let's build a model that will help you find out. Take a couple of selfies and keep them handy, because we'll be detecting whether you look happy in them.

Let's start with a basic overview of this tutorial before we dive deeper into the neural network. Most of us are at least somewhat familiar with Computer Vision; it is one of the most popular fields of machine learning, and libraries such as OpenCV apply its techniques. In this tutorial we'll work on a Computer Vision project of our own: a binary classification problem in which we build a model that detects whether the person in an image is smiling or not.
\(y = \begin{cases} 1 & \text{if smiling} \\ 0 & \text{if not smiling} \end{cases}\)

Implementation

This project is based on supervised learning, meaning our dataset is already labelled as smiling or not smiling. We'll be using 600 images for training and 150 images as the test set. Before we get our hands into the core part, let's first import some libraries and explore the data a little. After executing the code below, you'll be able to see how many examples we have for training and testing. Then we'll use the Keras library to build and compile our model. If you don't have much experience with Keras, you can refer to the references section for additional resources.
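
Below is a minimal sketch of the imports and data-loading step. The HDF5 file names (train_happy.h5, test_happy.h5) and the dataset keys are assumptions made for illustration, so adjust them to match your own data.

    # A minimal sketch of the imports and data-loading step.
    # NOTE: the HDF5 file names and dataset keys below are assumptions
    # made for illustration -- adjust them to match your own data.
    import numpy as np
    import h5py

    def load_dataset(train_path="train_happy.h5", test_path="test_happy.h5"):
        # Assumed layout: images under "train_set_x"/"test_set_x",
        # labels (1 = smiling, 0 = not smiling) under "train_set_y"/"test_set_y".
        with h5py.File(train_path, "r") as f:
            X_train = np.array(f["train_set_x"])
            Y_train = np.array(f["train_set_y"])
        with h5py.File(test_path, "r") as f:
            X_test = np.array(f["test_set_x"])
            Y_test = np.array(f["test_set_y"])
        # Scale pixel values to [0, 1] and reshape labels into column vectors.
        X_train, X_test = X_train / 255.0, X_test / 255.0
        Y_train, Y_test = Y_train.reshape((-1, 1)), Y_test.reshape((-1, 1))
        return X_train, Y_train, X_test, Y_test

    X_train, Y_train, X_test, Y_test = load_dataset()
    print("Number of training examples:", X_train.shape[0])   # 600
    print("Number of test examples:", X_test.shape[0])        # 150
    print("X_train shape:", X_train.shape)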
Note: TensorFlow is used as the backend here. To learn how to set it up as the backend, follow a Stack Overflow question here. The network architecture that we'll be building is as follows: \(Input\; layer \rightarrow Conv\; layer \rightarrow Batch\; Normalization \rightarrow ReLU\; Activation \rightarrow Max\; Pool \rightarrow Flatten \rightarrow Fully\; Connected\; layer\). Let's now construct the model based on this architecture (a sketch of the code appears after the list of steps below); you can replicate it and make changes out of curiosity to see how the results differ. Once the model is constructed, there are four steps to train and test it in Keras:

  1. Create the model by calling the function above
  2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
  3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
  4. Test the model on test data by calling model.evaluate(x = ..., y = ...)
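
As a sketch of step 1, here is one way to express the Conv → Batch Norm → ReLU → Max Pool → Flatten → Fully Connected architecture described above with the Keras functional API. The function name HappyModel, the filter count and the kernel sizes are illustrative choices, not values fixed by the tutorial.

    # A sketch of the Conv -> Batch Norm -> ReLU -> Max Pool -> Flatten -> Dense
    # architecture described above, using the Keras functional API.
    # The function name and the layer hyperparameters are illustrative choices.
    from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                         Activation, MaxPooling2D, Flatten, Dense)
    from tensorflow.keras.models import Model

    def HappyModel(input_shape):
        X_input = Input(input_shape)                        # e.g. (64, 64, 3)
        X = Conv2D(32, (7, 7), strides=(1, 1), padding='same', name='conv0')(X_input)
        X = BatchNormalization(axis=3, name='bn0')(X)       # normalise over the channel axis
        X = Activation('relu')(X)
        X = MaxPooling2D((2, 2), name='max_pool')(X)
        X = Flatten()(X)                                    # flatten the volume to a vector
        X = Dense(1, activation='sigmoid', name='fc')(X)    # one unit: P(smiling)
        return Model(inputs=X_input, outputs=X, name='HappyModel')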

If you are interested to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.

Here, by Create the model, we mean calling the function defined above and passing X_train.shape[1:] as the input shape. The second step is to compile the model with an optimizer, a loss function and a metric. For the optimizer you can try 'sgd', 'adam' or others (optimizer documentation). The appropriate loss function in this case is 'binary_crossentropy', since "happiness detection" is a binary classification problem.

Now it's time to train the model. For training, we simply pass the training set, the number of epochs and the batch size as parameters. An epoch is one complete pass of the entire training dataset through the learning algorithm, while the batch size is the number of training examples processed before the model's weights are updated. Executing the code below will train our model.

After training, we evaluate the model on the test data. Just as when training, we pass the test data (150 images) and the batch size as parameters to the evaluate method. You can then look at your model's accuracy and loss; if it is not very accurate, you can still improve it by making some changes. Let's see how.
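
Putting the four steps together, a sketch might look like the following; the optimizer, the number of epochs and the batch size are example values you can tune.

    # A sketch of the four steps; the optimizer, epochs and batch size
    # are example values, not the tutorial's exact settings.
    happyModel = HappyModel(X_train.shape[1:])              # step 1: create the model

    happyModel.compile(optimizer='adam',                    # step 2: compile
                       loss='binary_crossentropy',
                       metrics=['accuracy'])

    happyModel.fit(x=X_train, y=Y_train,                    # step 3: train
                   epochs=40, batch_size=16)

    loss, acc = happyModel.evaluate(x=X_test, y=Y_test,     # step 4: evaluate on the 150 test images
                                    batch_size=16)
    print("Test loss:", loss)
    print("Test accuracy:", acc)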

Tips for improving your model

If you have not yet achieved very good accuracy (>= 80%), here are some tips:

  1. Use blocks of \(CONV \rightarrow BATCHNORM \rightarrow RELU\) such as:
    • X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
    • X = BatchNormalization(axis = 3, name = 'bn0')(X)
    • X = Activation('relu')(X)
      until the height and width dimensions are quite low and the number of channels is quite large (≈32, for example). We can then flatten the volume and use a fully-connected layer. A sketch combining this with tip 2 appears after this list.
  2. Use \(MAXPOOL\) after such blocks. This will help you reduce the height and width dimensions.
  3. Change your optimizer. We find 'adam' works well.
  4. If you get memory issues, lower your batch_size (e.g. 12).
  5. Run more epochs until you see the train accuracy no longer improves.
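
As a rough sketch combining tips 1 and 2, you could stack two such blocks, each followed by a max-pooling layer, before flattening; the filter counts and kernel sizes below are illustrative only.

    # An illustrative deeper variant combining tips 1 and 2:
    # repeated CONV -> BATCHNORM -> RELU blocks, each followed by MAXPOOL.
    # Filter counts and kernel sizes are example values.
    from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                         Activation, MaxPooling2D, Flatten, Dense)
    from tensorflow.keras.models import Model

    def DeeperHappyModel(input_shape):
        X_input = Input(input_shape)

        X = Conv2D(32, (3, 3), strides=(1, 1), padding='same', name='conv0')(X_input)
        X = BatchNormalization(axis=3, name='bn0')(X)
        X = Activation('relu')(X)
        X = MaxPooling2D((2, 2), name='max_pool0')(X)       # halve height and width

        X = Conv2D(64, (3, 3), strides=(1, 1), padding='same', name='conv1')(X)
        X = BatchNormalization(axis=3, name='bn1')(X)
        X = Activation('relu')(X)
        X = MaxPooling2D((2, 2), name='max_pool1')(X)

        X = Flatten()(X)
        X = Dense(1, activation='sigmoid', name='fc')(X)
        return Model(inputs=X_input, outputs=X, name='DeeperHappyModel')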

Now let's pass our own image to the model and predict. You can replace pic5.jpeg with your image's file name in the code below. Congratulations, you have successfully used Keras to build "Happiness Detection". You might also want to take a look at the summary of the model you just constructed; execute the code below to see it. You can download the notebook from here
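
Here is a sketch of the prediction and summary code, assuming the 64×64 RGB input size and the 1/255 scaling used in the earlier sketches; pic5.jpeg is just a placeholder file name.

    # A sketch of predicting on your own image and printing the model summary.
    # The 64x64 target size and the 1/255 scaling match the assumptions made
    # in the earlier sketches; pic5.jpeg is just a placeholder file name.
    import numpy as np
    from tensorflow.keras.preprocessing import image

    img_path = 'pic5.jpeg'                                  # replace with your own image
    img = image.load_img(img_path, target_size=(64, 64))    # resize to the model's input size
    x = image.img_to_array(img) / 255.0                     # same scaling as the training data
    x = np.expand_dims(x, axis=0)                           # add the batch dimension

    print(happyModel.predict(x))   # close to 1 -> smiling, close to 0 -> not smiling

    happyModel.summary()           # layer-by-layer summary of the constructed model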

Credits

Coursera's deeplearning.ai course: Convolutional Neural Networks
