Happy or Sad? Use Artificial Intelligence to Classify Faces


Abstract

Do you think artificial intelligence (AI) is too complex to use? Think again! In this project, you will use AI to teach a web-based tool to classify happy and sad faces, or other objects, poses, or sounds. This experiment requires no coding skills; instead, you will need curiosity, creativity, and a critical eye. Why not give it a try yourself?

Objective

Use Teachable Machine, a web-based tool, to classify pictures, objects, or sounds, and find out whether more learning data leads to a more accurate tool.

Introduction

Artificial intelligence (AI) is a branch of computer science that tries to build tools that demonstrate intelligence. Machine learning is a subfield of AI, as shown in Figure 1. Its goal is to create tools that can learn and improve over time using data.

A large rectangle labeled 'computer science' with a smaller rectangle labeled 'artificial intelligence' in it. Inside the second rectangle, there is a third, smaller rectangle labeled 'Machine learning'.
Figure 1. Machine learning is a branch of artificial intelligence and is part of computer science.

Unlike traditional computer programs where the decisions and rules are built in, machine learning programs construct their algorithm from data and feedback. This allows machine learning programs to find trends and patterns in enormous quantities of data, including patterns that are hard for humans to catch. They can also improve themselves without human intervention and can make predictions and handle complex, changing environments.

But machine learning has its limitations. It requires a neutral and complete set of data to learn from, and it uses a lot of computing power. The results need to be interpreted with some caution, as machine learning is susceptible to systematic errors. The video Machine Learning and Human Bias further explains what machine learning is and how human bias can creep into machine learning tools.


Writing a machine learning program takes dedication and work. Programmers have developed many ways to make machine learning more accessible, and Teachable Machine is one result of these efforts. Teachable Machine is a web-based tool that allows users to quickly and easily build a teachable computer program without any programming. This means users who have no computer programming experience can still use the power of artificial intelligence.

Google's Teachable Machine uses machine learning to classify. The developer provides learning data, that is, examples that are already classified, for the tool to train itself on. These examples can be sounds, pictures, or video. The next step is to train the tool: it tries to find patterns in the data so it can perform the task the developer asked for, in this case classifying. Once trained, developers test their tool to see how it performs.

This project describes how to develop an AI tool that classifies drawings of happy and sad faces, as shown in Figure 2. Classifying drawings according to the emotion they show might feel far-fetched; there is, however, a whole field in computer science, called emotion AI, that develops AI tools to capture and analyze human emotions.

If classifying drawings does not excite you, consider classifying other objects, people, sounds, or poses instead.

Three machine learning outputs next to each other. The first shows a happy face drawing classified as 99% likely to be happy; the second is a drawing of a sad face classified as sad with 100% certainty; and the last is a happy face drawing classified as 99% certain to be happy.
Figure 2. Examples of happy and sad face classifications completed by a computer.

Teachable Machine relies on learning data to learn how to classify. Therefore, it needs a set of examples from each class that will be used. Figure 3 shows an example of learning data for a tool that will classify drawings of happy and sad faces.

Two sets of pictures of 10 pencil drawings. The first set shows happy faces, the second set depicts sad faces. All drawings are on a white paper placed on a solid orange background before taking the picture.
Figure 3. A collection of different images that could be used as learning data to train a tool to classify happy and sad faces.

Once the tool is trained, the developer needs to test its performance. A confusion matrix is one way to communicate the performance of a classifying tool. For a tool that classifies drawings as either “Happy” or “Sad,” a confusion matrix would look like the one shown in Figure 4.

A table with the classes 'Happy drawing' and 'Sad drawing' listed in the first column and the classifications 'Happy' and 'Sad' listed in the top row. The diagonal squares in the table, those with the number of happy drawings classified as happy and sad drawings classified as sad, are in blue. The two other squares hold the number of drawings that are misclassified and are colored yellow.
Figure 4. A confusion matrix for a tool that classifies drawings as “Happy” or “Sad”.

The diagonal, blue squares show how accurate the tool is. The accuracy is how often a tool can correctly perform its task. For example, a tool that classifies 90 out of 100 happy faces as “Happy” has 90% accuracy for happy faces. So the blue square next to “Happy drawing” would have the number 90 written in it. The off-diagonal, yellow squares show how frequently the tool gets confused, or provides an incorrect answer. In our example, 10 of the 100 happy drawings that were tested were misclassified as “Sad.” Thus, the yellow square next to “Happy drawing” would have the number 10. The higher the accuracy and the lower the number of misclassifications, the better the tool is performing.
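Although the project itself requires no coding, the accuracy arithmetic above can be sketched in a few lines of Python for readers who are curious. The “Happy drawing” numbers mirror the example in the text (90 of 100 correct); the “Sad drawing” numbers are purely hypothetical, chosen for illustration.

```python
# Optional sketch: per-class accuracy from a 2x2 confusion matrix.
# Keys are the true classes (rows); inner keys are the tool's answers (columns).
confusion = {
    "Happy drawing": {"Happy": 90, "Sad": 10},  # 90 of 100 correct, as in the text
    "Sad drawing": {"Happy": 5, "Sad": 95},     # hypothetical numbers
}

for true_class, counts in confusion.items():
    total = sum(counts.values())
    # The correct answer for "Happy drawing" is "Happy"; for "Sad drawing" it is "Sad".
    correct = counts["Happy"] if true_class.startswith("Happy") else counts["Sad"]
    accuracy = 100 * correct / total
    print(f"{true_class}: {accuracy:.0f}% accuracy")
```

Running this prints 90% accuracy for happy drawings and 95% for the (hypothetical) sad drawings; the yellow-square counts are simply the remaining drawings in each row.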

Developers should also test whether their tool shows AI bias, a tendency to systematically misclassify. A tool might, for example, tend to classify drawings as “Happy” when they are held close to the lens, or classify drawings made with a dark pencil as “Sad,” no matter how the face looks. These are examples of biased AI tools.

In this project, you will investigate whether an AI tool’s performance changes when you increase its learning data. One tool will have twenty examples per class as learning data. The other tool will have two hundred examples to learn from in each class. Do you think more learning data will increase, decrease, or not change the accuracy of your AI tool?

Terms and Concepts

  • Artificial intelligence (AI)
  • Machine learning
  • Teachable Machine
  • Learning data
  • Confusion matrix
  • Accuracy
  • AI bias

Questions

  • What helps you decide if a drawing of a face looks happy or sad? Could a computer use the same factors?
  • The tool will learn from examples of happy and sad faces we provide. Should the examples of happy faces all be identical, or would it be better to have different examples? Must the examples show the facial features that change when one is happy or sad?
  • Do you think giving a machine learning tool more examples of happy and sad faces to learn from will lead to a machine learning tool that is better at classifying happy and sad faces? Explain your answer.

Materials and Equipment

  • Face template, six copies
  • Pencil
  • Scissors
  • Construction paper
  • Coloring pencils, crayons, or markers
  • Access to a computer with a webcam. [Note: cell phones and tablets will not work.] Instead of using a webcam, you can take digital photos with another device and upload them.
  • Access to the internet, more specifically, the Teachable Machine web page.
  • Stuffed animal

Experimental Procedure

Just as young children learn to distinguish happy from sad faces by seeing examples, Teachable Machine will learn from examples you provide. We will refer to this set of examples as the learning data, because your tool will learn from this set of drawings. In AI, the phrase training data is also used for this data.

Getting Familiar with Teachable Machine

  1. Before you start this project, it might be fun and helpful to play with Teachable Machine. Go to the Teachable Machine new project web page and use the webcam to train the tool to recognize whether you or a stuffed animal is looking at the camera. The Teachable Machine Tutorial 1 video explains how to gather learning data. Go ahead and gather learning data. If your computer does not have a webcam, take digital pictures and upload those to the tool instead. Then, watch the Teachable Machine Tutorial 2 video to learn how to train the model. The Teachable Machine Tutorial 3 video shows how to understand the predictions. Try these steps to get familiar with Teachable Machine, and be sure to watch the tutorials if you get stuck.
  2. The procedure below explains how to use Teachable Machine to classify drawings of happy and sad faces. If you decide to classify other pictures, or choose to classify sounds or poses, you can still use the outline of this procedure. You will, however, need to adjust it to fit your goal.

Creating Learning Data

Your tool will only learn from the examples you provide. What details do you think are essential when you draw examples to help the machine recognize a happy face or a sad face? Should you draw identical examples, or do you think it is better to have a variety of different happy faces in your example set? Should you include many details like hairstyle, earrings, and freckles, or should you focus on the features of the face that change when someone is happy or sad?

  1. Draw examples of happy faces in twenty circles (two sheets of the template) and sad faces in another set of twenty circles (two templates). Always use the same pencil or pen to draw your faces.
  2. Cut along the lines and make a stack of happy and a stack of sad faces.
  3. If you do not have a digital camera or scanner, you can use the webcam to take pictures in Teachable Machine. This will be explained in step 10. If you have a digital camera or scanner and can easily transfer the files to the computer:
    1. For each drawing, place the face on a plain background like a sheet of colored construction paper, as shown in Figure 5, so the tool does not get distracted by the environment. Take a picture or scan. Try your best to take every picture the same way, laying the sad/happy face in the same spot, at the same angle, same distance, etc.
    2. Transfer the files to the computer.
A smiley face drawn on white paper; the white paper rests on a larger orange paper (lower right corner). A set of six pictures of drawings of happy faces. Each circle of a face takes up almost all the space in the picture; all have a small orange strip visible above and below the drawing (upper left corner).
Figure 5. A drawing that is ready to be photographed (lower right corner), and six drawings that have already been photographed (upper left corner).

Creating Test Data

  1. Repeat steps 3–5, but this time, ask friends and family members to draw faces for you. Collect at least five happy and five sad faces.
  2. Repeat steps 3–5, but use coloring pencils, crayons, and markers to draw colorful faces. Some examples are shown in Figure 6. Create at least five happy and five sad faces.
A blue paper with 6 happy and sad face drawings. The drawings are colorful.
Figure 6. Colorful drawings of happy and sad faces.

Preparing the Tool

  1. Go to the Teachable Machine new project web page for recognizing images. If you decide to classify sounds or poses, go to the Teachable Machine new project page instead and choose the project type that fits your goal best.
  2. Label the classes. The tool will classify drawings of faces into two groups, or classes: happy faces and sad faces. Rename class 1 as “Happy” and class 2 as “Sad.”

Uploading Learning Data

  1. If you already saved the learning data files to your computer in step 5:
    1. Use the “Upload” button to upload the learning data files for happy faces to the “Happy” class, and the learning data files for sad faces to the “Sad” class.

    If you will use the webcam in Teachable Machine to take pictures:

    1. Start the webcam for the “Happy” class by clicking the “Webcam” button.
    2. Place a happy face drawing on a plain background like a sheet of colored construction paper, so the machine does not get distracted by the environment.
    3. Hold the drawing close to the lens, so the drawing fills most of the space.
    4. Take all the pictures with the camera at the same angle so the lighting stays consistent.
    5. Try to keep your fingers out of the picture. Teachable Machine has a function that crops the pictures as you are taking them. This can help you crop your fingers out of the pictures.
    6. Repeat step b–e for all happy drawings in your learning data. Try your best to take every picture in the same way.
    7. Once all happy faces of the learning data are uploaded, move on to the “Sad” class, repeat steps b–f for uploading the sad faces learning data. Try your best to take these pictures the same way as you took the happy face pictures.

    You now have twenty pictures in the learning data for each class. If this is not the case, remove doubles or add pictures where needed.

    It is fine if the learning data shows some variation in lighting, distance, etc. It is important, though, that these variations occur in both classes. For example, it is not OK for all the pictures in the “Happy” class to be close-ups while all the pictures in the “Sad” class are taken from further away. Having about the same proportion of close-ups and pictures taken from further away in both classes is fine.

Training the Model

  1. Start the tool’s learning process by clicking the “Train Model” button. The machine will take less than a minute to complete this step. In that time, the tool searches for patterns in the learning data that it can use to distinguish between happy and sad drawings.

Saving the Tool

  1. All the uploaded data on the machine can be saved to Google Drive and later opened from the Drive to continue. Look for the commands “Save project to Drive” and “Open project from Drive” under the “Teachable Machine” menu. You will be asked to log in to a Google account to access your Google Drive. The project can also be saved as a file on your computer and uploaded later. Look for the commands “Download project as file” and “Open project from file” under the “Teachable Machine” menu.

Testing the Tool

  1. To see how the tool performs, you can use the webcam or files. Figure 7 shows how to toggle between the two.
    the 'Preview' window has an input on/off button and a toggle to select Webcam or File
    Figure 7. Teachable Machine allows you to use files you upload or the webcam to test the tool.

    One by one, choose a few happy and a few sad drawings from the learning data, and test how the tool classifies these drawings.

    The bars under the test picture in the “output” box inform you how the tool classifies the drawing. Your tool will probably classify its learning data with high confidence, as shown in the left image of Figure 8 where we see the tool is certain the image belongs to the “Happy” class.

    Sometimes, the tool is not clear about how to classify a drawing. The right image in Figure 8 is an example. The orange bar next to “Happy” shows the tool classifies this drawing with a confidence of 59% in the “Happy” class. The red bar next to “Sad” shows it is 41% confident the drawing belongs to the “Sad” class.

    A happy face with, underneath it, the word 'Happy' next to an orange bar indicating 100%. A sad face with, underneath it, an orange bar next to the word 'Happy'. The bar is about 2/3 dark; 59% is written in the bar. A second, red bar, with the word 'Sad' written next to it, is about 1/3 dark and displays 41%.
    Figure 8. Two examples of an AI tool that learned to classify happy and sad face drawings.

    In this experiment, you classify a drawing into the class for which the tool shows a confidence of over 50%. With this rule, the picture on the right in Figure 8 is classified as “Happy” because 59 is higher than 50. Because this drawing is in fact a sad face, the tool receives a tally in the “Sad drawing misclassified as ‘Happy’” square.

    If you use the webcam, try your best to show the drawings the same way as you took pictures in step 10.

  2. In your notebook, draw a confusion matrix like the one shown in Figure 9. Do not forget to fill in the amount of learning data you used in the title.
    A table with the classes 'Happy drawing' and 'Sad drawing' listed in the first column and the classifications 'Happy' and 'Sad' listed in the top row. The diagonal squares in the table, those with the number of happy drawings classified as happy and sad drawings classified as sad, are in blue. The two other squares hold the number of drawings that are misclassified and are colored yellow.
    Figure 9. A confusion matrix helps organize the test results.
  3. Test new data. The real test is seeing how well the tool can classify drawings it has never seen. Test all the cases listed below. Always tally the results in your confusion matrix.
    1. Use five happy and five sad drawings of the pile you collected in step 6 (drawings made by other people).
    2. Test five happy and five sad drawings of the pile you drew in step 7 (more detailed and colorful drawings).
    3. Use the webcam or a camera to take more pictures to test what happens if you change the background. Use four drawings of the learning data for each class to perform this test.
    4. Use the webcam or a camera to take more pictures to test what happens if drawings are shown close-up or further away from the lens compared to the learning data. Use six drawings of the learning data for each class to perform this test.
    5. Test any other variations you want to test.
    6. Write down observations in your notebook. Does the tool perform well on a subset of the tests, or very poorly on a specific subset? Do you notice any trends?

    Creating a Second Tool

    You have trained and tested a tool that used twenty drawings in each class to learn the difference between a happy and sad face. You will now create a tool that will learn from 200 drawings in each class.

  4. Save the tool under a new name. This will keep this tool separate from the one that was trained with twenty drawings per class.
  5. Leave the drawings you have in the learning data in both classes. In each class, for each of the drawings, use the webcam to add nine new pictures of the same drawing. Move the drawing with the background around a little in front of the lens: closer, further away, to the right or left, and maybe include another background or no background, etc. If your device does not have a webcam, take nine pictures of each of the drawings in the learning data, each slightly different. Upload these pictures. If you prefer to add more variation in the learning data, you can draw more faces for each class and add pictures of those to your learning data. You cannot, however, use any of your test data drawings for this task. Each class should now have 200 pictures as learning data.
  6. Execute step 11 to train the new model.
  7. Draw a new confusion matrix in your notebook, again similar to the matrix shown in Figure 9. This matrix will hold the test data for the tool trained on 200 drawings per class.
  8. Execute step 15 with the new tool. Collect the test data in your second confusion matrix.

    Analyzing the Data

  9. Compare and contrast the two confusion matrices you have. The background section can help you understand the confusion matrix. Remember, blue represents accuracy, yellow represents misclassification. Higher numbers in the blue squares and lower numbers in the yellow squares indicate a better performing tool.
  10. If the two tools perform similarly, test both tools on more data. Maybe try some cartoon drawings or smiley faces you find online, faces where the circle is not drawn, etc. More data might help you see a clearer difference in performance.
  11. Can you think of some reasons that explain your results?
  12. If you had more time and resources, how might you change the learning data to create a tool that performs better?

    Communicating the Results

  13. Add pictures of the learning data and classifications of the AI tool you created in your report. Figure 2 and Figure 3 are good examples.
