Image Recognition in Snap!

The post below was written by go_girl pioneer, Ruby.

Hello everybody. I hope you are all doing well.

In this week’s go_girl sessions on Skype, we worked in Snap! and focused on using costumes, which can be drawings, photographs or imported images. Ken demonstrated a sample program which he had trained to recognise images of cats and dogs, and showed us how to add images to the program. He then demonstrated a sample program trained to label pictures of a cat, a dog or a flower. We learned the importance of providing enough training examples for the program to recognise each image more accurately.
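The idea that more training examples lead to more accurate recognition can be sketched outside Snap! too. Below is a toy Python sketch (not the actual Snap! program, and far simpler than real image recognition): a nearest-neighbour "classifier" over made-up 2-D feature points. With only one example per class, a borderline query lands on the wrong side; with a few more examples, it is classified correctly.

```python
import math

def nearest_label(point, training):
    """Return the label of the training example closest to `point`."""
    return min(training, key=lambda ex: math.dist(ex[0], point))[1]

# Made-up 2-D "features" standing in for images of each class.
small = [((0.0, 0.0), "cat"), ((10.0, 10.0), "dog")]
large = small + [((1.0, 2.0), "cat"), ((5.0, 5.5), "cat"),
                 ((9.0, 8.0), "dog"), ((8.0, 9.0), "dog")]

query = (6.0, 6.0)  # a borderline case near the edge of the cat cluster
print(nearest_label(query, small))  # misclassified as "dog"
print(nearest_label(query, large))  # correctly classified as "cat"
```

The extra examples fill in the shape of each class, which is the same reason the Snap! programs needed plenty of training pictures.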

On Friday, I shared the program I trained to recognise images of a daffodil and a sunflower. It was fun to do and I was happy that it actually worked. I am creating another program which I am training to recognise images of The Simpsons.  I hope to have it working soon.

In our session, we also focused on pose detection in machine learning. Google Creative Lab released a browser-based tool called PoseNet for real-time human pose estimation. It can estimate where key parts of the body are and was created using deep machine learning. One of the best things about PoseNet is that it works in a browser without any special software or hardware (apart from a webcam). Afterwards, we each had a turn at experimenting with a pose program in Snap! It was able to recognise the location of my left ear, and my friend set the program to recognise her nose. We all enjoyed playing around with it. Next, Ken showed us how to begin training a program to warn you not to touch your face while sitting at a computer.
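PoseNet reports each detected body part as a named keypoint with a position and a confidence score. As a rough sketch of how a program like ours might pick out one part (the part names and result shape mirror PoseNet's output; the coordinates and scores here are invented):

```python
# A made-up pose result shaped like PoseNet's output: each keypoint
# has a part name, an (x, y) position, and a confidence score.
pose = {
    "keypoints": [
        {"part": "nose",     "position": {"x": 320, "y": 180}, "score": 0.98},
        {"part": "leftEar",  "position": {"x": 355, "y": 175}, "score": 0.91},
        {"part": "rightEar", "position": {"x": 285, "y": 176}, "score": 0.40},
    ]
}

def find_part(pose, part, min_score=0.5):
    """Return the (x, y) of `part`, or None if missing or low-confidence."""
    for kp in pose["keypoints"]:
        if kp["part"] == part and kp["score"] >= min_score:
            return (kp["position"]["x"], kp["position"]["y"])
    return None

print(find_part(pose, "leftEar"))   # the part I tracked in Snap!
print(find_part(pose, "rightEar"))  # skipped: confidence below the threshold
```

Filtering on the confidence score keeps a sprite from jumping to a badly guessed position when a body part is hidden from the webcam.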

Our homework is to make our own filters (e.g. drawing glasses or a nose) using sprites with the pose program in Snap! I’m looking forward to trying it out!

I hope you all have a lovely weekend and I will see you in the next blog post!

 
