Web-based voice command recognition

Last time we converted audio buffers into images. This time we’ll take these images and train a neural network using deeplearn.js. The result is a browser-based demo that lets you speak a command (“yes” or “no”) and see the output of the classifier in real time, like this:

Curious to play with it, and to see whether it recognizes “yay” or “nay” in addition to “yes” and “no”? Try it out live. You will quickly see that the performance is far from perfect, but that’s OK with me: this example is intended to be a reasonable starting point for doing all sorts of audio recognition on the web. Now, let’s dive into how this works. Continue reading
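
To give a feel for the real-time part, here is a minimal sketch of the kind of browser capture loop such a demo needs, built on the Web Audio API. The `classifyFrame` callback is a hypothetical stand-in for the deeplearn.js model described in the post, not its actual API:

```typescript
// Minimal sketch: capture microphone audio in the browser and poll
// frequency-domain frames in real time. `classifyFrame` is a hypothetical
// stand-in for the spectrogram-based classifier described in the post.
async function listenForCommands(
  classifyFrame: (frame: Float32Array) => string
): Promise<void> {
  // Ask the user for microphone access.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);

  // An AnalyserNode exposes an FFT of the most recent audio window.
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 1024;
  source.connect(analyser);

  const frame = new Float32Array(analyser.frequencyBinCount);

  const poll = () => {
    analyser.getFloatFrequencyData(frame); // dB value per frequency bin
    const label = classifyFrame(frame);    // e.g. "yes", "no", or "unknown"
    console.log('heard:', label);
    requestAnimationFrame(poll);           // keep polling in real time
  };
  poll();
}
```

The actual demo buffers many such frames into a spectrogram before classifying, but the capture-and-poll structure is the same.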

Audio features for web-based ML

One of the first problems presented to students of deep learning is to classify handwritten digits in the MNIST dataset. This was recently ported to the web thanks to deeplearn.js. The web version has distinct educational advantages over the relatively dry TensorFlow tutorial. You can immediately get a feeling for the model, and start building intuition for what works and what doesn’t. Let’s preserve this interactivity, but change domains to audio. This post sets the scene for the auditory equivalent of MNIST. Rather than recognize handwritten digits, we will focus on recognizing spoken commands. We’ll do this by converting sounds like this:

Into images like this, called log-mel spectrograms, and then, in the next post, feeding these images into the same types of models that do handwriting recognition so well:

[Figure: the final log-mel spectrogram]

The audio feature extraction technique I discuss here is generic enough to work for all sorts of audio, not just human speech. The rest of the post explains how. If you don’t care and just want to see the code, or play with some live demos, be my guest! Continue reading
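
For readers who want to see the shape of the feature extraction up front, here is a minimal sketch of the log-mel step, assuming you already have a per-frame power spectrum from an FFT of a windowed audio buffer. The helper names (`melFilterbank`, `logMelFrame`) are illustrative and not taken from the post’s code:

```typescript
// Standard mel-scale conversions.
const hzToMel = (hz: number) => 2595 * Math.log10(1 + hz / 700);
const melToHz = (mel: number) => 700 * (Math.pow(10, mel / 2595) - 1);

// Build triangular mel filters spanning [0, sampleRate / 2].
function melFilterbank(numFilters: number, fftBins: number, sampleRate: number): number[][] {
  const maxMel = hzToMel(sampleRate / 2);
  // Filter edge frequencies, equally spaced on the mel scale.
  const edges = Array.from({ length: numFilters + 2 }, (_, i) =>
    melToHz((i * maxMel) / (numFilters + 1))
  );
  const hzPerBin = sampleRate / 2 / (fftBins - 1);
  const filters: number[][] = [];
  for (let m = 1; m <= numFilters; m++) {
    const [lo, center, hi] = [edges[m - 1], edges[m], edges[m + 1]];
    const filter = new Array<number>(fftBins).fill(0);
    for (let k = 0; k < fftBins; k++) {
      const hz = k * hzPerBin;
      if (hz > lo && hz < center) filter[k] = (hz - lo) / (center - lo);
      else if (hz >= center && hz < hi) filter[k] = (hi - hz) / (hi - center);
    }
    filters.push(filter);
  }
  return filters;
}

// One column of the log-mel spectrogram: dot each filter with the
// power spectrum of a frame, then take the log.
function logMelFrame(powerSpectrum: Float32Array, filters: number[][]): number[] {
  return filters.map(filter => {
    const energy = filter.reduce((sum, w, k) => sum + w * powerSpectrum[k], 0);
    return Math.log(energy + 1e-6); // small offset avoids log(0)
  });
}
```

Stacking `logMelFrame` outputs over successive frames gives the two-dimensional image shown above.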

From Human Brains to Computer Brains

Intelligent Systems, Artificial Intelligence, Smart Recommenders, Machine Learning: the endless list of fancy terms that pop up across websites will always carry an air of mystery. Over the past few years, we have witnessed great advances in computer systems. Computers can now take over tasks that we humans never thought a computer would be able to do, including tasks that no human brain can perform quickly and efficiently, such as looking through thousands of text files and drawing connections between them, or reading millions of medical papers and connecting genes to potential diseases. The latter is the job of IBM Watson’s Discovery Advisor, a tool for researchers.

It seems, then, that many researchers around the world strive to build computers that can substitute for humans completely. The question that arises is: are we going to see computer brains that completely mimic human brains? In today’s post, we cover some basics of the research in this direction, trying to figure out an answer to the million-cell question.

Continue reading

Neural Networks and Recent Accomplishments (and how to train your own NN: a Python-based DIY)

Artificial Neural Networks (ANNs) are computational models inspired by one of nature’s most splendid creations: the neuron. It seems our quest to make machines smarter has converged on the realization that we ought to code the ‘smartness’ into them, literally. What better way than to draw parallels with the source of our own intelligence, our brains?

Continue reading