Learning - for Machines & Humans

Humans and machines both learn from experience and from data. Somewhere along the way, though, we humans stop creating new experiences to learn from.

When I was in college, one of the classes I took was on Machine Learning. According to the dictionary,

💡 Machine Learning is a field of inquiry devoted to understanding and building methods that ‘learn’.

Learning is hard, though - for machines and for humans. There are about a hundred lessons I wish I could teach my children and friends by just “telling” them - lessons I learned by living life. But it’s not as easy as that. Some lessons are learned best by experience.

In computer science, there are two broad categories of machine learning.

  1. Learning from massive amounts of data, a.k.a. supervised or unsupervised learning (depending on whether the data is labelled).

  2. Learning from interacting with the environment, a.k.a. reinforcement learning.

The trouble with learning from data is the quality and variety of that data. There is a famous case of a facial recognition algorithm that was very adept at recognizing the faces of white men, but far less accurate on faces of other ethnicities and genders. This had less to do with any deliberate bias than with the bias in the data used to train the algorithm. Since the algorithm was trained at a university by grad students who tended to be young white men, it learned from an unrepresentative set of data and yielded suboptimal results.

So, to train an effective algorithm, you need a robust dataset - one that is both big enough and varied enough to learn from.

Reinforcement learning, on the other hand, is where an artificial intelligence agent interacts with its surroundings to learn - almost like a child - by taking actions and getting feedback from the environment. A classic example is an AI agent that learns to play a game such as Checkers, Chess, or Go by playing millions of games and learning from the outcomes. The results can be surprising - the agents often come up with moves seldom seen from human players - and throw some expert players off their game!
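To make that agent-environment loop concrete, here is a minimal, illustrative sketch of tabular Q-learning (one classic reinforcement-learning algorithm) on a made-up five-cell “corridor” task. The environment, the reward, and the hyperparameters are all assumptions chosen for illustration - this is not how any particular game-playing system was built.

```python
import random

# Toy "environment" (an assumption for illustration): a corridor of 5 cells,
# with the goal at the rightmost cell. Moving into the goal earns reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]  # 0 = step left, 1 = step right

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: the agent learns purely from interaction, no labelled data.
q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimate per (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.3      # learning rate, discount, exploration

for episode in range(500):                 # many, many tries - like a child practising
    state, done = 0, False
    while not done:
        # Explore sometimes; otherwise exploit what has been learned so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[state][a])
        next_state, reward, done = step(state, action)
        # Update the estimate from this one experience (the environment's feedback).
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print("Learned action values (left, right) per state:")
for s, (left, right) in enumerate(q):
    print(f"state {s}: left={left:.2f}, right={right:.2f}")
```

After enough episodes the agent typically ends up preferring “right” in every cell - not because anyone labelled the correct move, but because it tried, failed, and tried again.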

This leads me to how humans learn.

As I mentioned earlier, children learn from their environments. They want to touch everything and taste everything. They are little agents, interacting with their environments completely. And they learn by experiencing, almost constantly.

I remember watching my daughter learning to stand up. She would keep falling but keep trying again. It must have taken thousands of tries, but she was registering something on each attempt - she just got right back up and tried again. Till she got it. Then she moved on to walking.

What I find interesting is that as humans grow, they shift from reinforcement / experiential learning to data-driven learning. The same child who would try things thousands of times till she got it shifted, at some point, to giving up after one bad experience. Or worse, after seeing someone else fail. Now she was retaining the data from others’ experiences and using it to limit herself. Gone is that spirit of discovery and the drive to mastery. The desire to try a thousand times.

So what gives?

While data is useful, it can be limiting. And given that the data we collect during our lifetimes is a drop in the bucket, relying on it to restrict ourselves seems a bit sad to me.

Somewhere along the way, in the process of growing up, perhaps in the process of taking ourselves too seriously, we lose our ability to learn experientially. To iterate thousands of times just for the feedback. Just to learn that one new skill. To master new things - like I mastered standing up and walking.

Let’s create new experiences!