
Helen Keller and somatic intelligence

29 August 2015

who is helen keller

Helen Keller was born in 1880. At the age of 18 months she lost her hearing and vision, leaving her blind and deaf for the rest of her life. Despite that, she was still able to become an intelligent and productive member of society. She interacted with the world through touch, taste, and smell.
I’ve had a deep interest in her ever since I learned about her history. The story of her learning her first “word,” or named concept, is amazing. At around the age of 7, a teacher was brought in to teach her how to communicate. The teacher would give her objects to touch and then write the words in her hand. This went on for a while, as Helen didn’t understand that every object had a word to identify it. The breakthrough came when the teacher ran cold water over her hand while writing “water” on her hand repeatedly. After Helen realized it was a word, she demanded the name of every object she could find! From there she went to school and became an active member of society. She wrote many books, lectured around the world, and was an ambassador for people with disabilities. I’ve read some of her work, and it was amazing how she could use such poetic language to describe her thoughts and her world without ever hearing or seeing anything.

can we still learn without all of our senses?

In normal human society, visual and auditory sensory input are the primary ways we teach and learn. As babies, our parents are constantly talking to us, and that is how we learn our first words. We see our parents moving their mouths, and we copy their movements to speak our first words. We learn the alphabet by being shown the letters in books and written on paper. At the same time, our parents speak the words to us so that we associate those sounds with the words and concepts we learn. As we grow older we continue to learn by watching videos, learning from our peers, and reading books. All of this uses audio and visual sensory input as the medium to communicate information. Now imagine you have never heard or seen anything. How does that feel? You would communicate with the world by touching and smelling things. Most of your time would be spent by yourself, “hearing” your inner voice. What does an inner voice sound like if you have never heard anything?

Let’s say you recalled in your mind a physical object like a chair.

You would recall how the letters are spelled on your hand. You would recall how a chair feels when you sit on it. You would recall what a chair’s purpose is and how it relates to other concepts. You would know that there are many types of chairs that come in all different shapes and sizes.

How would you learn what the concept of “now” means: this very moment? It is a concept you learned from interacting with your environment and eventually learning about time. You would probably think of lots of related concepts like sleep, weeks, days, breakfast, lunch, resting, energy, time, etc.

What would recalling the concept of “friend” feel like? A friend is someone you spend time with and take an interest in. You continue to interact with that person and spend time with them. That person might make you feel good or happy, and so you enjoy spending time with them. If you thought about the unique features of that person, you would remember their texture when your hand touched them. You would feel the word “friend” spelled on your hand.

These are all concepts and memories that she had, very similar to us, but embedded in her mind and neurons in a completely different way than we think. Or is it actually the same?

focus on vision in machine learning and intelligence algorithms

How we teach computers to learn is more similar to how most humans learn than to how someone like Helen Keller learned. We feed mostly representations of images and raw text into a computer. Depending on the kind of machine learning problem the computer is trying to solve, it optimizes for one of several types of output:
differentiating between a positive and a negative classification
predicting a numerical value based on the input
grouping the input with other input it has already seen
A common way to gauge how smart our machine learning algorithms are is to see how well they can read handwriting from the MNIST dataset.
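As a concrete sketch of the first kind of problem, here is a minimal handwritten-digit classifier. It uses scikit-learn’s small built-in digits dataset (8x8 images) as a stand-in for full MNIST; the model choice and parameters are just illustrative, not anything the post prescribes.

```python
# Minimal sketch of classification: recognizing handwritten digits.
# scikit-learn's built-in digits set (8x8 images, 10 classes) stands
# in for MNIST here so the example stays small and self-contained.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1797 images, each flattened to 64 pixel features

# Hold out a quarter of the images to measure how well the model reads
# handwriting it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Even this simple linear model reads most of the held-out digits correctly, which is exactly the kind of benchmark the MNIST dataset is used for.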
DeepMind, an artificial intelligence company building models that watch video games to learn how to play them, was bought by Google for more than $600 million.
We are training computers to distinguish between males and females by studying the different features of their faces and bodies. We already know the mind contains concept neurons. They light up in a brain when the concept of something is brought to its attention. Take the concept of star wars: you have a group of neurons that lights up every time you hear the star wars theme music, see yoda, see the name star wars written, or see parody movies — anything related to the concept of star wars.

Neural networks, especially deep neural networks, are hot in the machine learning community right now. They supposedly learn from general data, but they are best known for processing visual data.

common algorithm for learning

Learning about how Helen Keller became a member of society makes me believe that the brain learns through a common learning algorithm — one that learns specifically how to interact with its environment. The body can add any new sensor, and as long as the brain can process that data through neurons, it can use and learn from that sensory data. That could be radar, the ability to see infrared, the ability to sense certain chemicals, etc.

We think audio, vision, smell, and taste are very different, but they are just sensory inputs into the brain. Touch is considered a separate sensory system from the others. In scientific and medical nomenclature, touch is referred to as the somatosensory system. It is a whole system because touch is made up of several inputs: pressure, temperature, vibration, and probably some others we don’t fully understand. You could even go so far as to say that the other common sensory inputs we spoke about are just other ways to interact with the environment — just subsets of the somatosensory system.

Almost every single living creature in our world has the ability to interact with the environment and remember some part of its interactions. That is what learning is. I believe that for our computers to actually learn, they must learn by interacting with their environment and storing that information in a way that relates memories to past experiences. I would like to see agents learning from sensory data that do not have vision.
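The idea that all senses are just input channels can be sketched in a few lines of code. Everything here is hypothetical and purely illustrative — the class, its name, and the sample numbers are mine, not from any existing system — but it shows a learner that never knows, and never needs to know, which sense its numbers came from.

```python
# Illustrative sketch (all names hypothetical): a learner that treats
# every sense -- sight, sound, touch, even radar -- as just another
# vector of numbers, as the post argues the brain does.
import numpy as np

class GenericLearner:
    """Keeps a running average ("prototype") of the sensory vectors it sees."""

    def __init__(self, n_inputs):
        self.prototype = np.zeros(n_inputs)
        self.count = 0

    def observe(self, sensory_vector):
        # The learner has no idea whether these numbers came from a
        # camera, a microphone, or pressure sensors on a fingertip.
        # It just folds each new observation into its running mean.
        self.count += 1
        self.prototype += (sensory_vector - self.prototype) / self.count

# The same learner works for any modality with the same number of channels.
# Here, four made-up touch channels (e.g. pressure, temperature, vibration,
# skin stretch) from two moments of contact with an object:
touch = GenericLearner(n_inputs=4)
touch.observe(np.array([0.8, 36.5, 0.1, 0.0]))
touch.observe(np.array([0.6, 36.7, 0.3, 0.2]))
print(touch.prototype)  # the averaged "memory" of those touches
```

Swap the touch vectors for spectrogram slices or pixel rows and nothing in the learner changes — which is the whole point: the algorithm is sensor-agnostic, and only the environment it interacts with differs.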

self comes to mind

A lot of these ideas came to me while reading Helen Keller’s works and from Antonio Damasio, a cognitive neuroscientist who wrote a book called Self Comes to Mind. Antonio Damasio believes the brain maps body states to interact with its environment, and that emotions are preprogrammed autonomous programs that get executed in the body to help us survive our world. In his books he talks about a high-level cognitive architecture that the brain runs. I have never come across computational models that mimic the somatosensory system. I believe that any system that wants to accurately model the brain must model a general sensory system, which means the agent must have a body. Reverse engineering the body’s algorithm could result in a general artificial intelligence that learns similarly to the way we do.

conclusion

Building a somatic sensory computation model could potentially “solve intelligence.” Building computational models based primarily on vision is the wrong direction. Yes, the brain and eyes do visual pattern recognition to see lines and edges, so it is a form of learning, yet that is not the core of intelligence. Learn how the inner brain processes all of the sensory data regardless of where it comes from, and we can unlock the key to intelligence.

I would love to contribute to this and see it built out in an open source fashion.