h1. Concepts, invariant memories, and the brain

p(meta). 06 April 2013

How are concepts stored in the brain? We "know":http://www2.le.ac.uk/offices/press/press-releases/2013/february/small-groups-of-brain-cells-store-concepts-for-memory-formation2013-from-luke-skywalker-to-your-grandmother that groups of neurons fire together when sensory data related to a concept such as Star Wars appears, whether it's hearing the words "star wars", writing them out, watching the movie, hearing the theme song, or seeing pictures of Yoda from different angles. We have long had a strong intuition that the brain can store these invariant representations, but we have no concrete idea of how it does so, nor do we have any computational models that simulate it to any realistic degree.

So what exactly are invariant memories? They are units of memory that keep the same basic form no matter how they are recalled. These memories are stored in a format that is resistant to scale, distortion, noise, and perturbation. That means we can recognize basic objects like cups just by touching them from different angles. Invariant memories allow us to taste foods like chili peppers and wasabi and know they are spicy even though the tastes are not exactly the same. They allow us to recognize that knives, pens, chopsticks, the number 1, and the letter I all have something in common: they are typically in the shape of a straight line. Our brains store these memories in a way that is resistant to variation.

Concepts are the same thing as invariant memories, just more abstract and higher level. If I watched the movie Star Wars, or read an unnamed book about a boy who grows up on a farm with his uncle on some faraway planet but is later taken away to learn mystical powers, become a hero, and fight his father, I would recognize them both as the concept "Star Wars".
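To make the straight-line idea concrete, here is a toy sketch of my own (an illustration, not a model of how the brain does it): a single "straightness" score that comes out the same no matter how a set of points is scaled or rotated, a crude stand-in for the invariant representation shared by knives, pens, chopsticks, the number 1, and the letter I. The function name and the shapes below are made up for this example.

```python
import math

def linearity(points):
    # A scale- and rotation-invariant "straightness" score in [0, 1].
    # Points on a perfect line put all their variance along one axis,
    # so the smaller eigenvalue of the 2x2 covariance matrix is ~0
    # and the score is ~1; an evenly spread shape scores near 0.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = sum((x - mx) ** 2 for x, _ in points) / n        # var(x)
    c = sum((y - my) ** 2 for _, y in points) / n        # var(y)
    b = sum((x - mx) * (y - my) for x, y in points) / n  # cov(x, y)
    root = math.sqrt((a - c) ** 2 + 4 * b * b)
    lam_max = (a + c + root) / 2
    lam_min = (a + c - root) / 2
    return 1.0 - lam_min / lam_max if lam_max > 0 else 0.0

# The same "straight line" concept at two different scales and orientations:
small_line = [(i, 2 * i) for i in range(10)]
big_rotated_line = [(-3 * i, 50 * i) for i in range(10)]

# A circle, by contrast, spreads its variance evenly across both axes:
circle = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
          for k in range(12)]
```

Both lines score essentially 1.0 while the circle scores essentially 0.0, so one number recognizes "straightness" across very different presentations of the shape. Of course, real invariant memories would need far richer representations than a single hand-built feature, and nobody knows how to build those in general; that is exactly the hard part.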
Computers can do some basic forms of invariant memory recognition, but seemingly small amounts of variation will throw them off, such as viewing the same image from a different angle or at night versus in the daytime. Furthermore, generalized recognition of concepts is not in the realm of what computers can do. The current trend for pattern recognition in the machine learning world is "deep belief networks.":http://www.scholarpedia.org/article/Deep_belief_networks

I believe that building a computational model that can store concepts and invariant memories would have an incredible impact for humankind, on the same scale as computers and the internet, probably even more profound. With the ability to model invariant memories and concepts, computers would be able to do things like actual voice recognition, understand video and text, and learn from unstructured data just as humans do but at a much faster pace. On top of this, I believe this is an achievable goal for us now, as compared to a full understanding of the brain or true artificial intelligence. So why is it that more people are not working on this? I believe there are several major reasons.

* 1) We do not have the right tools to understand the brain. The primary tools we have now are fMRI, optogenetics, "brainbows":http://en.wikipedia.org/wiki/Brainbow, and EEG. These tools do not give us enough resolution into the brain and severely limit our ability to do empirical studies. Instead, we have many ideas in artificial intelligence that are proposed and tested, so we get statistical models that bring us algorithms such as Bayesian models, random forests, SVMs, etc. Do we need to know how things work in the brain at the level of synapses, dendrites, and action potentials? Most people believe so; I don't really know the answer.
* 2) I don't believe enough people are even aware of this idea of invariant memories.
Currently, computer scientists study algorithms and models that work on specific kinds of data: neural networks are typically used for vision and voice data, while we have specialized natural language algorithms for parsing text, and other models for other kinds of data. I'm not sure there are groups of people who study how to actually store the data, though. I believe there is a common algorithm that can be used for learning multimodal data, and others in the community believe so too, such as "Andrew Ng":http://www.youtube.com/watch?v=hBueMr9eaJs and "Jeff Hawkins":http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/B000GQLCVE. Machine learning is the branch of computer science that studies systems that can learn from data, but most of the progress seems to be on supervised algorithms, where we tell the computers the answer. Humans and other biological creatures, on the other hand, can learn in an unsupervised manner.
* 3) There is not a unified field of study for this area. Does it belong to neuroscience, computer science, mathematics, cognitive science, philosophy, or some other area? I believe the breakthrough will come from a multidisciplinary person. Some would say this belongs to machine learning, or maybe computational neuroscience. I know "Jeff Hawkins":http://numenta.com is building a company around invariant classification, and he strongly believes that we must learn more from neuroscience and less from traditional computer science.
* 4) I believe our focus on vision might be leading us down the wrong road. When we test intelligence in non-human life, the tests typically consist of vision classification, such as whether dogs can recognize other dogs or whether a parrot can recognize certain objects. When thinking about invariant memories, most people focus on vision because it is arguably the sense that we as humans use most.
I use vision to type this blog post, to walk to work, to write software, to watch movies, and to choose food from my refrigerator. But if I were trying to create an algorithm that can store and recognize invariant memories, I would try to break down the essence of invariant memories by studying smaller systems that can understand concepts. I constantly think about "Helen Keller":http://en.wikipedia.org/wiki/Helen_Keller, who lost her vision and hearing at the age of 19 months. With only three senses, she was still able to become an intelligent member of society and even published "many books.":http://www.gutenberg.org/ebooks/author/895 This opens up many questions: are multiple senses required for intelligence, or is it possible for intelligence to be obtained with only a single sense such as touch or taste? Her primary sense was touch, so what is it exactly that motor neurons do? What kind of information do they store? The interaction between neurons and their sensory data seems to create some kind of feedback loop that allows them to extract valuable information.

So what are my own takeaways from this thought experiment? It is mind-blowing how little we know about the brain. At times I do not have the slightest idea of where to move next, but I keep trying. I'm still thinking intently about where we can realistically make the largest contribution in this area, and for now, for me, that seems like more studying and thinking. I am also very interested in talking with others and learning about new ideas, so please do not hesitate to contact me if you feel so inclined.