Sci-fi and science can't seem to agree on the way we should think about artificial intelligence. Sci-fi wants to portray artificial intelligence agents as thinking machines, while businesses today use artificial intelligence for more mundane tasks like filling out forms with robotic process automation or driving your car. When interacting with these artificial intelligence interfaces at our current level of AI technology, our human inclination is to treat them like vending machines rather than like people. Why? Because thinking of AI like a person (anthropomorphizing) leads to immediate disappointment. Today's AI is very narrow, and so straying across the invisible line between what these systems can and can't do leads to generic responses like "I don't understand that" or "I can't do that yet". Although the technology is extremely cool, it just doesn't think in the way that you or I think of as thinking.
A visitor looks at Auguste Rodin's sculpture The Thinker during a press preview at the Barberini Museum in Potsdam, Germany, on January 19, 2017. (Photo by Michele Tantussi/Getty Images)
Let's look at how that "thinking" process works, and examine the different kinds of thinking going on inside AI systems.
First, let me convince you that thinking is a real thing. Putting aside the whole conversation about consciousness, there is a pretty interesting philosophical argument that thinking is just computing inside your head. As it turns out, this has been investigated, and we can draw some conclusions beyond just imagining what thinking might really be. In the book "Thinking, Fast and Slow", Nobel laureate Daniel Kahneman talks about the two systems in our brains that do thinking: a fast, automatic thinking system (System 1), and a slow, more deliberative thinking system (System 2). Just like we have a left and right brain stuck in our one head, we also have these two types of thinking systems baked into our heads, talking to each other and forming the way we see the world. And so thinking is not so much about being right as it is a couple of different ways of making decisions. Today's AI systems learn to think fast and automatically (like System 1), but artificial intelligence as a science doesn't yet have a good handle on how to do the slow, deliberative thinking we get from System 2. Also, today's AI systems make the same sorts of mistakes as System 1, where biases, shortcuts, and generalizations get baked into the "thinking" machine during learning. With today's AI, there is no deliberative, step-by-step thinking process going on. So how can AI "think", when a major component of what thinking is all about isn't ready for primetime?
Now that we have a bit more definition around what thinking is, how can we make more human-like artificial intelligence? Maybe representing feedback loops will get us to a sort of thinking machine like System 2. Well, as it turns out, we have not cracked that yet. AI models don't contain common knowledge about the world. For example, I recall Yann LeCun, a "founding father" of modern AI, giving the example sentence "He walked through the door" and pointing out that today's AI models can't decide what this means. There is a silly interpretation where we conclude that a person crashed through a door like a superhero, smashing it to pieces. There is another interpretation where either the door was open or the person opens the door to walk through the doorway. Unfortunately, without common knowledge, you don't really know which situation is more likely. This shows us that even "thinking fast" situations can go poorly using the tools we have available today.
We live in a world where fast-thinking AI is the norm, and the models are slowly trained on huge amounts of data. The reason you can't make a better search engine than Google is not the secrecy of their search algorithms. Rather, it is the fact that they have data you don't have, from excellent web crawlers to cameras on cars driving around your neighborhood. Currently, the value in AI is the data, and the algorithms are mostly free and open source. Gathering masses of data is not necessarily enough to ensure a feature works, either; massive amounts of human labor are often required. In the future, thinking algorithms that teach themselves may themselves represent most of the value in an AI system, but for now, you still need data to make an AI system, and the data is the most valuable part of the project.
Thinking is not easily separated from the human condition, but we humans are also far from perfect. We may be smart on average, but as individuals, we are not built to do statistics. There's some evidence for the wisdom of crowds, but a crowd holding pitchforks and torches may change your mind. As it turns out, we are adapted through the generations to avoid being eaten by lions, rather than to be the best at calculus. We humans also have many biases and shortcuts built into our hardware. It's well documented. For example, correlation is not causation, but we often get them mixed up. A colleague of mine has a funny story from her undergraduate math degree at a respected university, where the students would play a game called "stats chicken": they delay taking their statistics course until the fourth year, hoping every year that the requirement to take the course will be dropped from the program.
Given these many limitations on our human thinking, we are often puzzled by the conclusions reached by our machine counterparts. We "think" so differently from each other. When we see a really relevant movie or product recommendation, we feel impressed by this amazing recommendation magic trick, but don't get to see the way the magic trick is performed. And one is tempted to conclude that machine-based thinking is better or cleaner than our messy biological process, because it is built on so much truth and mathematics. In many situations that's true, but that truth hides a dark underlying secret. In many cases, it is not so clear why artificial intelligence works so well. The engineering got a bit ahead of the science, and we are playing with tools we don't fully understand. We know they work, and we can test them, but we don't have a good system for proving why things work. In fact, there are some accusations even in respected academic circles (slide 24, here) that the basic theory of artificial intelligence as a field of science is not yet rigorously defined. It is not just name-calling or jealousy being hurled by the mathematicians at the engineers. AI is a bunch of fields stuck together, and there really is a lack of connection in the field between how to make things work and proving why they work. And so the question about thinking and AI is also a question about knowledge. You can drive a car even if you don't know exactly how it works inside, and so maybe you can think, even if you don't know why your thinking works.
Assuming we don't have a concrete theory underlying the field of artificial intelligence, how can engineers get anything done? Well, there are very good ways to test and train AI models, which is good enough for today's economy. There are many types of AI, including supervised learning, unsupervised learning, reinforcement learning, and more. Engineers don't tend to ask questions like "is it thinking?", and instead ask questions like "is it broken?" and "what is the test score?"
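As a rough sketch of what that looks like in practice, here is a minimal example of asking "what is the test score?" instead of "is it thinking?". The library (scikit-learn), dataset, and model are my own illustrative choices, not tied to any particular product.

```python
# A minimal sketch of the engineering question "what is the test score?"
# The toy dataset and model here are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold back data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "training", not "thinking"

# The question engineers actually ask: how well does it do on held-out data?
print("test accuracy:", model.score(X_test, y_test))
```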
Supervised learning is a very popular type of artificial intelligence that makes fast predictions in some narrow domain. The state-of-the-art machinery for doing supervised learning on large datasets is feed-forward deep neural networks. This type of system does not really think. Instead, it learns to pick a label (for classification) or a number (for regression) based upon a set of observations. The way decisions are baked into neural networks during "learning" is not obvious without a strong validation step. More transparent AI models have been around for a long time, for example, in areas such as game theory for military planning. Explicit models like decision trees are a common approach to developing an interpretable AI system, where a set of rules is learned that defines your path from observation to prediction, like a choose-your-own-adventure story where each piece of data follows a path from the beginning of the book to the conclusion.
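To make the choose-your-own-adventure idea concrete, here is a small sketch (again using scikit-learn and a toy dataset as stand-ins) of a decision tree whose learned rules can be printed and read as an explicit path from observation to prediction.

```python
# A small interpretable model: the learned if/then rules can be printed and
# read as a path from observation to prediction. Toy dataset for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Each printed branch is one page of the "choose your own adventure" story.
print(export_text(tree, feature_names=list(data.feature_names)))
```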
Another type of artificial intelligence called reinforcement learning involves learning the transition from one decision to the next based on what's going on in the environment and what happened in the past. We know that without much better "environment" models of the world, these approaches learn very slowly, even for the most basic tasks. Systems that learn to solve problems this way rely heavily on accurate models of how the world works. When dealing with a problem related to humans, they need lots of data on what those humans do, or like, or think. For example, you can't learn to generate amazing music without data on what humans like to listen to. In a game-playing simulator an AI model can play against itself very quickly to get smart, but in human-related applications the slow pace of data collection gums up the speed of the project. And so in a broad sense, the AI field is still under construction at the same time as we are plugging lots of things into it.
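To show the shape of that decision-by-decision learning loop, here is a toy Q-learning agent in a made-up four-state corridor. Everything in it (the environment, the reward, the parameters) is invented for illustration; real reinforcement learning systems are far more elaborate, but the slow, trial-and-error character is the same.

```python
import random

# Toy Q-learning on a made-up four-state corridor (entirely illustrative):
# the agent starts on the left and only receives a reward of 1 for reaching
# the rightmost state, learning from one observed transition at a time.
n_states, n_actions = 4, 2             # actions: 0 = step left, 1 = step right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Hand-written 'environment' model: reward only at the rightmost state."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(300):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise exploit the current value estimates.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Standard Q-learning update from the single observed transition.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # "step right" ends up with the higher value in each non-terminal state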
Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., speaks during the company's event at the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 6, 2019. (Photographer: David Paul Morris/Bloomberg © 2018 Bloomberg Finance LP)
Regardless of the underlying technical machinery, when you interact with a trained artificial intelligence model in the vast majority of real-life applications today, the model is pre-trained and is not learning on the fly. This is done to improve the stability of your experience, but it also hides the messiness of the underlying technology. The learning tends to happen in a safe space where things can be tested, and you experience only the predictions (also called inference) as a customer of the AI system.
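A minimal sketch of that separation might look like the following: training and saving happen offline in the "safe space", and the deployed code only loads a frozen model and serves predictions. The file name and the request-handling function are hypothetical placeholders, not any particular product's API.

```python
# Sketch of inference-only serving: training happened somewhere safe, offline;
# the deployed service just loads the frozen model and returns predictions.
# "pretrained_model.joblib" and handle_request are hypothetical placeholders.
import joblib

model = joblib.load("pretrained_model.joblib")  # pre-trained, not learning here

def handle_request(features):
    # No learning on the fly: the customer only ever sees inference.
    return model.predict([features])[0]
```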
Despite the hype, AI models that think like we do are not coming around the corner to exceed humanity in every way. Truly thinking machines are definitely worthy of research, but they are not here just yet. Today, AI models and human analysts work side by side, where the analyst gives their opinion and is assisted by an AI model. It is useful to consider more general mathematical models, like rainfall estimation and sovereign credit risk modeling, to see how such models are carefully designed by humans, encoding huge amounts of careful and deliberative human thinking. The practice of building AI systems involves a lot of reading and creativity. It's not just coding away at the keyboard.
I feel that AI software developers gradually build a sense for how to think about what an AI model is doing, and it isn't "thinking". I wanted to get some input from someone unrelated to me in the artificial intelligence field, to see if they feel the same way. Through the CEO of DTA, I set up a talk with Kurt Manninen about his work on an AI product called AstraLaunch. I asked Kurt a lot of technology questions, leading up to the question "Does the system think like people do?"
AstraLaunch is a pretty advanced product involving both supervised and unsupervised learning for matching technologies with company needs on a very technical basis. A complicated technology like this is a good area to be thinking about "thinking". The system has an intake process that leads into a document collection stage, and then outputs a chart of sorted relevant documents and technologies. What I wanted to understand from Kurt is the way he thinks about what the matching technology is doing. Is the system thinking when it maps the needs of NASA to the technology capabilities of companies? When diagnosing an incorrect prediction, does he think about the model as making a mistake, or is the origin of the mistake with the model maker and/or the data?
Kurt's answer was really close to what I expected from my own experience. Technology like AstraLaunch involves humans and AI models working together to leverage the strengths of each information processing approach. But Kurt felt strongly, as I do, that bugs in AI models are the fault of people, not the model. An AI developer can see where the training wasn't set up properly to understand language, or vocabulary, or where the dataset collection went wrong, and so on.
Returning to the original question about artificial intelligence and thinking, I think we can solidly conclude that these systems don't do thinking at all. If we only have fast and automatic (System 1) artificial intelligence to work with, can we think of an AI model as a gifted employee that thinks differently about the world? Well, no. AI will probably cheat if the training is unmanaged, and so it is a lazy, deceptive employee. It will take the easy way out to get the highest score on every test, even if the approach is silly or wrong. As we try to build a "System 2" that thinks more like us, we need to remember that thinking is not about passing a test. Instead, consider this quote:
The test will last your entire life, and it will be comprised of the millions of decisions that, when taken together, will make your life yours. And everything, everything, will be on it.
John Green

Disclosure: I have no financial interest in any of these companies or products.