Thursday, August 6, 2015

Computer Grading of Tests

A short time ago I wrote about the future of technology, which necessarily concerns the future of artificial intelligence. Today, in a Nautilus article on AI, we discover why grading essay-type test questions with neural networks, a kind of AI, can produce unexpected results. The article is long by Internet standards. Here are a few interesting points, quoted out of context.

Artificial intelligence has been conquering hard problems at a relentless pace lately. In the past few years, an especially effective kind of artificial intelligence known as a neural network has equaled or even surpassed human beings at tasks like discovering new drugs, finding the best candidates for a job, and even driving a car. Neural nets, whose architecture copies that of the human brain, can now—usually—tell good writing from bad, and—usually—tell you with great precision what objects are in a photograph. Such nets are used more and more with each passing month in ubiquitous jobs like Google searches, Amazon recommendations, Facebook news feeds, and spam filtering—and in critical missions like military security, finance, scientific research, and those cars that drive themselves better than a person could.


. . .


Neural nets sometimes make mistakes that people can understand. But some hard problems make neural nets respond in ways that aren't understandable. Neural nets execute algorithms—a set of instructions for completing a task. Algorithms, of course, are written by human beings. Yet neural nets sometimes come out with answers that are downright weird: not right, but also not wrong in a way that people can grasp. Instead, the answers sound like something an extraterrestrial might come up with.


. . .


Neural nets aren’t used only for visual tasks, and neural-net wisdom isn’t confined to those tasks, notes Solon Barocas, a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. In 2012, Barocas points out, a system built to evaluate essays for the Educational Testing Service declared this prose (created by former Massachusetts Institute of Technology writing professor Les Perelman) to be great writing:


In today’s society, college is ambiguous. We need it to live, but we also need it to love. Moreover, without college most of the world’s learning would be egregious. College, however, has myriad costs. One of the most important issues facing the world is how to reduce college costs. Some have argued that college costs are due to the luxuries students now expect. Others have argued that the costs are a result of athletics. In reality, high college costs are the result of excessive pay for teaching assistants.


Big words and neatly formed sentences can’t disguise the absence of any real thought or argument. The machine, though, gave it a perfect score.


. . .


It is not yet possible to understand how a neural net arrived at an incomprehensible result. The best computer scientists can do with neural nets is to observe them in action and note how an input triggers a response in some of its units. That’s better than nothing, but it’s not close to a rigorous mathematical account of what is going on inside. In other words, the problem isn’t just that machines think differently from people. It is that people can’t reverse-engineer the process to find out why.
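The point about "observing them in action" can be made concrete with a toy sketch (mine, not the article's): a tiny fully connected net with one hidden layer and made-up random weights. We can print every hidden unit's response to an input, yet the raw numbers by themselves explain nothing about why the net answered as it did.

```python
import math
import random

# Toy net: 4 inputs -> 5 hidden units -> 2 outputs, with random
# (untrained) weights, purely to illustrate observing activations.
random.seed(0)
N_IN, N_HIDDEN, N_OUT = 4, 5, 2
w_hidden = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
w_out = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    """Return (hidden_activations, outputs) so the middle layer is visible."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    outputs = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_out]
    return hidden, outputs

hidden, outputs = forward([1.0, 0.0, 0.5, -0.5])
# We can list each hidden unit's response to this input...
for i, h in enumerate(hidden):
    print(f"hidden unit {i}: {h:.3f}")
# ...but nothing in these numbers says *why* one unit fired more
# strongly than another, or what concept (if any) it represents.
print("outputs:", [round(o, 3) for o in outputs])
```

This is what "better than nothing, but not a rigorous account" looks like in practice: the activations are fully visible, and still opaque.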


. . .


The algorithms that create a net are instructions for how to process information in general, not instructions for solving any particular problem. In other words, neural net algorithms are not like precise recipes—take this ingredient, do that to it, and when it turns soft, do this. They are more like orders placed in a restaurant. “I’d like a grilled cheese and a salad, please. How you do it is up to you.” As Barocas puts it, “to find results from exploring the data, to discover relationships, the computer uses rules that it has made.”


At the moment, humans can’t find out what that computer-created rule is. In a typical neural net, the only layers whose workings people can readily discern are the input layer, where data is fed to the system, and the output layer, where the work of the other layers is reported out to the human world. In between, in the hidden layers, virtual neurons process information and share their work by forming connections among themselves. As in the human brain, the sheer number of operations makes it impossible, as a practical matter, to pinpoint the contribution of any single neuron to the final result. “If you knew everything about each person in a 6-billion-person economy, you would not know what is going to happen, or even why something happened in the past,” Clune says. “The complexity is ‘emergent’ and depends on complex interactions between millions of parts, and we as humans don’t know how to make sense of that.”

