This article on artificial intelligence is part of the Science in Sci-fi, Fact in Fantasy blog series. Each week, we tackle one of the scientific or technological concepts pervasive in sci-fi (space travel, genetic engineering, artificial intelligence, etc.) with input from an expert.
Please join the mailing list to be notified every time new content is posted.
About the Expert
Dan Rowinski is an experienced technology journalist who has covered the rise of modern machine learning and artificial intelligence. He has interviewed and collaborated with numerous top artificial intelligence researchers and has been recruited by major technology firms to write about their AI initiatives. You should follow him on Twitter.
The Current State of Artificial Intelligence
Conscious computers run amok. All-powerful machines that have decided humanity must die or be enslaved. Robot overlords who have come to dominate the human race.
These are the things we think about when someone says the words "artificial intelligence." Especially in science fiction. We think of HAL 9000 in 2001: A Space Odyssey suddenly deciding he is going to do whatever the hell he wants. Arnold as the Terminator, or Ultron from The Avengers, and the manic dream of a race of killer robots hell-bent on wiping out humanity. Don't tell me you have never considered the possibility that The Matrix might be real.
Artificial intelligence in science fiction has long been defined by conflict between human and machine. In many ways, there is absolutely nothing wrong with this. Mortals and machines have risen together through history, in a constant tension between the animate and the inanimate that has been mutually beneficial.
Fiction about artificial intelligence has always tended to anthropomorphize machines. It's an easy literary construct for creating a villain: give a machine a body, a voice and a mind, and let it drive the action. Oftentimes it is a MacGuffin in service of the greater theme of humanity's own foibles. I like these kinds of stories. I have written these kinds of stories. They serve a purpose in helping us examine our own natures vis-à-vis the tools that make our species unique.
But rarely have I seen a story that treats artificial intelligence for what it truly is: mundane computer science.
In this article we will look at the foundational principles of artificial intelligence, its most basic theories and its immediate future to help writers understand its true nature, potential and limitations.
Where AI Has Come From

In the Turing Test, the interrogator (C) is tasked with determining which player (A or B) is a computer and which is a human.
The idea of modern artificial intelligence comes from the earliest work on computers, starting in the 1930s. Alan Turing, the founder of modern computing, postulated in a seminal 1950 paper that it would be theoretically possible to build computers that "think." The test he proposed in that paper, now known as the Turing Test, has become the de facto first step in determining whether a computer has achieved a level of artificial intelligence.
The idea is to test a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. At its most basic, the Turing Test asks whether a computer, in conversation with a human, can trick its interlocutor into believing it is a human and not a machine.
The Turing Test is a useful proxy for a computer's degree of sophistication, but its results can only tell us whether a computer has learned to effectively imitate a human. It tells us nothing about whether that computer is "thinking" or has come to possess a degree of consciousness.
The first computer scientists to postulate a model for a thinking computer were Walter Pitts and Warren McCulloch in the 1940s. Pitts and McCulloch studied how neural activity in the human brain functioned and attempted to create a digital model that would allow a computer to perform human functions, like seeing, hearing and understanding language. They were essentially trying to recreate the human brain. Their model of artificial neurons led to the "perceptron," an idea later developed by Frank Rosenblatt: artificial neurons layered in such a way that an input introduced at one end of the system could be identified at the other end as an output. Each layer of neurons passed information to the next, and so on down the line, until a likely output was produced.
The systems—algorithms, in effect—were called “neural networks.”
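To see how modest that core idea really is, here is a minimal sketch in Python of a single artificial neuron of the kind Pitts, McCulloch and later Rosenblatt described. The weights and threshold below are made-up numbers purely for illustration; a real network learns these values from data.

```python
# A minimal sketch of a single artificial neuron (perceptron-style unit).
# The weights and threshold here are made up purely for illustration.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a toy "detector" with two inputs.
# Layering many of these units, so the outputs of one layer become the
# inputs of the next, is the basic idea behind a neural network.
print(neuron([1, 0], weights=[0.6, 0.4], threshold=0.5))  # -> 1
print(neuron([0, 1], weights=[0.6, 0.4], threshold=0.5))  # -> 0
```

That is the whole trick: weighted sums and thresholds, stacked in layers. Everything else is scale.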
The initial progress was intriguing. But work on neural networks stalled through the 1960s, and in 1969 the researcher Marvin Minsky (with Seymour Papert) made a forceful case that perceptrons were not remotely feasible given the technology available at the time. Minsky argued that the neural network model required too much computing power and did not solve the fundamental problem of creating a computer that could use common sense and human-level reasoning.
Furthermore, Hans Moravec observed in the 1980s (in what became known as "Moravec's Paradox") that "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility." In essence, it is easy to build computers with superior logic capabilities (like the ability to play chess), but incredibly difficult and resource-intensive to build computers that can perform human-level physical activities and perception.
Thus, artificial intelligence research entered the first of two significant periods of disillusionment, known in the industry as "AI winters." Funding dried up, progress was incremental to non-existent, and research into other aspects of computer science accelerated. The first AI winter lasted from the early '70s until the mid-'80s; the second from the early '90s until about 2010.
The New Golden Age of AI
The term “artificial intelligence” in contemporary times is a bit of a misnomer. It functions more as an umbrella term for several different kinds of technology which work to give computers and machines more intelligent capabilities.
Modern artificial intelligence includes a variety of different techniques which perform unique functions to give computers new capabilities. Included are the concepts of machine learning, deep learning, big data, neural networks, cognitive computing and more.
Modern artificial intelligence is the answer to the question, “what happens when you give machines endless compute power and infinite data?”
Despite what science fiction might tell us, answering that question does not—and will not—lead to new robot overlords (at least, not in the literal sense). The practical use of artificial intelligence today (which includes all of its various categories of technology) is focused on making tasks easier for industries, companies and individuals. From a commercial standpoint, machine learning is often used to solve personalization problems like more targeted advertising or better product recommendations. There is so much data in the world that the only way to handle it is for machines running intelligent algorithms to help people make decisions.
But what about computers which can see, hear and understand? Most of the cutting-edge work in artificial intelligence is being done with neural networks for the purpose of observation. Various kinds of neural networks (convolutional, recurrent, long short-term memory, generative adversarial, etc.) are employed by companies like Google, Microsoft, Facebook and IBM for the purpose of correctly identifying images, speech or text. For instance, Microsoft has built chips (field-programmable gate arrays) running algorithms that can translate all of Wikipedia in a matter of seconds.
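For a sense of what "correctly identifying images" looks like in practice, here is a minimal sketch of a tiny convolutional network built with the open-source PyTorch library. The layer sizes and the 28x28 fake image are arbitrary toy choices for illustration, not any company's production system.

```python
# A toy convolutional network for classifying small grayscale images,
# sketched with the PyTorch library. Layer sizes are arbitrary choices
# made for illustration only.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learn 8 small visual filters
    nn.ReLU(),
    nn.MaxPool2d(2),                            # shrink the image by half
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # score 10 possible labels
)

fake_image = torch.randn(1, 1, 28, 28)          # one 28x28 grayscale "image"
scores = model(fake_image)                      # ten raw scores, one per label
print(scores.shape)                             # torch.Size([1, 10])
```

Untrained, this network's guesses are random noise; everything useful it ever does comes from being shown enormous numbers of labeled examples.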
You use different kinds of these machine learning technologies every day: when you make a Google search (data mining, optimization and personalization) or ask Siri a question on your iPhone (natural language understanding). Interesting work is being done in the field of augmented reality, the ability to overlay digital content and information on top of the physical world. Industry is moving toward automation, where computers analyze data and make decisions in seconds that might have taken a human weeks. What we now think of as artificial intelligence is everywhere.
But there is a long way to go, even for the simplest of problems. One example is illuminating: Facebook uses neural networks to teach computers to understand how people move in pictures and video, a practice called human pose estimation.
Think about that for a second. We need to teach computers (through training on thousands to millions of data points) to recognize whether a human is sitting down or standing up, waving a hand or walking. That's something a human-level intellect can understand by the age of two. For all of their power and stunning ability with logic-based computation, the machines are not really all that smart when it comes to human-level cognition and observation.
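In code terms, "teaching" a computer simply means fitting a model to a large pile of labeled examples. Here is a sketch using the scikit-learn library; the body-joint numbers below are random stand-ins, and a real pose system would need far more, and far richer, labeled data than this.

```python
# A sketch of how "teaching" a computer works: show it many labeled examples
# and fit a model. The data here is random stand-in for real labeled poses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Pretend each row is a set of body-joint coordinates from one photo,
# and each label says whether the person is sitting (0) or standing (1).
keypoints = rng.normal(size=(1000, 34))      # 17 joints x (x, y) per example
labels = rng.integers(0, 2, size=1000)       # stand-in labels

model = LogisticRegression(max_iter=1000).fit(keypoints, labels)
print(model.predict(keypoints[:3]))          # guesses for three examples
```

All of the "intelligence" lives in the labeled examples and the statistics fit to them. There is no understanding anywhere in that loop.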
The machines are not taking over. At least not any time soon.
The Near-Term Future of AI
People dream of The Singularity, the moment when computers gain consciousness and human-level intellect (or greater) and begin evolving and replicating on their own. What happens from there is left to the minds of futurists, technologists, analysts, authors and screenwriters.
The idea of sentient computers is massive and complicated, best suited for another article. Suffice it to say that, as of right now, we have absolutely no idea how to build a sentient machine, or even how to approach the concept of "artificial general intelligence," defined as the ability of a computer to successfully perform any intellectual task that a human can. Any science fiction you watch or read that features a sentient, conscious computer glosses over the hard truths of modern computer science and neuroscience to bring machines to life, as if by some kind of magic. Very few pieces of fiction dwell on the "how" of artificial general intelligence; they just jump to the consequences.
Thus, if we have no idea how to build conscious computers, a more apt question becomes: what will the limited artificial intelligence of today evolve into in the years to come?
The answer: automation.
The acceleration of innovation in artificial intelligence—as we now understand it—will be the key to the end of The Information Age and the beginning of The Autonomous Age. In the broad strokes of history, The Information Age began with the advent of the printing press (a machine) around 1450, which helped spread knowledge, communication, data and facts across the world. We now carry devices in our pockets that can access almost any kind of data within seconds or contact anybody in the world. This is the logical conclusion of The Information Age.
The next age will be one in which our computers and machines do many tasks for us, guided by the principles of optimization and efficiency and by massive amounts of data and empirical observation. Name any form of human industry or labor you can think of and you will find an avenue where algorithms can improve its processes. And yet each individual machine will be limited in scope. An algorithm designed to harvest crops, for example, will not be able to turn around and perform content marketing optimization. The near-term future (the next fifty years, at the very least) will be filled with many narrow artificial intelligences that perform specific tasks. The creation of an artificial general intelligence that could perform many kinds of tasks—an absolute prerequisite for artificial sentience—is not yet within our sights.