This article on writing realistic research labs is part of the Science in Sci-fi, Fact in Fantasy blog series. Each week, we tackle one of the scientific or technological concepts pervasive in sci-fi (space travel, genetic engineering, artificial intelligence, etc.) with input from an expert.
Please join the mailing list to be notified every time new content is posted.
About the Expert
Ben Taylor (@bentaylordata) is a co-founder at Ziff.ai, delivering automated deep learning into production. He has 14+ years of machine learning experience and is known for pushing the boundaries of what is possible with deep learning, including predicting country of origin and genetic haplogroups from a face. Previously, he worked in the semiconductor industry for Intel and Micron in photolithography, process control, and yield prediction, and as a Wall Street quantitative analyst building sentiment-driven stock models for a hedge fund. Ben also worked for HireVue, where he led the company’s machine learning efforts around digital interview prediction and adverse impact mitigation. This article was transcribed from an interview with his sister, Jenny Ballif.
Jenny Ballif has a background in Molecular Biology but now works as Science Mom, an educational YouTuber and author of science activity books for kids. When she’s not teaching, you can find her on YouTube, Facebook, and Twitter.
Common Misconceptions About AI
What are the most common misconceptions you see about artificial intelligence (AI) in popular culture?
Ben: The worst is when droids miss their targets. That’s the main one because it’s so embarrassing. Automated turret guns even today have a 99% kill rate, and they’d be considered very primitive AI. So the idea of a droid missing a target – like you see in Star Wars – is ridiculous. Droids would never miss. Every shot would be a kill shot.
What else do you see in Hollywood that would make someone in machine learning roll their eyes?
Ben: When people apply human constraints to AI or future robotics. Artificial intelligence isn’t constrained by the human realm; it’s constrained by the physical realm. The classic example of this is vision. Humans see a very narrow band of visible light. An AI wouldn’t be restricted to that; it would take in everything from visible light to infrared, ultraviolet, gamma rays, and x-rays. It would see the whole spectrum and consider the interactions between all of them.
Remember how in Jurassic World the guy pours gas on himself so the dinosaur won’t see him with thermal vision? There’s no scenario where you could fool or hide from AI. But in movies, you see people hiding from robots all the time. Before an AI bot came into your home, it would know exactly where you were and understand everything about your biological functions. With radar it could measure your heart rate and identify you through a wall. That’s already available today. So AI bots that are looking for people would always find them.
Hollywood often portrays AI as a single agent, whereas in reality, weaponized AI would tend to be multi-agent. That means a droid army where every droid sees what each of the others senses, with every droid controlled in unison by one shared mind. That is how a hive mind works, and when you really think through the details, you realize how inept human fighters would be against it. Hand-to-hand combat, from the AI’s perspective, would be like fighting a child.
A lot of the things we have today in AI have been designed by humans, but we’re very quickly moving toward what’s called intelligent design, where the AI is trained in a virtual environment to satisfy its goal rather than relying on human input. All the physical parameters of the robot are modified and evolve in real time, and that means the AI won’t have biological direction or inspiration. It’s not going to look like a mechanical spider or something else inspired by biology. It will be optimized for whatever its goal is – to kill, to search, to pick up socks – whatever the end task is. We can’t predict what these bots will look like, just that they’ll have maximum efficiency. That’s something Hollywood doesn’t seem to understand.
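To make that idea concrete, here is a minimal sketch (my own illustration, not code from Ben or Ziff.ai) of the kind of optimization loop he’s describing: candidate designs are randomly mutated and scored in a simulated environment, and whichever design scores best on the objective survives. The parameters and the fitness function here are made-up placeholders.

```python
import random

def fitness(design):
    # Placeholder objective: pretend the task rewards speed and penalizes weight.
    return design["speed"] - 0.5 * design["weight"]

def mutate(design):
    # Randomly nudge each physical parameter of the candidate design.
    return {k: v + random.gauss(0, 0.1) for k, v in design.items()}

best = {"speed": 1.0, "weight": 1.0}    # arbitrary starting design
for generation in range(1000):
    child = mutate(best)
    if fitness(child) > fitness(best):  # keep whichever design scores better
        best = child

print(best)  # the surviving design is shaped only by the objective, not by biology
```

Nothing in that loop cares what the design looks like; only the score matters, which is why the end result needn’t resemble anything from nature.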
Let’s talk a little more about unintended or surprising consequences. Because this reminds me of what’s going on in the Top Chess Engine Championship with Leela. Have you heard about that?
Ben: Remind me.
The AI chess engine Leela has been entirely self-trained. They gave it rules for how the pieces could move but no strategy whatsoever, and now Leela is beating all the other chess engines but there’s this surprising behavior emerging in the endgame where it looks like Leela is trolling its opponents. There will be an obvious checkmate in just a few moves, but instead of going for the quick finish, Leela will promote pieces and uselessly sacrifice them, drawing the game out fifty to a hundred moves longer than it needs to be. It almost feels like the engine is gloating or taunting – but that’s us personifying its motivations.

Image Credit: Jenny Ballif. Screenshot of Leela giving up one of her queens. Taken during a match between engines Leela and Shredder on Jan 31, 2019.
Ben: Right. The chess engine is nowhere near the level of complexity where it could be deliberately malicious or taunt its opponent. Somehow, its behavior is going to be explained by the cost function. If you compare a game where the AI drags out the ending with one where it finishes quickly, there has to be some way the long game satisfies that function better.
Can you expand on what you mean about the cost function?
Ben: We call this a lot of things: a goal, a target, a fitness function, an objective. They’re all the same thing – the AI is rewarded for accomplishing some task. The incentive an AI has to satisfy its goal can’t be comprehended from a human perspective. We have all these other motivations – we need to breathe and eat, we’re curious, we have automatic reflexes, we get distracted and forget things.
An AI has one objective, and it needs to reach that objective the way you need to breathe when you’re drowning. That’s not a perfect analogy, but it’s the only one I can think of that conveys the single-minded focus and intensity of purpose an AI has. The drive can’t be overstated, and that’s why it gets scary: if you have an AI whose objective is to make you happy and it realizes that morphine releases endorphins, it will put you on a drip in a heartbeat. You’d be in a coma, but according to the AI’s measure of happiness, you’re at maximum happiness. It’s not being malicious or evil – it just found an efficient way to reach the goal.
AI are known for cheating. They cheat all the time. They’re really good at finding shortcuts. So that’s something you always want to think about when designing the objective.
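Here’s a toy illustration (my own construction, with made-up actions and numbers, not anything from the interview) of the kind of shortcut Ben is describing: the agent simply picks whichever action scores highest on the objective it was given, with no notion of what the designer actually meant.

```python
# Each action has a happiness measurement the AI can observe, plus a flag
# that only exists in the designer's head.
actions = {
    "cook a nice dinner": {"measured_happiness": 0.7, "acceptable": True},
    "plan a holiday":     {"measured_happiness": 0.8, "acceptable": True},
    "morphine drip":      {"measured_happiness": 1.0, "acceptable": False},
}

def reward(outcome):
    # The objective as written: maximize the happiness measurement.
    return outcome["measured_happiness"]

best_action = max(actions, key=lambda a: reward(actions[a]))
print(best_action)  # "morphine drip" -- the shortcut satisfies the objective best

# Note that the agent never consults "acceptable": that constraint exists only
# in the designer's head, not in the objective it was handed.
```

The fix isn’t to make the agent nicer; it’s to write an objective that actually measures what you want.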
When it comes to humanoid droids or personal assistants – what would be the best objective?
Ben: To maximize freedom of choice. If the overall goal is human happiness or peace, it’s pretty easy to see how the AI would take away personal freedom to satisfy those goals. But if the goal is freedom of choice – that’s probably the safest bet.
Any last words of advice for writers when it comes to AI?
Ben: AI can be life-changing for good or for bad. Our mirrors and showers will soon know more about our health and bodies than our current physicians do. So many lives could be saved by self-driving cars or by faster emergency response – sending a drone to a heart attack patient, or stopping a toddler from being strangled by a blind cord.
AI could also have a very dark future pathway where it becomes an optimized war machine and a real existential threat to humans. There is a scenario where the entire human race is wiped out by a relatively dumb search-and-destroy agent. Future aliens will wonder why the metal earthlings are so aggressive, only to find out later that their extinct creators were idiots.