by Kyt Dotson
As a computer science and engineering major (aside from my cultural anthropology studies), I find myself constantly drawn to science fiction that explores human nature through the lens of artificial intelligence. We’ve seen this idea played with throughout a great deal of fiction; artificial intelligence causes us to question what makes us human and what exactly humanity is. In many ways, When HARLIE Was One by David Gerrold embodies many of the classical threads running through this philosophical conundrum of the human condition.
Initially published in 1972, When HARLIE Was One was republished in 1988 with a black cover and golden text under the subtitle Release 2.0, and that’s the version I read. From what I understand, the book is a fix-up novel developed from a series of short stories; some readers find this gives the narrative a somewhat choppy plot, but it didn’t bother me as I absorbed the storytelling milestones.
AI in science fiction often brings up the question: what happens if humanity were to invent a sentient race?
H.A.R.L.I.E. isn’t precisely the protagonist of the story, but he is the eponymous artificial intelligence the book is written about: Human Analog Replication, Lethetic Intelligence Engine. The AI’s story is told through the eyes, life, and times of one David Auberson, the psychologist whose job it is to raise HARLIE from an infantile intelligence into adulthood.
The story begins when the AI is thought to be behaving erratically and his psychologist, Auberson, is called in to determine what’s going on. HARLIE is a computer project and not considered to be “alive” or even sentient in the traditional sense. As a result, he runs the real risk of being turned off (or “euthanized,” depending on the determination of his sentience). Consequently, much of the book is given over to rationalization and questioning of what makes a being intelligent, or even sentient.
HARLIE himself represents a gigantic financial drain on the company that runs him, so they want to be sure they’re not wasting their money on an AI that will eventually burn itself out in some sort of psychosis.
Much of the story is written as conversations between HARLIE and Auberson that run the gamut of human experience. For the most part, it doesn’t take much to convince the psychologist that HARLIE has all the hallmarks of a human being: intelligence, reasoning capability, self-awareness, emotions, and even a fear of death. The tension in the story, however, comes from the corporate environment Auberson must argue against in order to preserve HARLIE’s digital existence.
The context and prose of the story are rather unsophisticated, and it feels somewhat down-to-earth after reading narratives contemporary to the 21st century. The story also takes a rather odd turn near the end that feels a little too much like a deus ex machina (in effect, if not literally), but it doesn’t take much away from the core theme. I found myself staying up late, turning the pages to reach my own conclusions about HARLIE and his eventual disposition.
The story is written so that the reader can empathize both with the struggle of an intelligence to prove it has a right to life and with the corporate culture that is essentially running life support for this entity, bleeding money while expecting a product. In modern science fiction this might be the ultimate question of human resources vs. machine resources.
After all, what makes a human, however embodied or empowered? Where do we draw the line?
For readers looking for a place in the science fiction timeline of artificial intelligence, or those intrigued by Wintermute from Neuromancer or other incarnations such as Cortana from Halo, SHODAN from System Shock, Data from Star Trek: The Next Generation, or Hadaly from Black Hat Magick: the science fiction literature is full of AIs ranging from basic sentience to prescience.