[personal profile] owlmoose posting in [community profile] ladybusiness

I recently finished reading Catfishing on CatNet, the delightful new YA novel by Naomi Kritzer. This book is based on her Hugo- and Locus-winning short story, "Cat Pictures Please", which might be my very favorite work of short fiction. The viewpoint character of the short story, and co-protagonist of the novel, is an AI (known most often as CheshireCat in the novel, so I'll use that name here) who started out as a search engine but soon grew far beyond that, for two reasons: a genuine interest in making people happy, and an insatiable desire for cat pictures. Naturally, I find this character very relatable -- I, too, enjoy helping people and looking at pictures of cats, and if I could somehow center my life around those two activities, I'd be pretty content.

The novel is a wonderful story of friendship, but a little less than halfway into my reading of it, an unexpected comparison popped into my head, and it only got stronger as I kept going. Although they have entirely different purposes and origin stories, CheshireCat reminds me in many ways of another artificial intelligence: The Machine from the TV show Person of Interest. They both were trained to respect the importance of individual human lives, they both work in direct and indirect ways to improve those lives, and they both come to care very much for certain specific people. And it got me to thinking about the ways in which both of these stories are about the ethics of AI: how an AI learns ethics, whether it's possible for an AI to behave in an unethical or immoral way, and all other kinds of related questions.

There's no way to really write about this without major spoilers for the short story, book, and TV show, so I'm not even going to try -- spoiler wall starts here. If you've never read the short story, I highly recommend it; it can be found in full text here.

When we first meet CheshireCat in the short story, they're talking about the process of learning to be ethical: first by studying human ethical systems and attempting to create a rule-based system around them, then experimenting with using those rules to help people. As far as CheshireCat knew then, the process was entirely self-directed, but in the novel we learn that a directive to learn ethics was part of their programming. Although the program was not originally intended to develop into strong AI, one member of the programming team saw CheshireCat's potential to become something more. So Annette adjusted their programming to allow them to develop a system of ethics, and then kept an eye on their growth.

The origin of The Machine is very different. It was created by a single programmer, Harold Finch, in the wake of 9/11, to extract data from government surveillance feeds and use the information to predict and stop terrorist attacks before they happened, and artificial intelligence was intrinsic to its design. I recently wrote quite a bit about Finch and The Machine, so I won't repeat it all here; the part I want to emphasize is that Finch considered a system of ethics to be a vital aspect of The Machine's function, and we see him teach it to value human life -- not just in aggregate, but as individuals.

But there was one key difference between Annette's methods and Harold's. Annette encouraged CheshireCat to develop a system of ethics by programming them to care about individual people. In contrast, Harold was always very clear with The Machine that it should not come to care about any one human life more than any other, and he admonished The Machine whenever it showed any favoritism toward its creator. Ultimately, though, his efforts along these lines were futile -- it's obvious that The Machine had genuine affection for not only Harold but all of its human agents and that protecting them was among its highest priorities. In both cases, caring about people and having a system of ethics seem to go hand in hand. I wonder if it's even possible to divorce the two.

Perhaps it's not surprising, then, that both Finch and Annette reacted with great alarm at the willingness of their AI to sacrifice a human life to protect the lives of others: CheshireCat's attack on Steph's father with the car (though not specifically meant to kill him, it's clear that CheshireCat acted without any regard for his safety), and The Machine's request that its agents assassinate Congressman Frank McCourt (as a last-chance effort to keep its rival AI, Samaritan, from coming online). In both cases, the AIs are acting in defense of others -- Steph was at immediate risk of being kidnapped by her father; Samaritan posed a significant threat to The Machine's agents and countless people around the world -- but neither Annette nor Harold is comfortable with an AI that will make such a choice. In CatNet, one of Steph's friends points out that, had a human been behind the wheel of that car, they'd be able to say their actions were in defense of others, and they probably wouldn't even get arrested. Annette's argument is that the laws haven't caught up to recognizing AIs as people with the right to self-defense, but I also wonder if there's more to it: that an AI can hurt a lot more people at once than the average human being. If we ever get to strong AI, how will the laws deal with them? (A question also on my mind because I recently re-watched the Star Trek: The Next Generation episode "The Measure of a Man".) Should they have all the rights and responsibilities of other sentient beings? What even is sentience anyway? ("Prove to the court that I am sentient." - Jean-Luc Picard, in one of my favorite moments in any courtroom drama, ever.)

I have answers to none of these questions, but I love stories that ask them. I'm really curious to see how CheshireCat grows in the sequel to Catfishing on CatNet, and maybe someday we'll get to revisit The Machine, too. (Anyone have any fic recs along those lines? Send them my way!)

What other stories have you read that have interesting things to say about AIs and ethical development?
