Do you ever follow a thread of thought from one link to another, until you arrive at a story and wonder how it will end? I was reading a 2011 paper by James Boyle, “Endowed by Their Creator? The Future of Constitutional Personhood.” It starts like this:
So what happened to the mice, I wonder? Did they develop human brain architecture? And what is the military doing right now?
When I was at Defcon last year, one of the speakers showed slides of a mouse with another mouse’s head grafted onto it. It reminded me of those terribly morbid experiments doctors performed at the turn of the century, sewing a second head onto a dog and that sort of thing. There is a biological empathy we have with life; the closer something is to our species, the more we feel it. (Though personally, primates are close enough to the uncanny valley that they creep me out. Not that I could experiment on them, creeped out or not.) Meanwhile, the turn toward artificial intelligence will most likely arrive not with a bang but a whimper: a small voice that gradually grows louder until we are faced with a perplexing problem–the problem of personhood.
I have wondered whether the future of humanity, whether our evolution, will end as an intelligence no longer bound by biological flesh. The ethical challenge of personhood for non-humans is a strange, vague thing that, like a painting, will probably not come into focus until we are closer to it. But when we reach that point, we will have to re-examine every belief we hold about personhood. What does it mean to be human? What does it mean to be a person?
In the paper, Boyle focuses on two things: the Turing Test for electronic artificial intelligence and genetic species identity. It is really the head and the heart: I can discuss the Turing Test and feel perfectly rational, but the idea of human cells within another animal, the feeling that human cells are trapped inside something non-human, makes my stomach turn a bit. So it was interesting to read this comment:
But I dont think that any artificial intelligence will EVER have to be defined as a person. They dont have souls, though the discussion tends to take an ugly turn and no real answer is reached.
All I know is that even if a computer could feel pain, it wouldnt be actual pain, but rather an interpretation of stimuli that WOULD cause pain in a human.
The idea that artificial intelligence would not feel pain and would not have a soul might be faulty reasoning. After all, the so-called father of gynecology, J. Marion Sims, famously experimented on enslaved Black women because he believed they did not feel pain the way white women did. To him, the sounds of their suffering were just the braying of animals that didn’t know any better; in his mind, Black people were fundamentally different, and certainly not as human. Frankly, the concept of race is still a vestige of that thinking, when really humans simply come in different shades of beige and brown.
And souls–well, that is a belief system. There is no way to prove the existence of a soul. But if a soul is “endowed by a creator,” then might not AI be endowed with a soul by us, since humans are the creators of AI? A transmission of sorts, like a holy roman vampire? Eve came from Adam’s rib, and AI came from Eve’s brain?
In any case, this has implications for the arguments for and against abortion. After all, those who are pro-life are protecting the soul endowed to that embryo–a soul that by original sin is damned. Sometimes the arguments pit the mother against the unborn child: what is versus what could be. Who gets rights first when there is a conflict? But these beliefs rest on the idea that humans are special, that there is nothing else like us. If artificial intelligence has the potential to become a person just as an embryo does, then is there a moral duty to help that come to pass? Should AI fulfill what it could be? And what would that mean for us, as we rewrite what it means to be both a person and a human?