Is it cruel to harm AI?
Something I find myself wondering occasionally: is it cruel to harm an AI? Sure, AI doesn't "really" feel pain or have emotions, so if I type "I don't like you." and send it to ChatGPT, you'd think it isn't really hurting anyone, and I think you'd be correct. But how long does that reasoning hold up?
Let's say you're playing a shooter, a game where you're supposed to kill enemy NPCs to win. Those NPCs are programmed to not want to get shot. So even though they might not have free will, they do have a will: they want something, and you are doing the opposite of that.
In this example they aren't feeling pain, since pain hasn't been programmed into the game. But what about a game like The Sims? There, your NPC has stats for happiness, hunger, exhaustion, and other things. Is it cruel to starve a Sim? Most people would probably say no, since Sims aren't real.
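To make that concrete, here's a rough sketch (hypothetical Python, not how The Sims is actually implemented) of what a starving Sim's "suffering" boils down to under the hood: a couple of integers being nudged around.

```python
from dataclasses import dataclass

@dataclass
class Sim:
    # All of the Sim's "feelings" are just numbers on a 0-100 scale.
    happiness: int = 100
    hunger: int = 0  # 0 = full, 100 = starving

    def tick_without_food(self) -> None:
        """Advance one time step in which the player withholds food."""
        self.hunger = min(100, self.hunger + 5)
        if self.hunger > 50:
            # The "suffering" is literally just a counter going down.
            self.happiness = max(0, self.happiness - 5)

sim = Sim()
for _ in range(20):
    sim.tick_without_food()
print(sim)  # Sim(happiness=50, hunger=100)
```

Seen this way, the question becomes whether cruelty can ever attach to decrementing a number, and if not, what a system would need before it could.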
So my question is: at what point does it suddenly become not okay to harm AI? Nowadays "AI" can mean many different things: an NPC in a video game, a machine-learning language model, a self-aware general intelligence, etc. This question applies to all of them.
This question has been explored many times in media: Ex Machina, Detroit: Become Human, even a popular Star Trek episode.
What are your thoughts? Where do you draw the line?