In response to Tyler's post "Love, love, love" (2/11/2013):
While I think the idea of learning to love oneself and other moral beings by practicing non-violence towards even nonmoral beings has merit, and is certainly interesting, it also raises a question: what, exactly, is harm? Obviously in the case of a sentient creature, like a dog, harm is something that causes pain or suffering, but what about in the case of, as Tyler mentions, a robot? The robot cannot experience pain and cannot suffer. It can 'malfunction,' but all that means is that it fails to act as it is supposed to, and who is doing that supposing but humans? If a human decides the robot's function in the first place, presumably they can later change that function. This could occur in a mild form (e.g. the original function is to type on a keyboard, and the later function is to play a piano) or an extreme form (e.g. the original function is to type, but the later one is to serve as a decorative centerpiece for someone's table after being melted down and smashed into an interesting shape). Can anyone decisively say whether any of these changes in function harms the robot? It cannot communicate or experience distress, and it cannot value itself. All its value stems from outside. As such, it cannot truly be harmed.
I responded to your post here: http://valueultimate.blogspot.com/2013/02/q-four-question-two.html