The assassination of Iranian scientist Mohsen Fakhrizadeh might have been conducted by a remote-controlled weapon. While this is still a conventional assassination, it does raise the specter of autonomous assassination automatons—assassin bots. In this context, an assassin bot would be capable of conducting its mission autonomously once it was deployed. Simple machines of this kind already exist—one could think of the land mine as an autonomous assassination device: once deployed, it “decides” to activate according to its triggering mechanism. But when one thinks of a proper assassin bot, one thinks of a far more complicated machine—one capable of seeking and killing its target in a sophisticated manner. It could also be argued that a mine is not an assassination machine: while it can be placed in the hopes of killing a specific person, it lacks the hallmark of an assassin. That is, it does not seek out a specific human target to kill. As such, a proper assassin bot would need to be able to identify its target and attempt to kill it. To the degree that the bot can handle this process without human intervention, it would be autonomous.

The idea of assassin bots roaming about killing people raises moral concerns. While the technology would be new, there would be no new moral problems here—with one possible exception. The primary ethical matters of assassination involve questions about whether assassination is morally acceptable and debates over specific targets, motivations, and consequences. But unless the means of assassination is especially horrific or indiscriminate, the means are not of moral concern—what matters morally is that some means is used to kill a person, be those means a punch, a poniard, a pistol, or poison. To illustrate, it would be odd to say that killing Mohsen Fakhrizadeh with a pistol would be acceptable but killing him just as quickly and painlessly with a knife would be wrong. Again, methods can matter in terms of being worse or better ways to kill, but the ethics of whether it is acceptable to assassinate a person are distinct from the ethics of what means are acceptable. Because of this, the use of assassin bots would be covered by established ethics: if assassination is wrong, then the use of robots would not change this. If assassination can be morally acceptable, then the use of robots would also not change this—again, unless the robots killed in horrific or indiscriminate ways.

There seem to be two general ways to look at using assassin bots in place of human assassins. The first is that their use would remove the human assassin from the matter. To illustrate, a robot might be sent to poison a dissident rather than sending a human. As such, the moral accountability of the assassin would be absent—though the moral blame or praise would remain for the rest of the chain of assassination. Whether, for example, Vlad sent a human or a robot to poison a dissident, Vlad would be acting the same from a moral standpoint.

The second is that the assassin bot does not remove the assassin from the moral equation, but it does change how the assassin does the killing. To use an analogy, if an assassin kills targets with their hands, then they are directly engaged in the assassination without even the intermediary of a weapon. If an assassin uses a sniper rifle and kills the target from hundreds of yards away, they are still the assassin—they have directed the bullet to the target. If the assassin sends an assassin bot to do the killing, then they have directed the weapon to the target and are the assassin—unless the assassin bot is a moral agent and can be accountable in ways that a human can be, and a sniper rifle cannot. Either way, the basic ethics do not change. But what if humans are removed from the loop?

Imagine, if you will, the algorithms of assassination encoded into an autonomous AI. This AI uses machine learning, or whatever is currently in vogue, to develop its own algorithms to select targets, plan their assassinations, and deploy autonomous assassin bots. That is, once humans set up the system and give it basic goals, the system operates on its own.

The easy and obvious moral assessment is that the people who set up the system would be accountable for what it does. Going back to the land mine example, this system would be analogous to a very complicated land mine: while it would not be directly activated by a human, the humans involved in planning how to use it and in placing it would be accountable for the death and suffering it causes. Saying that the mine went off when it was triggered would not get them off the moral hook—the mine has no agency. Likewise for the assassination AI: it would trigger based on its operating parameters, but the humans would still be accountable for what it does to the degree they were involved. Saying they are not responsible would be like an officer who ordered land mines placed on a road claiming that they are not accountable for the deaths of the civilians killed by those mines. While it could be argued that this accountability differs from that which would arise from killing the civilians in person with a gun or knife, it would be difficult to absolve the officer of moral responsibility. The same holds for those involved in creating the assassin AI.

If the assassin AI developed moral agency, then this would have an impact on the matter—it would then be an active agent in the process and not merely a tool. That is, it would change from being like a minelayer to being like the humans in charge of deciding when and where to use mines. Current ethics can, of course, handle this situation: the AI would be good or bad in the same way a human would be in the same situation. Likewise, if the assassin bots had moral agency, they would be analogous to human assassins.

 



