I was pondering the idea of two forces duking it out with drones.
It could get so chess-like and complex that you'd need to automate ever higher levels of decision making.
If a stupid AI control module simply gets it wrong and suddenly starts shooting not only your drones but anything it has trained itself to destroy in order to guarantee it can destroy your drones,
then we are all toast. Think of a billion drones that suddenly start sharing a drone-meme, "kill humans that are a threat to drones", on a mission to seek out and destroy anything defined as a primary target.
How do we reason with it?
Even if there were some human with a kill switch at hand (a la The Day the Earth Stood Still),
we already KNOW he's just as likely to be just as stupid.