First, AI would not need human-like "sentience" to become an existential risk. Modern AI systems are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used