Artificial intelligence is the stuff of sci-fi nightmares — what happens when the machines decide they would be better off or more efficient without us? Sometimes it seems as though that fiction is becoming reality: driverless cars have already been involved in two human deaths. But how does an artificial intelligence decide who dies in an unavoidable accident? Can it be programmed with ethics to help it make the best decision?
That’s a lot harder than you might think, because human bias gets baked right into any ethical algorithm. Learn more about ethical AI from the infographic below!