“A major issue with using humans to provide ground truth for AI is that humans are not perfect either. There need to be processes for evaluating human judgement in parallel with machine judgement; otherwise the AI can end up learning the subjectivities of individual reviewers rather than the underlying task.
Both the confidence and the decision of a sufficiently sophisticated AI can be manipulated using adversarial techniques. A terrorist who is blocked by Facebook is more likely to switch to another platform than to bypass the AI, but Facebook can never completely remove terrorist content.”
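On the first point, one way to evaluate human judgement in parallel with machine judgement is to measure chance-corrected agreement between reviewers, and between each reviewer and the model, for example with Cohen's kappa. The sketch below is a minimal illustration; the reviewer names and labels are hypothetical, not real moderation data.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two labellers (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where the two labellers agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each labeller chose independently at their own rates.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two human reviewers and the model on the same items.
reviewer_1 = ["block", "ok", "block", "ok", "block", "ok"]
reviewer_2 = ["block", "ok", "ok",    "ok", "block", "ok"]
model      = ["block", "ok", "block", "block", "block", "ok"]

print("human vs human:", cohen_kappa(reviewer_1, reviewer_2))
print("model vs human:", cohen_kappa(model, reviewer_1))
```

A reviewer whose kappa against their peers is persistently low is a candidate for retraining or down-weighting before their labels are fed back to the model.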
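On the second point, a standard adversarial technique is to perturb an input along the sign of the loss gradient (the fast gradient sign method), which can shift both the model's confidence and its decision. The sketch below uses a toy PyTorch classifier as a stand-in for a real moderation model; the architecture, input, and epsilon are illustrative assumptions, not any platform's actual system.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Fast Gradient Sign Method: nudge the input in the direction that
    most increases the classifier's loss, within an L-infinity budget."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical toy classifier standing in for a content-moderation model.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 10)   # an input the model currently flags
y = torch.tensor([1])    # class 1 = "blocked" content

x_adv = fgsm_perturb(model, x, y, epsilon=0.1)
with torch.no_grad():
    print("original scores: ", torch.softmax(model(x), dim=1))
    print("perturbed scores:", torch.softmax(model(x_adv), dim=1))
```

Even when the perturbation does not flip the decision, it erodes the confidence score, which is why an attacker with query access can probe for inputs that slip past the filter.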