Artificial Intelligence and Discrimination (White Paper)
The decisions of computer programs appear fact-based, objective, and neutral. In practice, however, artificial intelligence (AI) repeatedly produces problematic or discriminatory decisions: when predicting the recidivism risk of offenders, for example, Black people receive worse prognoses than white people. Members of Plattform Lernende Systeme are calling, among other things, for an independent monitoring body to promote the explainability and verifiability of algorithms. The white paper by Susanne Beck et al., members of the working group on IT Security, Privacy, Legal and Ethical Framework, analyses the causes and forms of such discrimination and outlines possible solutions.