Digital rights advocacy group
Many AI systems have been proven to magnify discrimination and inequality. In particular,
so-called ‘predictive policing’ and biometric surveillance systems are disproportionately used
to target marginalised groups including racialised, working class and migrant communities.
These systems criminalise people and infringe human rights, including the fundamental right
to be presumed innocent.
(source)
replying to Open Rights Group
Comments
Author and digital rights activist
would say according to AI:
What are the most effective strategies for educating the public and lawmakers about the dangers associated with these AI systems?