The rapid expansion of AI-powered mass-surveillance systems across Africa is violating citizens’ right to privacy and having a chilling effect on society, according to experts on human rights and emerging technologies.
At least $2bn (£1.5bn) has been spent by 11 African governments on Chinese-built surveillance technology that recognises faces and monitors movements, according to a new report by the Institute of Development Studies, which warns that national security is being used to justify implementing these systems with little regulation.
Chinese companies often sell the technology in packages that include CCTV systems, facial recognition, biometric data collection and cameras that track vehicle movements. The packages are presented as a tool to help rapidly urbanising countries modernise their cities and reduce crime.
But researchers from the African Digital Rights Network, who co-authored the report, said there was no real evidence of these systems reducing crime and warned that they allow governments to monitor human rights activists and political opponents, arrest protesters and lead journalists to self-censor.
Wairagala Wakabi, executive director of the Kampala-based policy body Cipesa and co-author of the report, said: “This large-scale and invasive AI-enabled surveillance of public spaces is not ‘legal, necessary or proportionate’ to the legitimate aim of providing security. History shows us that this is the latest tool used by governments to invade the privacy of citizens and stifle freedom of movement and expression.”
Nigeria has spent the most on infrastructure, investing $470m in 10,000 smart cameras by last year. Egypt has installed 6,000, while Algeria and Uganda have about 5,000 each.
The 11 countries spent an average of $240m, with the investment often funded by loans from Chinese banks.
The report emphasises that the lack of regulation or a legal framework for storing and using data on individuals is a concern, given the rapid rollout of this technology. But Bulelani Jili, an assistant professor at Georgetown University, said even the introduction of laws could be dangerous.
Surveillance of online activity has often been used to crack down on dissent and has been legalised through laws that can criminalise ordinary people for their posts online. Jili said focusing on the introduction of laws could simply allow governments to claim the systems had been legitimised.
“The real challenge, therefore, is not simply whether surveillance is regulated, but how societies negotiate the balance between security, accountability and civil liberties once these technologies become deeply institutionalised,” he said.
He said there had already been concerns about facial recognition being used to monitor activists in Uganda and that surveillance systems were used to crack down on gen Z-led protests in Kenya.
This could pose a danger to anyone deemed a threat to governments in the future, he warned.
“Historically marginalised communities, political activists, journalists and minority groups can be disproportionately affected when these technologies become embedded in policing and intelligence practices,” said Jili.
Yosr Jouini, who authored the report’s section on Algeria, said the systems were originally introduced in connection to “smart city” projects that promised to tackle crime and manage traffic but in reality often became mainly a tool of the security forces.
“The narrative is framed only through a security lens, which dismisses any other concern and does not provide enough mechanisms for citizens to ensure their rights are protected,” she said.
She highlighted how street protests in 2019 and 2021 played a key role in political change, but warned that the expansion of surveillance systems could make people hesitant about protesting in the future.
“We know a lot of protesters have been arrested when participating in public space gatherings. We don’t know for sure if it was based on the cameras but there’s a chilling effect – because it could happen – on people’s willingness to participate in public gatherings.”