This AI is meant to help the police, but civil liberties advocates fear it will be misused.
Artificial intelligence (AI) is being used in more and more industries. From medicine to robotics, it has now even joined the ranks of the police. Several services, such as Media Sonar, Social Sentinel, and Geofeedia, offer technology that helps police officers monitor social networks: they analyze the conversations taking place on the platforms and pass the information on to law enforcement.
This type of system can be quite invasive. Zencity, an Israeli data analysis company, presents itself as an alternative to these services: it provides only aggregate data and prohibits targeted surveillance of demonstrations. Several American police departments have adopted its services, notably in Phoenix, New Orleans, and Pittsburgh.
AI helps police fight disinformation…
Zencity creates customized reports for law enforcement as well as for municipal officials. Machine learning is used to analyze public conversations on social media, but not only: forums, local newsletters, and calls to 311, the US non-emergency line for municipal services, are also taken into account.
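Zencity does not publish its pipeline, so the following is only a minimal sketch of what this kind of aggregate, multi-source reporting could look like. Everything in it is hypothetical: the PublicPost structure, the toy topic lexicon, and the keyword matching stand in for whatever proprietary machine-learning models the company actually uses.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical illustration: Zencity's actual pipeline is proprietary.
# The sketch tallies topic mentions per source channel and keeps no
# user identifiers, mirroring the "aggregate data only" positioning.

@dataclass
class PublicPost:
    source: str  # e.g. "social_media", "forum", "newsletter", "311_call"
    text: str

# Toy topic lexicon; a real system would use trained language models.
TOPIC_KEYWORDS = {
    "public_safety": ["police", "crime", "theft"],
    "roads": ["pothole", "traffic", "construction"],
}

def aggregate_report(posts: list[PublicPost]) -> dict[str, Counter]:
    """Tally topic mentions per source; no usernames are stored."""
    report: dict[str, Counter] = {}
    for post in posts:
        text = post.text.lower()
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                report.setdefault(post.source, Counter())[topic] += 1
    return report

posts = [
    PublicPost("social_media", "More police patrols needed downtown"),
    PublicPost("311_call", "Huge pothole on Main Street"),
    PublicPost("forum", "Construction is blocking traffic again"),
]
print(aggregate_report(posts))
```

The key design point is that the output is a per-source count of topics with no user identifiers retained, which is what distinguishes this approach from the targeted monitoring services mentioned earlier.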
According to the company, this data allows police to fight disinformation. It is said to be especially useful for local American law enforcement agencies trying to cope with rising crime in large cities.
For example, Zencity alerted Brandon Talsma, a county supervisor in Jasper County, Iowa, to a sudden spike in social media chatter about the county. The underlying facts were serious: a Black man living in the predominantly white town of Grinnell had been found dead in a ditch. Given the town's demographics, rumors spread that the man had been lynched by residents with racist motives.
Zencity quickly noticed that almost none of the online conversations originated in Iowa, which cast doubt on the rumors. Talsma feared they would be amplified into disinformation and provoke violence. Police later clarified that the murder was not racially motivated, and four suspects were charged.
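The episode describes two analytic steps: flagging a sudden spike in mention volume, then checking where the conversation comes from. Below is a hedged, self-contained sketch of both checks. The function names, field names, and thresholds are invented for illustration, and in practice only a fraction of posts carry usable location data.

```python
from collections import Counter

# Hypothetical sketch of the two checks described above: (1) flag a sudden
# spike in mention volume, (2) measure what share of geolocated posts
# actually come from the area concerned. Nothing here reflects Zencity's
# real implementation.

def spike_detected(daily_counts: list[int], factor: float = 3.0) -> bool:
    """Flag the latest day if it exceeds `factor` times the prior average."""
    *history, today = daily_counts
    baseline = sum(history) / len(history)
    return today > factor * baseline

def local_share(posts: list[dict], region: str) -> float:
    """Fraction of geolocated posts that originate in `region`."""
    origins = Counter(p["region"] for p in posts if p.get("region"))
    total = sum(origins.values())
    return origins[region] / total if total else 0.0

mentions = [4, 6, 5, 7, 90]  # toy daily mention counts for the county
posts = [{"region": "Texas"}, {"region": "California"}, {"region": "Iowa"}]

if spike_detected(mentions) and local_share(posts, "Iowa") < 0.5:
    print("Spike is driven mostly by out-of-state accounts")
```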
…But also to carry out mass surveillance?
Even so, this type of system raises questions, and not all municipal officials are on board. "Watching the public is not engaging the public. It's the opposite," says Deb Gross, a city councilor in Pittsburgh. There, the city council first heard of the product at a late-May meeting authorizing its renewal: the city had been using the tool for a year but had never disclosed the purchase. Renewing it under a $30,000 contract, however, required council approval. In some cities the tool is used without any public approval process at all, often via free trials.
Giving police the ability to monitor social media discussions so closely, especially those critical of law enforcement, also worries many privacy groups. Over the years, US police have used a variety of software to analyze social platforms, often scrutinizing groups tied to police reform and opposition to surveillance.
For example, last summer, Minneapolis police asked Google for information about users in the vicinity of an AutoZone store looted in the days following the murder of George Floyd.
"If they're meeting in such and such a place, that is information accessible to the public, which anyone can consult for free," explained Sheriff Tony Spurlock of Douglas County, Colorado. He says the sheriff's office has been using the tool for about a year and signed a $72,000 contract in early 2021. The tool, he maintains, provides aggregate information and does not identify individual users.
Across the Atlantic, the European Parliament's Civil Liberties Committee said on June 29 that the use of AI by law enforcement and the justice system should be subject to human oversight, calling for open algorithms and public audits to prevent mass surveillance.