Call for Papers - AI and Surveillance in Policing and Law and Order: Opportunities, Threats, Perspectives and Cases
Introduction
Rapid developments in Artificial Intelligence (AI) technologies and applications present new and previously unforeseen opportunities to address societal issues (Stahl, 2021) but, at the same time, create challenges for the nature of public services and policy, as well as for relations between service providers and citizens (Urquhart & Miranda, 2021). There are high expectations that the use of AI will increase efficiency and effectiveness in public service organisations, leading to improved services and policy, the automation of tasks, and enhanced decision-making. Policing and law enforcement have not been immune to this trend, and there is growing pressure to take advantage of the opportunities offered by AI and to deploy innovative emerging technologies in the ‘fight against crime’ (Eneman et al, 2022). Consequently, there is an emergent discourse about ensuring that policing and law enforcement agencies have access to appropriate technologies and that the legal and ethical setting is amenable to their use.
In law enforcement, AI applications can be used before, during and/or after a crime has been committed. By analysing historical crime data, statistical patterns can be used to forecast or estimate criminal behaviour related to locations, individuals or types of crime, in order to anticipate and prevent crime (van Brakel, 2021). Surveillance cameras integrate image recognition technologies to respond to crime in real time by identifying violent situations, dangerous objects, suspicious vehicles, and the identities of offenders and victims. To maintain public order and security, the behaviour of crowds and their leading figures can be automatically analysed using video footage captured from drones. Biometric approaches such as facial recognition can identify a person by comparing a facial image with databases such as a register of criminal suspects, a passport database or vast commercial facial image repositories (Mau, 2023). Generative AI can be used to translate text and speech and to analyse large volumes of text. It can also be used in undercover operations to generate fake social media profiles and deep-fake profile images.
These are just a few examples of the potential use of AI in policing and law and order. Other applications and uses, which are difficult to predict, are also likely to be developed and will inevitably have implications and consequences for policing, the justice system and citizen-state relations. In this respect, we are experiencing the early stages of a new era, one organised around, and which prioritises, digital processes (Connon et al, 2022; Fussey & Sandhu, 2022).
For policing and law enforcement the opportunities to deliver more effective service outcomes are seductive, but they can only be realised with careful reflection about the consequences of using such technologies (Smith & Miller, 2022; Urquhart et al, 2023). The policy and regulatory environment around AI is evolving, and policing agencies will be expected to work with new frameworks that promote accountability and legitimacy, and to use new technologies in a way that promotes public confidence in policing (Black & Murray, 2019). Policing and law enforcement organisations need to understand how fast-emerging AI technologies can be used in law enforcement practices in a legitimate, necessary and proportionate way and, at the same time, assess the risks attached to particular uses of the technology, both for individuals and for society more broadly.
This balancing act is a complex task that risks creating legal uncertainty (Murray, 2021). It can lead to ambiguities, tensions, compromises and trade-offs between the need for increased efficiency and security on the one hand and democratic rights such as privacy (Amoore, 2014) on the other, which must be handled at policy level, at different organisational levels and between organisations (Ball & Webster, 2019).
Purpose of the workshop
The purpose of the workshop is to provide an international platform for researchers from different disciplinary areas and academic levels to meet to critically discuss and reflect on different use cases illustrating AI-enabled surveillance in policing contexts. Our aspiration is to generate and share new contributions to knowledge about the existing, future and potential uses of AI in policing and law and order, and their implications and consequences. In this respect, we aim to reflect on contemporary technological development alongside significant legal, organisational and political contexts. The European AI Act, for example, will play a role in shaping the regulatory environment (Edwards, 2022; Veale & Zuiderveen Borgesius, 2022). Questions arise about how law enforcement authorities navigate and balance the borderland between, on the one hand, rapid technological development and a recently extended mandate to use new technology in their work practices and, on the other, the emerging comprehensive policy and regulatory framework in which the AI Act plays a significant role. Balancing the possibilities AI offers law enforcement while adjusting work practices to new regulation is likely to create tensions and dilemmas, and may also lead to institutional conflicts.
Emergent questions about the use of AI in policing and law and order include:
- How is AI utilised in policing contexts?
- How will AI change the practices of policing and law enforcement?
- How are AI systems procured and evaluated?
- What are the consequences of using AI for the organisation of law enforcement practices?
- What challenges and dilemmas arise in the borderland between technological possibilities and the emerging regulatory landscape?
- How are these challenges being addressed at the policy, management and operational level?
We are keen to include theoretical and empirical work from different academic perspectives in the workshop, and would welcome abstracts from early career scholars.
Potential themes include, but are not limited to, the following topic areas:
- Changes in policing practices arising from the use of AI
- Legal and regulatory challenges of emerging AI and data-driven policing systems
- AI as part of surveillant assemblages in policing
- Emerging discourses and relationships (e.g. between AI providers and public service users)
- Qualitative and ethnographic studies exploring police officers’ experiences with AI tools
- Emergent discourses around AI in policing
- Organisational consequences arising from new possibilities for using AI in policing (e.g. as a new regulatory landscape emerges with increased regulation of AI)
- The construction and meaning of privacy in relation to AI in policing
- The construction and management of risks in relation to the AI Act
- Transformations in policing practice, oversight and scrutiny due to AI integration
- Case studies on AI applications in policing and law and order
Publication
Following on from the workshop, the organisers will facilitate the publication of an edited collection on the topic of the workshop in the Routledge Studies in Surveillance book series [https://www.routledge.com/Routledge-Studies-in-Surveillance/book-series/RSSURV]. We aim to include all workshop contributions in this edited collection. Chapter contributions are expected to be in the range of 5,000-6,000 words.
Important dates
The deadline for paper proposals to the workshop is 2nd September 2024 (extended to 9th September 2024)
Authors will be notified of paper acceptance by 13th September 2024
Workshop takes place 16th-18th October 2024
Submission of book proposal to the Routledge Studies in Surveillance book series, December 2024
Deadline for full chapters 3rd February 2025
Review and revision of chapters to take place in February and March 2025
Submission of final manuscript to Routledge, April 2025
Submission of abstracts
Abstracts from workshop participants are to be submitted by 9th September 2024 (extended from 2nd September 2024)
Abstracts should be up to 2,000 words and clearly related to the workshop theme
Abstracts should be in MS Word or PDF format
Abstracts should include all authors’ names and institutional affiliations
The contact details for the corresponding author should be clearly indicated
Abstracts should be sent to Marie Eneman at the University of Gothenburg by email: marie.eneman@ait.gu.se. Please feel free to contact us with any questions!
Workshop details
The workshop will span three days and will include a reception, a keynote speaker, panel discussions and the presentation and discussion of academic papers.
Day one: Wednesday 16 October 2024
6pm Welcome reception, University of Gothenburg
Day two: Thursday 17 October 2024
10am Introduction and welcome
11am Keynote presentation
Presentations and discussion
1-2pm Lunch
5pm Close
7pm Workshop dinner
Day three: Friday 18 October 2024
9am Presentations and discussion
1-2pm Lunch
2pm Panel Q&A discussion
Workshop close 3pm
The workshop will be delivered in a hybrid format that accommodates presentations via Zoom, although we are particularly keen for participants to attend in person. The cost of refreshments, lunches and dinners is covered by the host institution.
References
Amoore, L (2014) Security and the claim to privacy. International Political Sociology, 8(1).
Ball, K & Webster, W (2019) Surveillance and Democracy in Europe, Routledge.
Black, J & Murray, A (2019) Regulating AI and Machine Learning: Setting the Regulatory Agenda. European Journal of Law and Technology, 10(3).
Connon, I., Egan, M., Hamilton-Smith, N., MacKay, N., Miranda, D & Webster, W (2022) Review of emerging technologies in policing: findings and recommendations. Available at:
Edwards, L (2022) The EU AI Act: a summary of its significance and scope. Report from Ada Lovelace Institute.
Eneman, M., Ljungberg, J., Raviola, E., & Rolandsson, B (2022) The sensitive nature of facial recognition: Tensions between the Swedish police and regulatory authorities, Information Polity, 27(2).
European Commission (2021) Proposal for a regulation of the European Parliament and of the council, Laying down harmonised rules on artificial intelligence and amending certain union legislative acts, COM (2021) 206 final.
Fussey, P & Sandhu, A (2022) Surveillance arbitration in the era of digital policing. Theoretical Criminology, 26(1), 3-22.
Mau, S (2023) Sorting Machines: The Reinvention of the Border in the 21st Century, Polity Press.
Smith, M & Miller, S (2022) The ethical application of biometric facial recognition technology. AI & Society, 37.
Stahl, B (2021) Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies, Springer Nature.
Urquhart, L & Miranda, D (2021) Policing faces: the present and future of intelligent facial surveillance. Information & Communications Technology Law, 31(2).
Urquhart, L, Miranda, D, Connon, I & Laffer, A (2023) Critically Envisioning Biometric Artificial Intelligence in Law Enforcement. Available at: https://www.research.ed.ac.uk/en/publications/critically-envisioning-biometric-artificial-intelligence-in-law-e
Van Brakel, R (2021) Rethinking Predictive Policing: Towards a Holistic Framework of Democratic Algorithmic Surveillance. In Schuilenburg, M & Peeters, R (Eds.) The Algorithmic Society: Technology, Power and Knowledge, 104–118. London: Routledge.
Veale, M & Zuiderveen Borgesius, F (2022) Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4).