The Challenges of Algorithmic Bias in Crime Prevention



Arslan A. C.

ПРОБЛЕМЫ И ПЕРСПЕКТИВЫ СОВЕРШЕНСТВОВАНИЯ ЗАКОНОДАТЕЛЬСТВА И ПРАВОПРИМЕНИТЕЛЬНОЙ ПРАКТИКИ ОРГАНОВ ВНУТРЕННИХ ДЕЛ - PROBLEMS AND EXPECTATIONS FOR THE IMPROVEMENT OF LEGISLATION AND LAW ENFORCEMENT PRACTICE OF INTERNAL AFFAIRS BODIES, Karaganda, Kazakhstan, 30-31 October 2024, pp. 4-7

  • Publication Type: Conference Paper / Full-Text Paper
  • City of Publication: Karaganda
  • Country of Publication: Kazakhstan
  • Page Numbers: pp. 4-7
  • Police Academy Affiliated: Yes

Abstract

The rising use of Artificial Intelligence (AI) and Machine Learning (ML) technologies in crime prevention and public security raises questions not only about the accuracy and reliability of these systems but also about their ethical and societal impacts. Algorithmic bias in particular is a critical challenge in the application of crime prediction and policing. In this study, we analyze the effects of algorithmic bias on crime prevention in light of perceptions of criminality and the criticisms of predictive ML models. Accordingly, we evaluate automation bias in terms of its effects on the safety and effectiveness of human-AI collaboration. While the transparency and explainability of AI systems increase user trust, excessive trust can enable overconfidence and poor decision-making by creating a false sense of safety. The compounding effects of algorithmic bias across sex, race, and ethnicity are evaluated as a significant obstacle to making these systems more just and objective. In this context, we discuss proposed strategies to mitigate algorithmic bias in crime prevention, the pursuit of ethical policies in algorithmic design, and the measures required to sustain societal justice.

Keywords: algorithmic bias, artificial intelligence, security administration, security technologies

Introduction

The growing deployment of artificial intelligence (AI) and machine learning (ML) technologies in crime prevention and public safety has given rise to concerns regarding the accuracy and reliability of these systems, as well as their ethical and societal implications. Algorithmic bias is becoming a significant challenge in the field of crime prediction and the development of policing applications (Lum & Isaac, 2016; Mehdipour, 2021). This study assesses the impact of algorithmic bias in crime prevention within the context of perceptions of guilt and the critique of predictive machine learning models. In light of these considerations, the study addresses the impact of automation bias on the safety and efficacy of human-AI collaboration. As user confidence in AI systems increases, so too does the potential for overconfidence and misjudgment, which can lead to a problematic sense of security. Algorithmic bias can also compound existing social disadvantages, which in turn impairs the objectivity of these systems. In this context, the discussion will address proposed methods to mitigate algorithmic bias in crime prevention, an ethical approach to algorithm design, and measures to protect social justice.

Artificial intelligence (AI) and machine learning have become integral elements of crime prevention policies (Angbera, 2023). The analysis of crime statistics and the prediction of future crimes represent a revolutionary advance in the field of public safety. However, these systems should be recognized not only as technical tools but also as mechanisms with social, ethical, and political implications. In particular, algorithmic bias represents a significant challenge, demonstrating that these technologies are not impartial or fair. Algorithms are influenced by biases present in the datasets they are trained on, which shape perceptions of criminality and policing activities (Rotaru et al., 2022). This study examines the impact of algorithmic bias on crime prevention policies and its potential adverse effects on social justice.

Algorithmic Bias and Crime Prediction

Algorithmic bias refers to the tendency of artificial intelligence (AI) or machine learning models to disadvantage specific groups because of inherent biases in the data sets on which they are trained. The algorithms used in crime prediction are frequently built on historical crime data, yet these data sets are shaped by past social and institutional biases (Mohler et al., 2015; Rotaru et al., 2022). Members of minority groups, particularly those living in low-income neighborhoods, may be disproportionately targeted because the data create a perception of elevated crime rates, further disadvantaging already marginalized groups.
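The mechanics of this distortion are easy to demonstrate. The following minimal simulation, with purely hypothetical numbers, shows how two districts with identical underlying offense rates can end up with very different recorded crime rates once patrol intensity differs:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.05                       # identical underlying offense rate
patrol = {"A": 0.8, "B": 0.4}          # hypothetical detection probabilities
n = 10_000                             # residents per district

for district, detect_p in patrol.items():
    offenses = rng.random(n) < true_rate               # what actually happens
    recorded = offenses & (rng.random(n) < detect_p)   # what the data set sees
    print(district, "true:", offenses.mean(), "recorded:", recorded.mean())
```

Any classifier fit to the `recorded` column will score district A as roughly twice as risky as district B, even though the true rates are identical: the data set mirrors patrol allocation, not criminality.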

Algorithmic bias is not limited to biases in data sets. The parameters chosen when training an algorithm can also lead the model to discriminate against certain groups. Furthermore, crime prediction models often operate independently of social and economic context, reinforcing the tendency to treat crime as an individual problem and ignoring the deep structural problems at its roots.
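Critics also describe a self-reinforcing dynamic: if patrols follow the model's scores, new records accumulate only where patrols go. A stylized sketch of this feedback loop, with synthetic numbers, follows:

```python
import numpy as np

true_rate = np.array([0.05, 0.05])   # two districts, identical true rates
recorded = np.array([6.0, 4.0])      # a small initial recording imbalance

for day in range(50):
    # each day the single available patrol goes to the district with the
    # most recorded crime, so only that district can add new records
    target = int(np.argmax(recorded))
    recorded[target] += 100 * true_rate[target]

print(recorded)   # [256.   4.]: the initial imbalance is now self-confirming
```

No difference in behavior is needed for the gap to widen; the allocation rule alone manufactures it.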

Automation Bias and Human-AI Collaboration

Automation bias refers to the phenomenon whereby humans place excessive confidence in the accuracy of machine-produced results, assuming them to be correct. In crime prevention systems, there is a risk that police officers or decision-makers may place undue trust in the predictions provided by AI, which can lead to erroneous decisions (Apene, 2024). For example, when an algorithm identifies an individual as high-risk, officers may assume the flag is accurate and take punitive measures against that person without further investigation (Ziosi, 2024). This increases the prevalence of discriminatory practices, particularly against minority groups (Vats, 2022).
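A short base-rate calculation shows why taking a flag at face value is risky. With hypothetical but plausible numbers, even a fairly accurate model flags far more innocent people than actual offenders:

```python
# All numbers below are hypothetical, for illustration only.
base_rate = 0.01        # 1% of the screened population poses a real risk
sensitivity = 0.90      # the model flags 90% of that 1%
false_positive = 0.10   # ...but also flags 10% of everyone else

flagged_true = sensitivity * base_rate
flagged_false = false_positive * (1 - base_rate)
precision = flagged_true / (flagged_true + flagged_false)
print(f"P(actual risk | flagged) = {precision:.1%}")    # about 8.3%
```

Roughly eleven out of twelve flagged individuals are false positives here, which is exactly why the text insists on further investigation before any punitive step.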

Automation bias can also lead to erroneous perceptions of safety (Aggarwal, 2023). Individuals may operate with a misplaced sense of security, assuming that artificial intelligence systems are objective and unbiased. In reality, these systems are shaped by biases in their data sets and do not always produce accurate results (Wu et al., 2022). For the effective use of artificial intelligence in crime prevention policies, the transparency and explainability of these systems are therefore of great importance. Users must be informed about how the systems work, what data they are trained on, and what kinds of results they produce. Only in this way can human-AI collaboration be made safer and more effective.
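One concrete form such explainability can take is a feature-attribution report. The sketch below uses scikit-learn's permutation importance on a toy model; the feature names are hypothetical placeholders, not fields from any real policing system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
names = ["prior_records", "age", "district_score"]   # hypothetical features
X = rng.normal(size=(500, 3))
# synthetic labels driven mostly by prior_records and district_score
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(names, result.importances_mean):
    print(f"{name:>15}: {score:.3f}")
```

If a proxy for place, such as `district_score`, dominates the report, reviewers can see that the model is effectively scoring neighborhoods rather than behavior and challenge it.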

The Impact of Algorithmic Bias on Social Disadvantages

The effects of algorithmic bias are most visible where demographic factors intersect with social disadvantage. Crime prediction systems in particular tend to assume that certain groups are at an elevated risk of criminal activity (Mohler et al., 2015). This assumption can reinforce discriminatory practices directed at these groups and exacerbate the police violence and judicial inequalities to which they are already exposed.

A comparable issue arises with gender. In systems where women are typically treated as a lower-risk group, female offenders may receive less attention than warranted. Such outcomes may produce unfair decisions, particularly in cases of domestic violence and sexual offenses.

Strategies to Reduce Algorithmic Bias

A number of strategies can be employed to mitigate the impact of algorithmic bias in crime prevention policies. First, the data sets used for algorithm training should be selected with care: it is essential that the data be representative of a diverse population and, as far as possible, free of social biases. Moreover, the efficacy of the algorithms should be evaluated regularly, and their impact on specific demographic groups should be closely monitored (Wu et al., 2020).
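In practice, the monitoring recommended here can be as simple as a recurring script that compares error rates across groups. The sketch below, on synthetic data with placeholder group labels, computes the false positive rate per group; a persistent gap is the signal for retraining or a data review:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
group = rng.choice(["G1", "G2"], size=n)       # placeholder demographic labels
y_true = rng.random(n) < 0.05                  # actual outcomes
flag_p = np.where(group == "G1", 0.20, 0.10)   # a model that over-flags G1
y_flag = rng.random(n) < flag_p                # the model's flags

for g in ("G1", "G2"):
    innocents = (group == g) & ~y_true         # non-offenders in group g
    fpr = y_flag[innocents].mean()
    print(f"{g}: false positive rate = {fpr:.2%}")
```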

Additionally, it is crucial to ensure algorithmic transparency. The public must be informed about the inner workings of crime prediction systems, the data they are trained with, and the results they produce. This can help to make these systems more fair and reliable.

Furthermore, ethical policies must be followed to reduce algorithmic bias. Social justice approaches must be adopted in the design of crime prediction systems, and the possible side effects of these systems must be taken into account. This will both protect the rights of individuals and enhance public safety.

Conclusion

The potential for algorithmic bias to affect crime prevention systems raises concerns about the risks these systems pose to social equality and justice. To mitigate this bias, it is essential to ensure the impartiality of data sets, guarantee the transparency and explainability of algorithms, and implement ethical rules and oversight mechanisms. Transparency and respect for human rights are crucial for enhancing public trust in algorithmic crime prediction systems. The appropriate and fair use of technology in crime prevention both promotes social justice and strengthens security. Consequently, technical accuracy must be accompanied by ethical responsibility.


References

Aggarwal, K. (2023). Implementing machine learning algorithms on criminal databases to develop a criminal activity index. Journal of Emerging Investigators. https://doi.org/10.59720/22-25

Angbera, A. (2023). Model for spatiotemporal crime prediction with improved deep learning. Computing and Informatics, 42(3), 568-590. https://doi.org/10.31577/cai_2023_3_568

Apene, O. (2024). Advancements in crime prevention and detection: from traditional approaches to artificial intelligence solutions. EJASET, 2(2), 285-297. https://doi.org/10.59324/ejaset.2024.2(2).20

Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14-19. https://doi.org/10.1111/j.1740-9713.2016.00960.x

Mehdipour, F. (2021). Reducing profiling bias in crime risk prediction models. Rere Āwhio — The Journal of Applied Research and Practice, (1), 86-93. https://doi.org/10.34074/rere.00108

Mohler, G., Short, M., Malinowski, S., Johnson, M., Tita, G., Bertozzi, A., ... & Brantingham, P. (2015). Randomized controlled field trials of predictive policing. Journal of the American Statistical Association, 110(512), 1399-1411. https://doi.org/10.1080/01621459.2015.1077710

Rotaru, V., Huang, Y., Li, T., Evans, J., & Chattopadhyay, I. (2022). Event-level prediction of urban crime reveals a signature of enforcement bias in US cities. Nature Human Behaviour, 6(8), 1056-1068. https://doi.org/10.1038/s41562-022-01372-0

Vats, A. (2022). Building the case for restricted use of predictive policing tools in India. The International Review of Information Ethics, 32(1). https://doi.org/10.29173/irie487

Wu, J., Abrar, S., Awasthi, N., Frías-Martínez, E., & Frías-Martínez, V. (2022). Enhancing short-term crime prediction with human mobility flows and deep learning architectures. EPJ Data Science, 11(1). https://doi.org/10.1140/epjds/s13688-022-00366-2

Wu, J., Frías-Martínez, E., & Frías-Martínez, V. (2020). Addressing under-reporting to enhance fairness and accuracy in mobility-based crime prediction. https://doi.org/10.1145/3397536.3422205

Ziosi, M. (2024). Evidence of what, for whom? The socially contested role of algorithmic bias in a predictive policing tool. https://doi.org/10.1145/3630106.3658991