Artificial intelligence and ethics in the digital society: A social justice perspective

DOI:

https://doi.org/10.5281/zenodo.17924656

Keywords:

Artificial intelligence, Ethics, Social justice, Algorithmic bias, Digital society, Responsible AI, Accountability

Abstract

This article examines the ethical dilemmas emerging with the rise of artificial intelligence (AI) technologies in the digital society from a social justice perspective. While acknowledging AI's potential for societal benefit, it highlights the risk that AI may reproduce historical and structural inequalities.

The study consists of four sections. The introduction establishes the importance and scope of the subject. The first section elucidates the concepts of responsible and trustworthy AI, discussing ethical dilemmas in algorithmic decision-making processes, data privacy, and the need for transparency and accountability; national and global ethical frameworks are also evaluated.

The second section addresses algorithmic discrimination and bias within the context of social justice, demonstrating AI's impact on disadvantaged groups in areas such as criminal justice, employment, and public services. Noting the limitations of technical solutions, it emphasizes the importance of feminist and critical approaches.

The third section provides a systematic analysis of ethical issues such as bias, opacity, the accountability gap, privacy erosion, and surveillance, demonstrating their interconnected nature. It advocates for a holistic approach to the entire machine learning lifecycle.

The conclusion and recommendations section proposes bias mitigation techniques, explainable AI (XAI), strengthened legislation, alignment with the EU AI Act, ethical review boards, and public awareness. The article concludes that ethical AI use is only possible through interdisciplinary collaboration, transparent public dialogue, and governance that centres human dignity.

References

Akıncılar Köseoğlu, N., & Çetin, B. (2024). Avrupa'da Dijital Etik, İnsan Hakları Bağlamında Yapay Zekâ ve Algoritmik Ayrımcılık. Kastamonu Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, 26(1), 69–83. https://doi.org/10.21180/iibfdkastamonu.1384167

Arğın, Y. (2023). Türk Sinemasında Kadın ve Akıl Hastalığı Temsilleri: Beyza'nın Kadınları Filminin Eleştirel Söylem Çözümlemesi. In Y. Arğın (Ed.), Güncel Yaklaşımlarla Geleneksel ve Yeni Medyada Beden (p. 214). Nobel Bilimsel.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.2139/ssrn.2477899

Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity Press.

Biçer, S., & Şener, Y. (2020). Castells'in İzinden Mısır ve Hong Kong Protestoları Örneğiyle Dijital Aktivizm ve Yeni Toplumsal Hareketler. In A. S. İgit & Ö. Sayılgan (Eds.), Dijital İletişim - Kuram ve Araştırmaları (p. 320). Nobel Akademik Yayıncılık.

Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Cumhurbaşkanlığı Dijital Dönüşüm Ofisi. (2021). Ulusal Yapay Zekâ Stratejisi (2021–2025). https://cbddo.gov.tr

Demirel, Y. T., & Arıkan, N. İ. (2023). Yapay zekânın afet bölgelerinde kullanımı. International Journal of Educational and Social Sciences, 2(2), 77–82.

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.

Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. https://doi.org/10.1007/978-3-030-30371-6

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv. https://arxiv.org/abs/1702.08608

Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214–226). ACM. https://doi.org/10.1145/2090236.2090255

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

European Commission High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu

European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88.

European Union. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act). Official Journal of the European Union.

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1

IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems (1st ed.). IEEE Standards Association.

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference (pp. 43:1–43:23). Schloss Dagstuhl. https://doi.org/10.4230/LIPIcs.ITCS.2017.43

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

OECD. (2019). Recommendation of the Council on Artificial Intelligence. OECD. https://legalinstruments.oecd.org

Oğuz, Ö. (2024). Çalışma Hayatında Algoritmik Ayrımcılık. Süleyman Demirel Üniversitesi Hukuk Fakültesi Dergisi, 14(2), 1851–1886. https://doi.org/10.52273/sduhfd..1581436

Raji, I. D., Smart, A., White, R., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT '20) (pp. 33–44). ACM. https://doi.org/10.1145/3351095.3372873

Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242. https://doi.org/10.1093/idpl/ipx022

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAccT ’19) (pp. 59–68). ACM. https://doi.org/10.1145/3287560.3287598

Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 113–123). ACM. https://doi.org/10.1145/3465416.3483305

Şener, Y. (2021). Sosyal Medya ve İdeal Bedenin İnşası: Diyet İçerikli Instagram Sayfalarında İdealize Edilmiş Kadın Bedenleri Üzerine Göstergebilimsel Bir Analiz. In A. Karabulut (Ed.), Dijital Yozlaşma ve Etik (p. 303). LİTARATÜRK.

T.C. Resmî Gazete. (2016, April 7). 6698 sayılı Kişisel Verilerin Korunması Kanunu (No. 29677). https://www.mevzuat.gov.tr

Tanışık, S., & Bal, S. (2024). Dijital Mahremiyet ve Kurumsal Sorumluluk: Kişisel Verilerin Korunmasında İletişim Teknolojilerinin Kamusal Rolü. Yeni Medya, (16), 268–285. https://doi.org/10.55609/yenimedya.1424182

Türkiye Yapay Zekâ İnisiyatifi (TRAI). (2024). Yapay zekâ etik ilkeleri ve hukuki düzenlemeler raporu. https://turkiye.ai/wp-content/uploads/2024/06/TRAI-Yapay-Zeka-Etik-Ilkeleri-ve-Hukuki-Duzenlemeler-Raporu-Mayis-2024-5.pdf

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org

Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the GDPR. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005

Yaşar Ümütlü, A. (2025). Algoritmik Adalet: Uluslararası Hukukta Yapay Zeka Hakimliği. Selçuk Üniversitesi Hukuk Fakültesi Dergisi, 33(1), 777–815. https://doi.org/10.15337/suhfd.1637446

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

Published

2025-12-14

How to Cite

Arğın, E. (2025). Artificial intelligence and ethics in the digital society: A social justice perspective. International Journal of Educational and Social Sciences, 4(2). https://doi.org/10.5281/zenodo.17924656