THE CHALLENGE OF EPISTEMIC RELIABILITY OF KNOWLEDGE IN THE CONTEXT OF ARTIFICIAL INTELLIGENCE

Authors

N. Kozachenko

DOI:

https://doi.org/10.24919/2522-4700.49.6

Keywords:

philosophy of artificial intelligence, ethics of artificial intelligence, epistemology of artificial intelligence, methodology of science, reliability of knowledge, bias, AI alignment, explainable AI.

Abstract

Summary. The article analyzes the problem of the epistemic reliability of knowledge obtained through artificial intelligence, considering aspects of transparency, justification, and the impact of sociocultural factors on AI training processes.

Aim. The main aim of the study is to identify the factors that affect the reliability of AI-generated knowledge and to explore ways of improving the transparency of AI models in order to enhance user trust.

Methodology. The research methodology is based on an analytical review of current approaches in the epistemology of artificial intelligence and the philosophy of science. Comparative analysis is employed to identify similarities and differences between human and machine cognition, and to examine concepts that reveal how knowledge can be justified when deep learning algorithms lack sufficient transparency. Conceptual analysis is applied to the problems of "explainable AI" and "AI alignment", comparing the content of terms describing human cognition and morality with the corresponding terms applied to AI. Thought experiments are used to model situations involving bias and the opacity of AI decisions.

Scientific Novelty. The scientific novelty of the article lies in its systematic approach to the epistemic reliability of AI in the context of its ability to explain its decisions. In particular, the article shows how social, cultural, and ethical factors influence the AI training process and thereby introduce biases into decision-making. For the first time, the issue of theory-laden facts is investigated with respect to AI: training data are considered within a conceptual framework that affects their objectivity. The Münchhausen trilemma is also applied for the first time to characterize the explanatory capabilities of AI.

Conclusions. The article substantiates the need to improve the transparency of AI algorithms in order to enhance their reliability. The main conclusions emphasize the importance of integrating ethical principles into the AI development process to reduce bias, and of developing explainable AI models capable of transparently justifying their decisions. This will increase user trust and enable more effective use of AI in critical domains. Prospects for future research include the development of tools that account for the epistemological characteristics of the reliability of AI-generated knowledge.

References

1. Albert H. The Problem of Foundation. Treatise on Critical Reason / trans. M.V. Rorty. Princeton: Princeton University Press, 1985.

2. Baker R.S., Hawn A. Algorithmic Bias in Education. International Journal of Artificial Intelligence in Education. Vol. 32. 2022. P. 1052–1092.

3. Belenguer L. AI Bias: Exploring Discriminatory Algorithmic Decision-Making Models and the Application of Possible Machine-Centric Solutions Adapted from the Pharmaceutical Industry. AI and Ethics. Vol. 2. № 4. 2022. P. 771–787.

4. Donald A., et al. Bias Detection for Customer Interaction Data: A Survey on Datasets, Methods, and Tools. IEEE Access. Vol. 11. 2023. P. 53703–53715.

5. Franklin A. The Theory-Ladenness of Experiment. Journal for General Philosophy of Science. Vol. 46. 2015. P. 155–166.

6. Johnson G.M. Algorithmic Bias: On the Implicit Biases of Social Technology. Synthese. Vol. 198. 2021. P. 9941–9961.

7. Kasirzadeh A., Gabriel I. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values. Philosophy & Technology. Vol. 36. 2023.

8. Köchling A., Wehner M.C. Discriminated by an Algorithm. Business Research. Vol. 13. 2020. P. 795–848.

9. McDonald F.J. AI, Alignment, and the Categorical Imperative. AI and Ethics. Vol. 3. 2023. P. 337–344.

10. Trotta A., Ziosi M., Lomonaco V. The Future of Ethics in AI: Challenges and Opportunities. AI & Society. Vol. 38. 2023. P. 439–441.

11. Yang W., Wei Y., Wei H., et al. Survey on Explainable AI: From Approaches, Limitations, and Applications Aspects. Human-Centric Intelligent Systems. Vol. 3. 2023. P. 161–188.

12. Zhang Y., Tiňo P., Leonardis A., Tang K. A Survey on Neural Network Interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence. Vol. 5. № 5. 2021. P. 726–742.

Published

2024-12-13

How to Cite

KOZACHENKO, N. (2024). THE CHALLENGE OF EPISTEMIC RELIABILITY OF KNOWLEDGE IN THE CONTEXT OF ARTIFICIAL INTELLIGENCE. Human Studies: A Collection of Scientific Articles of the Drohobych Ivan Franko State Pedagogical University. Series of Philosophy, (49), 94–109. https://doi.org/10.24919/2522-4700.49.6