Responsible Artificial Intelligence in Government: Development of a Legal Framework for South Africa
DOI: https://doi.org/10.29379/jedem.v14i1.678
Keywords: artificial intelligence, responsible AI, legal framework, automated decision-making, algorithm
Abstract
Various international guideline documents suggest a human-centric approach to the development and use of artificial intelligence (AI) in society, to ensure that AI products are developed and used with due respect for ethical principles and human rights. Key principles contained in these international documents are transparency (explainability), accountability, fairness and privacy. Some governments are using AI in the delivery of public services, but there is a lack of appropriate policy and legal frameworks to ensure responsible AI in government. This paper reviews recent international developments and concludes that an appropriate policy and legal framework must be based on the key principles, contextualised to the world of AI. A national legal framework alone is not sufficient and should be accompanied by a practical instrument, such as an algorithm impact assessment, aimed at reducing risk or harm. Recommendations for a possible South African legal framework for responsible AI in government are proposed.
License
Copyright (c) 2022 Dirk Brand
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
JeDEM is a peer-reviewed, open-access journal (ISSN: 2075-9517). All journal content, except where otherwise noted, is licensed under the CC BY-NC 4.0 (Attribution-NonCommercial 4.0 International) license.