Responsible Artificial Intelligence in Government: Development of a Legal Framework for South Africa

Abstract: Various international guideline documents suggest a human-centric approach to the development and use of artificial intelligence (AI) in society, to ensure that AI products are developed and used with due respect for ethical principles and human rights. Key principles contained in these international documents are transparency (explainability), accountability, fairness and privacy. Some governments are using AI in the delivery of public services, but there is a lack of appropriate policy and legal frameworks to ensure responsible AI in government. This paper reviews recent international developments and concludes that an appropriate policy and legal framework must be based on the key principles, contextualised to the world of AI. A national legal framework alone is not sufficient and should be accompanied by a practical instrument, such as an algorithm impact assessment, aimed at reducing risk or harm. Recommendations for a possible South African legal framework for responsible AI in government are proposed.


Introduction
Artificial intelligence (AI), robotics, augmented reality and the Internet of Things (IoT) are important innovative technological developments which impact society in various ways. Clarity about these concepts is necessary before the legal implications of the use of AI and the need for regulation can be discussed. A generic definition of artificial intelligence (AI) is 'the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages'. 1 A wider definition of AI includes machine learning, predictive modelling and the analysis of large sets of data. 2 The use of AI, including machine learning, is about automated decision-making that can assist or replace human decision-making. Machine learning is the science that develops and applies algorithms through which a computer learns from input data to develop solutions to complex problems. 3 It is not only private businesses that develop and employ AI; governments all around the globe are increasingly making use of AI in various ways. Effective data management and the use of data analytics to assist evidence-based decision-making are important elements of digital governance. In order to employ AI in government services, large datasets are required, which stresses the need for effective regulation relating to data protection and privacy as well as the design and use of AI. The current Covid-19 pandemic has accelerated the need to use data analysis and AI in the fight against the virus. The use of AI in this context could contribute to better decision-making, but it also raises various questions relating to issues such as fairness, the impact on human rights, the protection of privacy, and accountability. There is legitimate concern that governments could misuse AI for political purposes and negatively impact citizens' rights.
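The idea that a machine-learning system 'learns' a decision rule from input data, rather than having the rule hand-coded, can be illustrated with a small sketch (in Python, with invented toy data; this is a minimal illustration, not any actual government system):

```python
def train_threshold(examples):
    """Learn a cut-off that best separates labelled 1-D examples.

    examples: list of (value, label) pairs, where label is 0 or 1.
    Returns the candidate threshold with the fewest misclassifications,
    predicting 1 whenever value >= threshold.
    """
    candidates = sorted(v for v, _ in examples)
    best_t, best_errors = candidates[0], len(examples) + 1
    for t in candidates:
        errors = sum(1 for v, y in examples if (v >= t) != (y == 1))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Toy training data (invented): values below 50 labelled 0, above labelled 1.
data = [(10, 0), (20, 0), (35, 0), (55, 1), (70, 1), (90, 1)]
threshold = train_threshold(data)  # the decision rule is learned from the data
```

The point of the sketch is that the resulting rule depends entirely on the input data: different (or biased) data would yield a different rule, which is precisely why the quality of data matters for the regulation of AI.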
In addition, the complexity and opacity of many AI systems make it very difficult to ensure accountability, which suggests the need for an innovative approach to regulating AI.
The essence of responsible AI is about ethical values and principles, aptly described by Leslie as follows: "AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies." 4 Ethical considerations provide a value base for the development of legal frameworks that could underpin the development and use of responsible AI in society in general and in government in particular. Ethical guidelines are, however, only guidelines and not binding law. It is nevertheless a useful approach to establish a broad ethical basis for the development of an appropriate legal framework for the regulation of responsible AI. Such an approach was, for example, followed by the European Commission, which published a proposed AI Regulation in 2021, preceded by policy papers that included ethical guidelines. 5 Around the world there are various initiatives to regulate the use of AI or at least to provide some guidelines for responsible AI. There is not only competition to develop new AI applications; there is also a race to AI regulation. In the absence of appropriate AI regulations, the risks associated with the unrestricted use of AI by government are substantial, as discussed below, and should be mitigated. There is also a need to create legal certainty regarding the responsible use of AI. Hence the drive to promote responsible and trustworthy AI that could be used to benefit society.
This article is not meant to be a comprehensive comparative study, but it puts the spotlight on some of the recent international initiatives to develop a regulatory framework for responsible AI in government. The aim is to provide some recommendations for the design of a regulatory framework for South Africa, but it could also be relevant to other countries considering the regulation of AI in government. The purpose of such a regulatory framework would be to create more legal certainty and to minimize potential risk. The key research questions to be answered in this paper are: What are the critical principles for responsible AI in government? And what are the key elements of a legal framework for responsible AI in government in South Africa?

Legal concerns relating to the use of AI in government
There is already a wide range of AI applications or use cases in government in various countries, many of which harness the benefits of AI for society: for example, algorithms to regulate traffic flow in metropolitan areas, health apps that provide interactive consultations, visualisation of health diagnoses, and smart agriculture apps to assist farmers with managing their crops more efficiently. There are, however, also controversial AI applications that raise various legal concerns, for example, predictive modelling of crime hotspots and policing tools that rely on facial recognition.
AI has the potential to negatively impact human rights, democracy and the rule of law. 6 The use of AI in government, which has a general duty to protect human rights and uphold the rule of law, thus requires careful consideration of the potential impact on human rights. The accountability of government for the use of AI in the administration raises further questions about how that accountability should be construed. This section focuses on some of the critical human rights concerns, as well as accountability, relating to the use of AI in government.
It is well documented that invasive AI applications could be discriminatory due to an unfair bias in the data and in the AI design. 7 Facial recognition software, for example, is widely used for a variety of purposes, but the risk of misidentification or unfair bias that leads to discrimination remains a serious concern. Facial recognition software makes use of pre-trained models built on large databases of faces, and thus depends on the quality of the training data and of the images in the databases. 8 The algorithm in this software develops patterns of recognition from the training data, which are then used in the machine learning process and applied to either live facial recognition situations or static comparison of data. Bias in training data that, for example, leads to discrimination against women or specific race groups would be reflected in the software. It is evident that facial recognition software could impact various individual human rights, for example, the right to privacy, freedom of expression and the right to equality. There are, however, also potential benefits of facial recognition software, for example, in finding missing children.
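How under-representation in training data can translate into unequal error rates may be sketched as follows. This is a deliberately simplified, hypothetical illustration in which 'faces' are reduced to single feature values and identification is done by nearest-neighbour matching; real facial recognition systems are far more complex, but the mechanism of bias is analogous:

```python
def nearest_label(train, x):
    """Identify x by returning the label of its nearest training example."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def error_rate(train, test):
    """Fraction of test examples that are misidentified."""
    wrong = sum(1 for x, y in test if nearest_label(train, x) != y)
    return wrong / len(test)

# Invented training set: group A is densely sampled, group B has one example.
train = [(0.0, "A"), (1.0, "A"), (2.0, "A"), (3.0, "A"), (10.0, "B")]
test_a = [(0.5, "A"), (1.5, "A"), (2.5, "A")]   # well covered by training data
test_b = [(6.0, "B"), (7.0, "B"), (8.0, "B")]   # poorly covered
```

Running this toy model, members of the well-represented group A are identified without error, while some members of the under-represented group B are misidentified, mirroring the documented pattern of unequal error rates in real systems.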
The question is how technological development, including the use of AI, could be supported while also protecting human rights. The European Court of Human Rights determined that a balanced approach should be followed. 9 Data protection regulations play an important role in response to the legal concerns relating to the use of AI, including facial recognition software, due to the close link between data and the development of AI. An example of specific legislation dealing with facial recognition software is the EU Law Enforcement Directive, which is based on the legal principles of data protection. It stipulates that the processing of facial images must: a) be lawful, fair and transparent; b) follow a specific, explicit and legitimate purpose; and c) comply with the requirements of data minimization, data accuracy, storage limitation, data security and accountability. 10 There are also other use cases of unfair bias in AI that lead to discriminatory effects, for example, a social welfare tool used in the USA to decide on the placement of children that utilises administrative data of a means-test program. The data is inherently biased against poor communities and the AI simply replicates this bias; it thus discriminates against people in poor communities. 11
Transparency and accountability are well-established constitutional law principles which, in general, provide the legal foundation for good governance. Transparency of decision-making strengthens accountability and supports citizens' efforts to hold government accountable. The use of AI in government as one of its decision-making tools means that it must also be subject to these principles and the relevant applicable legal frameworks, for example, on financial accountability. It is evident from the literature that transparency and accountability cannot simply be applied in exactly the same way to AI and machine learning, in view of the nature of this technology. 12 Transparency of algorithms includes two elements, namely accessibility and comprehensibility of information. The fact that the 'black box' of AI systems is inherently opaque means that the conventional meaning of transparency in this context is not very helpful to support algorithmic accountability. A lack of transparency could thus also prevent citizens from having a suitable legal remedy. 13 Therefore, transparency should rather be construed as explainability or interpretability in the context of AI, meaning that the design and logic of the AI process should be explained. Explainability is also contextual, namely different users might need different levels of explanation, and in this way algorithmic accountability could be strengthened. 14 Algorithmic accountability is about 'the design and implementation of algorithmic systems in publicly accountable ways, to mitigate harm or negative impacts on consumers and society'. 15 While the concept of someone being held accountable to consumers or society sounds simple, it is evident that algorithmic accountability is a much more complex concept that deals with various factors and limitations, for example, the problems relating to transparency of algorithms. Busch pointed out that algorithmic accountability is not only about ensuring transparency, but that it also relates to the ethics of algorithms, the legal and technical requirements in their design, and societal considerations. 16 Algorithmic accountability is not about an ex post facto, once-off report; it requires a systemic approach.
8 FRA. (2019). Facial recognition technology: fundamental rights considerations in the context of law enforcement. https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper-1_en.pdf. Engstrom, D.F. et al. (2020). Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, 79.
The Alan Turing Institute argues aptly that accountability should be applied over the whole algorithm life cycle, and that answerability and auditability are its core elements. 17

International initiatives to regulate the use of AI
During the last few years, many high-level meetings and research groups have debated the ethical and legal considerations relating to responsible AI. These debates contributed to the development of various documents that provide principles and guidelines in support of responsible or trustworthy AI.
The following examples show some of the important international proposals. The European Commission's High-Level Expert Group on AI (AIHLEG) published a framework document called the 'Ethics Guidelines for Trustworthy AI' in 2019. 18 They argue that trustworthy AI has essentially three components, namely: (i) It must be lawful.
(ii) It should be ethical.
(iii) It should be robust, from both a technical and a social perspective.
The Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI) proposed nine principles that should underpin the regulation of AI, namely:
1) Human dignity.
2) Human freedom and autonomy.
3) Prevention of harm.
4) Non-discrimination, gender equality, fairness and diversity.
5) Transparency and explainability of AI systems.
6) Data protection and the right to privacy.
7) Accountability and responsibility.
8) Democracy.
9) Rule of law. 19
The Model Artificial Intelligence Governance Framework, issued by the Singapore Personal Data Protection Commission, is based on two guiding principles which, they argue, should facilitate innovation while protecting the interests of consumers, namely: (i) the use of AI in decision-making should be explainable, transparent and fair; and (ii) AI solutions should be human-centric. 20 Other important and useful international guidelines are the OECD Recommendation of the Council on Artificial Intelligence 21, the G7 Common Vision for the Future of AI, adopted in 2018 in Canada 22, and the World Economic Forum's Framework for Developing a National AI Strategy 23, to name but a few. The table below provides a summary of the key principles indicated by various organisations:
18 AIHLEG. (2019). Ethics Guidelines for Trustworthy AI. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence.
19 CAHAI. (2020). Feasibility Study, Strasbourg. https://rm.coe.int/cahai-2020-23-final-eng-feasibilitystudy-/1680a0c6da.
20 Personal Data Protection Commission Singapore. (2020). Model AI Governance Framework. www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework.

21 OECD Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 2020.

Table 1: Key principles in international documents
A comprehensive study in 2020 by the Council of Europe, which included a mapping exercise as well as stakeholder consultations, provided a good analysis of the relevant soft law documents produced by governmental and non-governmental organisations around the world that could guide the development of appropriate regulatory frameworks. 24 One of the key findings of this study indicates that there are some core principles common to most of the soft law documents developed in the world, namely transparency, privacy, responsibility, justice, and non-harming. More than half of the AI ethics documents are concerned about the impact of AI on human rights. 25 The crystallization of a common set of principles that should underpin the regulation of AI is helpful, but how should this be translated into a useful regulatory framework? What should be regulated? Only the AI systems, or also the development of AI and the underlying data that is used?
An algorithm does not exist in isolation; it forms part of a more comprehensive system which includes the technological development of the AI and, according to Muller, also the people involved in the creation, control and use of the AI, as well as the people impacted by it. In other words, one should have a holistic approach and consider the whole AI life cycle. 26 This is in line with the systemic approach to AI governance proposed by the Turing Institute, which includes key legal principles, such as transparency and accountability, that must be applied to the whole AI life cycle. 27 This means that regulation of AI should include the various steps in the AI life cycle, namely the acquisition of data, the design, testing and validation of AI, the use of AI, and the impact of the AI in society. It is argued that such a systemic approach creates an appropriate way to respond legally to the concerns regarding algorithmic accountability and the potential impact of AI on human rights highlighted above.
24 CAHAI. (2020). Towards Regulation of AI Systems, DGI (2020) 16.
25 CAHAI, Feasibility Study supra, p 43.
26 Muller, C. 'The Impact of AI on Human Rights, Democracy and the Rule of Law' in CAHAI, supra, p 28.
27 Leslie (2019) supra.

The key principles compared in Table 1 across the organisations discussed above, including the OECD, are: human rights; human freedom and autonomy; prevention of harm; non-discrimination and fairness; transparency and explainability; privacy; accountability; democracy; rule of law; technical robustness; and societal and environmental wellbeing.
The EU is currently leading the way regarding the development of an AI regulatory framework; the European Commission published a draft AI Regulation in April 2021, which indeed includes reference to the whole AI life cycle. 28 This proposed regulation, which is informed by the work done by the AIHLEG, follows a risk-based approach that aims to protect the fundamental rights enshrined in the EU Charter of Fundamental Rights. AI systems with unacceptable risk are prohibited, while high-risk and low-risk AI systems are allowed, with different requirements applicable to each of these last two categories.
While this draft AI Regulation applies to the whole society, including the use of AI in government, it is necessary in the context of this article to reflect on recent initiatives that focus specifically on regulating AI in government.

Existing regulations on the use of AI in government
In contrast to the global race to develop general AI regulations, there is a smaller number of initiatives relating to the responsible use of AI in government. In view of the nature of government and its position vis-a-vis citizens, it is necessary to focus specifically on the legal issues and initiatives to regulate AI in government.
The same concerns regarding potential negative human rights implications and the need for transparency and accountability in the use of AI in society also apply to the governmental context. They are arguably even more relevant in this context, in view of the general duty on government to promote and protect human rights. A recent report issued by Open Data Kosovo showed that in various countries in Central Europe there is growing interest in, and use of, AI in government, but at the same time a need for appropriate policy and legal frameworks. 29 An underlying issue in government in the countries recorded in this study is a lack of transparency, accountability and knowledge about the use of AI in government, which confirms the need for policy and legal guidance, as well as training of officials in digital skills and knowledge of AI.
The use of AI in government can transform the way in which government delivers its services to the community. More efficiency in the administration, and improved policy development and decision-making are some of the benefits, but there are also many risks related to the use of AI by government, e.g. mass surveillance of citizens by using facial recognition software and a lack of transparency and accountability.
The Government of Canada adopted a Directive on Automated Decision-Making in 2019, with the aim of giving legal direction to the use of automated decision-making and artificial intelligence in government. 30 It has been in operation since 1 April 2019. This initiative specifies a detailed practical approach to the use of AI in decision-making in government, which provides guidance and legal certainty for Canadian citizens. It is a practical, principle-based model which could be used in other countries as well.
The Canadian Directive applies to any system, tool, or statistical model used in government to recommend or make an administrative decision about a client. Some institutions, such as the offices of the Auditor General and the Chief Electoral Officer, are excluded from the Directive, but they can enter into specific agreements with the Treasury Board of Canada regarding the application of the Directive. Automated decision systems are defined by the Directive as follows: "Includes any technology that either assists or replaces the judgement of human decision-makers. These systems draw from fields like statistics, linguistics, and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets." This definition is wide enough to include algorithms that only support decision-making by humans through probabilistic reasoning, as well as the broader spectrum of artificial intelligence applications, such as machine learning, that could support or replace human decision-making.
The objective of the Directive is to reduce the risk associated with automated decision-making and to support more 'efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.' 31 The application of this Directive should produce the following results: "Decisions made by federal government departments are data-driven, responsible, and comply with procedural fairness and due process requirements. Impacts of algorithms on administrative decisions are assessed and negative outcomes are reduced, when encountered. Data and information on the use of Automated Decision Systems in federal institutions are made available to the public, where appropriate." 32 It is evident from the stated outcomes above that procedural fairness, lawfulness, as well as transparency, are underlying principles that underpin this Directive. An important practical element included in the architecture of the Directive is the use of an algorithmic impact assessment, which is defined as: "A framework to help institutions better understand and reduce the risks associated with Automated Decision Systems, and to provide the appropriate governance, oversight and reporting/audit requirements, that best match the type of application being designed." 30 Algorithmic impact assessments vary in their scope and use: they could be limited, focusing for example only on data privacy, or comprehensive, covering a wide range of issues. The Canadian Government followed the route of a compulsory and fairly comprehensive algorithmic impact assessment for all existing and new automated decision-making systems in government.

Government of Canada
An Algorithmic Impact Assessment (AIA) must be done before the use of any automated decision system 33 in government. It is a risk-based measurement tool that helps software developers to design AI solutions in an ethical and human-centered way. In terms of the AIA provisions, four levels of risk are identified, namely:
(i) little to no impact;
(ii) moderate impact;
(iii) high impact; and
(iv) very high impact
on:
a) the rights of individuals or communities;
b) the health or well-being of individuals or communities;
c) the economic interests of individuals, entities, or communities; and
d) the ongoing sustainability of an ecosystem.
The impact levels show a clear human-centric approach and provide some indication of the risk attached to a proposed algorithm. At the opposite ends of the risk spectrum would be, for example, a government chat bot that provides information on public services and access thereto (little or no impact) and an automated decision-making tool to determine the allocation of social grants (very high impact). Understanding the level of risk is useful, but it is necessary that the software developers, as well as the intended users in government, respond in an appropriate way to mitigate the risk. The Directive therefore sets out impact level requirements for the respective levels that cover a spectrum of possible actions, for example peer review, testing of the AI model, human-in-the-loop for decisions, and appropriate explanation, depending on the level of risk. It is a four-step process consisting of the following steps, described in more detail below: a) respond to questions to determine the risk level; b) identify the various risk-based actions and apply them; c) produce a report to the government; d) publish the final updated AIA on the Canadian Government's website.
Any person or organisation that considers the development of an AI system for use in government must respond to a detailed set of questions, used to determine the relevant risk level. Examples of these questions are:
- Who collected the data?
- Will the system assist or replace a human decision-maker?
- Is the algorithmic process difficult to interpret or explain?
- What is the impact of the system on the rights and freedoms of individuals?
Based on that assessment, different actions are to be followed in the further design and use of the AI system. An example of the risk-based actions regarding human involvement is:
Risk levels I and II (low and moderate impact): decisions may be rendered without any direct human involvement.
Risk levels III and IV (high and very high impact): decisions may not be made without specific human intervention points during the decision-making process, and the final decision must be made by a human. 34
A report is then produced to indicate the results of the impact assessment and the system requirements regarding a range of factors, such as notice, explanation or human involvement. The final AIA must also be published on the official Government of Canada website to adhere to the principle of transparency. It is a practical approach that ensures various human intervention possibilities when artificial intelligence is to be used in government. Although this AIA is detailed and contributes to ensuring the responsible use of AI to reduce risk or harm, it could still be improved or fine-tuned, for example, to include a requirement to stipulate what assurances regarding the protection of human rights are provided.
33 Art. 6 of the AI Directive.
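The logic of the Canadian AIA described above, in which questionnaire answers determine an impact level and the impact level determines the required mitigations, can be sketched as follows. The scoring weights and action sets here are illustrative assumptions loosely modelled on the Directive, not the official AIA instrument:

```python
# Hypothetical mapping from impact level (I-IV) to required mitigations.
IMPACT_ACTIONS = {
    1: {"publish_aia"},                                   # little to no impact
    2: {"publish_aia", "peer_review"},                    # moderate impact
    3: {"publish_aia", "peer_review",
        "human_intervention_points"},                     # high impact
    4: {"publish_aia", "peer_review",
        "human_intervention_points",
        "final_decision_by_human"},                       # very high impact
}

def impact_level(replaces_human, hard_to_explain,
                 affects_rights, affects_wellbeing):
    """Toy scoring: one point per risk factor, with a floor of level I."""
    score = sum([replaces_human, hard_to_explain,
                 affects_rights, affects_wellbeing])
    return max(1, score)

def required_actions(**answers):
    """Look up the mitigations required for the assessed impact level."""
    return IMPACT_ACTIONS[impact_level(**answers)]

# The article's two example systems at opposite ends of the risk spectrum.
chatbot = dict(replaces_human=False, hard_to_explain=False,
               affects_rights=False, affects_wellbeing=False)
grants = dict(replaces_human=True, hard_to_explain=True,
              affects_rights=True, affects_wellbeing=True)
```

On this toy scoring, the information chat bot lands at level I and merely publishes its AIA, while the social grant allocation tool lands at level IV and must, among other things, leave the final decision to a human, mirroring the Directive's graduated requirements.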
The adoption of this Directive does not mean that the brakes are put on innovation in government. To the contrary, it was designed to balance the benefits of innovation using algorithms with the societal need for the ethical and responsible use of AI. The Directive translates legal principles into practical measures that should be applied to the design and use of AI in government.
In the United Kingdom there is not yet an AI regulation, but the UK AI Council developed a policy document, the 'AI Roadmap', which focuses on the landscape for the responsible use of AI in government and in society. This UK AI Roadmap confirms the importance of key principles such as accountability, transparency and respect for human rights in the development and application of AI, thus providing a solid basis for the regulation of responsible or trustworthy AI. 35
Regulating responsible AI in government is only one side of the coin. Well-equipped leaders and other officials in government who understand the new digital environment and can harness the benefits of AI in the delivery of public services are equally important. Canada has established a Digital Academy as part of the Canada School of Public Service. 36 In Sweden there is also a dedicated focus on developing digital leaders. AI Sweden, a collaboration between the Swedish Government, the Swedish Centre for Applied AI and the private and public sectors, has developed a program to educate leaders about the technical, legal and ethical aspects of using AI in society. 37 In the UK it is also proposed that officials in local and national government should be equipped with digital and data management skills. In Germany, researchers from the Bertelsmann Stiftung and iRights.Lab developed a practice guide for algorithmic support systems aimed at the common good, which could be used by officials in government. 38 The steps contained in this practice guide deal with the planning, development and use of algorithmic support systems in government. It is clear that appropriate regulation that will enhance innovation, as well as leadership and skills development for responsible AI in government, are crucial building blocks for digital governance.
34 Appendix C of the AI Directive.
35 UK AI Council. (2021). AI Roadmap. https://www.gov.uk/government/publications/ai-roadmap.
36 Government of Canada. (2019). Get in friends, we're going to the CSPS Digital Academy. https://www.canada.ca/en/government/system/digital-government/living-digital/get-in-friends-were-going-to-csps-digital-academy.html.
It is not only at national government level that there are initiatives to use AI in government. Verhulst refers to AI localism to describe the initiatives of local decision-makers to enhance the use of AI in local government. 39 This development, which is recorded inter alia in Canada, USA and in some EU Member States, fills the gap left by a lack of national government regulation, or incomplete governance provisions within the private sector. Verhulst developed an AI localism canvas to serve as a guide for policymakers, local government leaders, researchers, and software designers for developing responsible AI solutions for their cities or local communities. This canvas covers a spectrum of topics that are critical in the whole AI life cycle. Appropriate legislation is an important part of this AI localism canvas, but it fits within a larger context that includes policy, governance and legal aspects.
Figure: AI localism canvas. Source: Verhulst, https://medium.com/swlh/the-emergence-of-ai-localism-governing-artificialintelligence-at-the-local-and-city-level-4988c21cedd6
Some examples of local government initiatives are the New York City Automated Decision Systems Task Force, established in 2018 40, and the Amsterdam Algorithm Register. 41 In terms of a local New York law 42, the Automated Decision Systems Task Force was established to support the responsible use of automated (algorithmic) decision-making within the local government, to build capacity, and to manage the use of algorithms in decision-making in the delivery of services to the community. The Amsterdam Algorithm Register is an initiative by the City of Amsterdam to provide an overview of the use of algorithms in the delivery of services in Amsterdam. In addition to being quite informative, it also provides citizens the opportunity to participate in building human-centered algorithmic solutions in Amsterdam. This register promotes transparency and public participation, which are important elements of ensuring responsible AI in government. In another initiative, the City of Amsterdam is piloting a 'Framework for Responsible AI in Times of Corona', which provides a quick self-assessment tool to assess the level of adherence to the key requirements of responsible AI. 43 The results of this quick assessment could then be used to adjust the design or application of the AI to ensure that it meets the requirements of responsible AI. This Amsterdam Framework is not a law, but it includes various legal aspects such as respecting democracy and the rule of law, compliance with existing legislation, respecting human rights, and the degree of transparency or explainability. In the absence of specific regulation on the use of AI in government, this practical initiative by Amsterdam is useful.
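A quick self-assessment of the kind piloted by Amsterdam can be sketched as a simple checklist score. The questions below are paraphrased assumptions based on the legal aspects mentioned above, not the framework's actual wording or methodology:

```python
# Hypothetical checklist items, paraphrased from the legal aspects the
# Amsterdam Framework is said to cover; not the framework's own wording.
CHECKS = [
    "respects democracy and the rule of law",
    "complies with existing legislation",
    "respects human rights",
    "decisions are transparent and explainable",
]

def adherence(answers):
    """answers: dict mapping a check to True/False; returns a score in [0, 1]."""
    met = sum(1 for c in CHECKS if answers.get(c, False))
    return met / len(CHECKS)
```

A low score would then prompt the designers to adjust the design or application of the AI before deployment, which is the stated purpose of the Amsterdam quick assessment.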

The South African Legal Context
There is currently no specific law on artificial intelligence in South Africa, but there is a broader legal context within which discussions about any guidelines, policies or regulations regarding responsible AI in government, as well as in society, should take place. The international developments highlighted above provide useful guidance in the development of an AI legal framework in South Africa. These principles are foundational to the proposed AI Regulation in the EU, which provides a well-founded, comprehensive regulatory framework for the development and use of AI, and thus a good point of departure for countries embarking on the road of regulating AI.
The table below gives an indication of how South Africa's constitutional provisions compare with the key principles for regulating AI found in most international documents. Human-centered AI is based on the premise that human rights should be protected in the design and use of AI in society, and the recognition, protection and promotion of human rights is a key feature of the South African constitutional system. The challenge is to apply these constitutional principles and human rights appropriately in regulating the use of AI in government.
In the context of AI, the issue of human dignity relates to the principle of fairness, which is aimed at ensuring that algorithmic decisions do not create discriminatory or unjust impacts or cause harm. In applying this principle, one should not simply look at the algorithm, but start with the data on which the algorithm is developed. The substantive dimension of fairness, according to the AIHLEG, aims to ensure that individuals and groups are 'free from unfair bias, discrimination and stigmatisation', and it applies to the whole AI life cycle. 44 The Turing Institute clarified this further by stating that fairness means data fairness, design fairness, outcome fairness and implementation fairness. 45 Such a systemic consideration of fairness will contribute to an inclusive approach to regulating AI in government.
The concept of oversight is part of the constitutional architecture, by way of the various checks and balances related to the separation of powers as a general measure. In the context of human-centered AI, human oversight or human control is very important, and specific legal provision has to be made to accommodate it. A clear indication should be provided of the degree of human involvement, for example, human-in-the-loop, which suggests active human involvement in, and control over, the AI. 46 In short, the South African Constitution includes foundational values and individual human rights, which are key requirements for trustworthy or responsible AI. These are important building blocks for any initiative to develop policy or legal frameworks for responsible AI in government, but they are certainly not enough.
While the Constitution provides the foundation, there are also some provisions in existing legislation that are relevant to the discussion about the development of an appropriate legal framework for responsible AI. The right to just administrative action (sec. 33) is of particular relevance to the use of algorithms in decision-making in government. It implies that everyone has the right to administrative action that is lawful, reasonable and procedurally fair. The Promotion of Administrative Justice Act (PAJA), Act 3 of 2000, was approved by Parliament to give effect to this right. Administrative action in terms of s.1 of PAJA includes a decision of an administrative nature by an organ of state or a natural or juristic person exercising a public power or performing a public function. 47 In all three spheres of government a huge variety of decisions of an administrative nature are taken on a daily basis, and everyone is entitled to lawful, reasonable and procedurally fair decisions, based on the provisions of section 33 of the Constitution. Algorithms could be used to enhance decision-making in government or even to replace human decision-making, for example, a service delivery chatbot that interacts with citizens on the website of a municipality. Automated or algorithmic decision-making refers to the use of artificial intelligence in decision-making. 48 Automated decision-making in government is a reality and will be used more frequently through various applications in all three spheres of government in South Africa. This raises various questions, e.g., what is the impact of PAJA on automated decision-making, and how could the right to just administrative action be applied when artificial intelligence is used in government? It is beyond the scope of this article to dissect these questions in detail; however, it is evident that issues relating to just administrative action and algorithmic decision-making should form part of the development of regulations regarding responsible AI in government.

44 AIHLEG n 4.
45 Leslie n 3.
46 Personal Data Protection Commission Singapore. (2020). Model AI Governance Framework p30.
Another relevant law in this context is the Protection of Personal Information Act (POPIA), Act 4 of 2013, which gives effect to the right to privacy contained in sec. 14 of the Constitution. The right to privacy is an important individual human right and includes the right to protection of personal information or data. Various aspects of the right to privacy could be impacted by AI, for example, informational privacy, which means the ability of a person to control his or her personal information. POPIA includes the right of a data subject (natural or juristic person) not to be subject to a decision based solely on automated processing of personal information to create a profile of the person. 49 This is a fairly narrow stipulation regarding the use of algorithms, which suggests that more should be done when a regulatory framework for responsible AI in government is designed. Accommodating the right to privacy across the whole AI life cycle, which includes the collection of data and the design and application of AI, is an important element of responsible or trustworthy AI and will enhance consumers' trust in the relevant AI applications.
In considering suitable legislation to regulate the use of AI in government, the socio-economic context of the country should be considered as well, since law does not operate in a vacuum. The high level of socio-economic inequality in South Africa requires that an inclusive approach be followed regarding the development of the digital economy in general and, more specifically, the regulation of AI in government. 50 Such an inclusive approach was inter alia promoted by the EU under the German Presidency during 2020. The Preamble to the Berlin Declaration on Digital Society and Value-Based Digital Government contains this appropriate statement, which is important not only in the EU but also in the rest of the world: 51 "Everyone should be able to seize the opportunities offered by digitalisation. No one should be left behind. This declaration aims to contribute to a value-based digital transformation by addressing and ultimately strengthening digital participation and digital inclusion in our societies." It is clear that in terms of the current South African legal context, there are already important principles and various human rights provisions in the Constitution which provide the foundation for the development of regulations relating to responsible AI in government. It is also clear that much work still has to be done. There is not only a need to translate the key legal principles to the AI context, but various questions about the way in which AI could be used in government, and the legal implications thereof, should also be attended to, for example: what are the administrative law implications of automated decision-making in government, or how would the use of AI in health care impact citizens' access to health care services? In the design of such an AI law, the objectives and scope of application should be clarified; would it, for example, enable innovation or protect human rights, or both? 52 A clear indication of how risk would be managed should also be included.
An AI law in South Africa should not only serve the purpose of providing a legal framework for responsible use of AI in government, but it could also be an enabler to improve the quality of public services. The use of technology, including AI, in a constructive way, for example, by creating more interactive web-based tools for community participation and monitoring of public services, will enhance decision-making and service delivery.
A regulatory framework should not be developed in isolation, but should be linked to relevant policy and practical initiatives, as the Canadian experience indicates. According to the World Economic Forum, the development of a national AI strategy includes a regulatory framework that provides ethical norms and promotes the development of AI, and it should also include skills development, enhancement of the research potential of the country, investment in key economic sectors, and international collaboration. 53 Such a regulatory framework should also allow sufficient scope for innovation, as is evident in the case of the EU AI Regulation proposal, as well as the Canadian Directive.

Conclusions and Recommendations
In the international discourse about responsible AI there is a growing consensus about core principles that are included in various guideline documents, and which should be the basis of any legal framework, whether at national or international level or both. While much of the debate, as well as most of the actual documents, is concerned with the general development and use of AI in society, only a few initiatives focus on the policy and legal framework for responsible AI in government. These guideline documents all include a set of principles which have a clear legal basis in constitutional law and which are contextualised to apply in a workable fashion to the field of artificial intelligence.

51 European Council. Berlin Declaration on Digital Society and Value-Based Digital Government. 2020. https://www.europeandataportal.eu/en/news/berlin-declaration-digital-society-and-value-based-digital-government.
In response to the first research question, it is concluded that developing an appropriate policy or legal framework for responsible AI in government requires a solid foundation, consisting of key principles such as transparency, accountability, fairness and privacy. Technology, in particular AI, should serve humanity and therefore requires a human-centered approach which recognises human rights.
The origin and contributors or designers of these guideline documents are diverse and include academic institutions, international organisations, national governments, a supra-national institution (the European Union), expert teams and information technology companies. This is to be expected, since AI is not limited to any specific institution, country, economic sector or field of application. Governments, the private sector, academic institutions, as well as civil society organisations, all have an interest in the development and use of AI. It is therefore necessary that the diverse interests and contributions of software designers, decision-makers and users be considered in the development of policy and legal frameworks for responsible AI in government.
In respect of initiatives focused on responsible AI in government, the initiative of the Canadian Government, namely the Directive on Automated Decision-Making (2019), is an important development that includes principles as well as a practical approach, in the form of an Algorithmic Impact Assessment. Such a risk-based approach combines an appropriate legal basis with a practical instrument to assess the potential impact, and limit the risk, of using specific algorithms in government. Allowing scope for innovation is balanced in this way with legal certainty and clarity. The absence of legal frameworks does not deter software developers from creating new and innovative AI products for use in government, as is, for example, evidenced by the Open Data Kosovo report. However, it is in the interest of society to provide clear policy and legal guidance, based on a human-centric approach, for the development and use of responsible AI in government. The regulation of AI in government in South Africa should therefore be based on the constitutional principles of transparency and accountability and recognition of the basic human rights contained in the Constitution.
Whether a comprehensive approach for regulating AI in society is used, for example the recent EU proposal for an AI Regulation, or a limited approach focusing only on AI in government, as Canada has done, or a combination of both, certain key elements should be included in a proposed regulation. In response to the second research question regarding the key elements of a future AI law for South Africa, it is recommended that a legal framework for responsible AI in government should include at least the following provisions:

i) Objective, definitions and scope of the regulation, which should enhance innovation.
ii) Key legal principles of transparency (explainability), accountability, privacy and fairness; respect for human rights.
iii) Application to the whole AI life cycle.
iv) Human oversight.
v) An algorithmic impact assessment based on a risk model.
vi) Remedies.
Practical tools such as an algorithmic impact assessment, as used by Canada, or a sandbox for testing new AI systems in a protected environment, as proposed by the EU, are important instruments that will complement such a regulation and support innovation. Although the Canadian Directive is implemented in a developed economic context with a well-established governmental system, it is argued that the Directive could be used in various jurisdictions, including South Africa, and in different socio-economic contexts, with relevant legal amendments to fit the specific legal system. This is due to its practical risk-based approach and its recognition of fundamental human rights in the design and use of AI, which is of universal interest. Various socio-economic needs, such as access to good education and health services, could be addressed by appropriate policy initiatives that accompany the AI regulation. This should strengthen innovation while creating opportunities for sustainable development.
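To illustrate how an algorithmic impact assessment built on a risk model might operate, the following sketch shows the general mechanism in a deliberately simplified form. It is a hypothetical illustration, not the official Canadian questionnaire: the point values, thresholds and safeguard descriptions are assumptions made for the example. The idea it demonstrates is that answers to impact questions accumulate a raw score, documented mitigation measures reduce it, and the net score maps to an impact level that determines the safeguards required before deployment.

```python
# Hypothetical simplification of a risk-based algorithmic impact
# assessment (inspired by, but not reproducing, Canada's AIA): impact
# points raise the raw score, mitigation points reduce it, and the net
# score maps to an impact level that determines required safeguards
# (e.g. the degree of human oversight) before the system is deployed.

def impact_level(impact_points, mitigation_points):
    """Map a net risk score to an impact level from I (little impact)
    to IV (very high impact). Thresholds are illustrative only."""
    score = max(impact_points - mitigation_points, 0)
    if score < 10:
        return "I"    # little to no impact: minimal safeguards
    if score < 25:
        return "II"   # moderate impact: e.g. plain-language notice
    if score < 40:
        return "III"  # high impact: human intervention in decisions
    return "IV"       # very high impact: senior approval, full audit

# Example: high raw impact, partly offset by documented mitigations.
print(impact_level(impact_points=35, mitigation_points=8))  # "III"
```

The design choice worth noting is that mitigation measures lower the assessed level rather than merely being recorded: this rewards departments for building safeguards in from the start, which is precisely the incentive a regulation aimed at both innovation and risk reduction should create.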