Test everything; hold fast to what is good
1 Thessalonians 5:21
The WCC central committee, meeting in Geneva on 21-27 June 2023, expresses concern at the accelerating pace of development and application of generative artificial intelligence[1] (AI), which has provoked intense public discussion and debate on the risks of this new technology as well as its potential benefits. The statement on “New and emerging technologies, ethical challenges” adopted by the executive committee of the World Council of Churches (WCC) in November 2022 – based on discussions at the WCC 11th Assembly in September 2022 – observed “a number of grave ethical challenges that emerge from the accelerating development of these technologies, the corporate commercial logic that drives them, and the massive concentration of power in the hands of a very few individuals with disproportionate impact on the lives of all.” We share the concern that the current rapidity of AI development is not guided by the common good, but by the commercial interests of the most powerful, with unknown but potentially grave risks for society.
Concerns about this type of technology have been longstanding in the ecumenical movement. Indeed, a WCC Faith and Order study document pointed out almost 20 years ago that developments in artificial intelligence raise questions about what it means to be human and “the implications for our understanding of human intelligence — and human uniqueness as created in the image of God.”[2]
Nevertheless, the positive potential of generative AI for major advances in health science is already being demonstrated, and the technology is expected to have important applications in many other areas for enhancing human well-being, education and environmental sustainability. However, driven by vastly increased private sector investment and commercial competition, and without effective regulatory frameworks at national and international levels, leading AI laboratories are engaged in a perilous race to develop and deploy ever more powerful AI models that not even their developers can fully understand or reliably control. Moreover, due to the ‘digital divide’, even AI’s positive benefits are unlikely to be shared equally, and the development of this new technology is likely to increase inequalities.
Significantly, after years of advocacy and concerns expressed by civil society and marginalized and vulnerable groups, many scientists and leaders in the development of this technology have recently joined in a dramatic statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[3] Many such scientific and industry leaders have also issued an urgent appeal for a six-month pause in the development of AI systems more powerful than GPT-4 – the most advanced model to date – to allow for the development and implementation of shared safety protocols for advanced AI design and development, and for the establishment of robust AI governance and regulatory systems.[4]
The central committee affirms the concerns expressed by many regarding the absence of effective regulation of the accelerating development of a technology with such vast acknowledged potential for harm as well as for good. We recognize the many areas in which narrowly constrained applications of this technology are already demonstrating their utility. However, we also underline the areas in which harms are already evident, including algorithmic bias entrenching discrimination based on race and gender, and the amplification of divisive and destructive misinformation, undermining democracy, social cohesion and trust.
Further risks associated with AI include a new wave of technologically driven unemployment, even greater concentration of power and wealth in the hands of a technological elite, enhanced capacities for surveillance and repression, the development of autonomous weapons systems (‘killer robots’)[5] able to function without meaningful human control, and proliferation in the development of chemical, biological, and cyberweapons by states and non-state actors.
We note and will monitor the current developments at European Union level towards the first ever regulatory framework for AI – the Artificial Intelligence Act – aiming to ensure that AI systems are overseen by human beings, are safe, transparent, traceable, non-discriminatory, and environmentally friendly – though we observe with concern that these efforts are already being undermined by AI industry lobbyists.
Moreover, particularly with regard to efforts to develop ‘Artificial General Intelligence’[6], we see a need to draw on the legacy of Christian theology and anthropology in order to critically and constructively contribute to the public discourse on this matter and to counter a modern form of idolatry. As Christians, we also raise the concern that AI may present challenges for the work of the churches in ministry and theology, replacing human interaction with digital facsimiles, facilitating plagiarism instead of theological insight, and threatening traditional and Indigenous knowledge and wisdom. Theological reflection needs to hold in tension interrogating AI technologies while accepting their reality, critiquing these technologies according to principles emerging from Christian theology and ethics, especially ecological and contextual theologies, and seeking ways that they can contribute to the building up of the peaceful and just reign of God.[7]
While the development of AI can be a powerful positive force for sustainable development, appropriate regulation, accountability and education must be ensured, so that human beings – rather than profit – remain at the centre of this technology, and so that justice and the protection of God’s precious and increasingly threatened Creation – rather than its exploitation – define its purposes.
The WCC central committee therefore:
Appeals to all governments urgently to establish strong and binding regulatory regimes based on the precautionary principle[8] for the further development and deployment of more advanced AI systems – especially Artificial General Intelligence – including through regulation of access to the large amounts of specialized computational power necessary for AI development.
Encourages the development of legal liability frameworks to hold those who develop and deploy AI systems legally responsible for resulting harms.
Urges all governments to cooperate internationally for the negotiation of a common international normative standard for the regulation of AI.
Calls on the UN Secretary-General to make the regulation of further development and deployment of advanced AI a key priority in the UN’s New Agenda for Peace, and to promote the negotiation of a common international standard thereon.
Requests the WCC general secretary to consult and collaborate with ecumenical, interfaith, civil society and UN system partners on ways in which the WCC might help mitigate AI risks while leveraging its potential benefits for human flourishing and environmental sustainability.
Invites reflection and action on this and related issues pertaining to ‘faith, science and wellbeing’ with input from WCC’s Faith & Order Commission, the Commission of the Churches on International Affairs, the Commission on Health & Healing, and other WCC commissions and advisory bodies as relevant to their respective mandates.
Invites all WCC member churches and ecumenical partners to advocate with their governments for swift action to establish appropriate regulatory regimes and accountability frameworks, and to engage in theological reflection and study through their theological education institutions on the ethics of AI and its implications for human self-understanding, taking into account its potential positive as well as negative consequences.
[1] Generative artificial intelligence describes algorithms (such as ChatGPT) that can be used to create new content, including code, text, audio, images and videos.
[2] Christian Perspectives on Theological Anthropology: A Faith and Order Study Document, Faith and Order Paper 199 (Geneva: WCC Publications, 2005), 26; WCC Digital Archive: https://archive.org/details/wccfops2.206/page/26.
[3] Center for AI Safety, Statement on AI Risk: https://www.safe.ai/statement-on-ai-risk#signatories.
[4] Future of Life Institute, ‘Pause Giant AI Experiments: An Open Letter’: https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
[5] WCC executive committee Minute on Lethal Autonomous Weapons Systems (‘Killer Robots’), November 2019.
[6] Artificial General Intelligence (AGI) refers to a theoretical type of AI that equals or exceeds human-like cognitive abilities, such as the ability to learn, reason, solve problems and communicate in natural language. In contrast, narrow (or weak) AI is able to solve one specific problem or task in response to a command, but lacks general cognitive abilities.
[7] See, for example, Erin Green, “Sallie McFague and an Ecotheological Response to Artificial Intelligence,” Ecumenical Review 72, no. 2 (2020): 183–96, https://doi.org/10.1111/erev.12502.
[8] The precautionary principle states that if a product, an action or a policy has a suspected risk of causing harm to the public or to the environment, protective action should be supported until there is complete scientific proof that the risk is absent or minimal.