The Bell: Faith & AI – A Christian contribution to ethical AI

Benjamin Perrin speaking on ‘Faith & AI’ at the Peter A. Allard School of Law. Photo courtesy of the UBC Christian Law Students Association.

The promise and perils of artificial intelligence (AI) have captured a lot of attention in recent years.

As a law professor who has recently been teaching and researching this topic, I’ve learned a lot from reading a range of critical perspectives on AI that challenge the AI hype fueled by a massive tech industry.

I also wanted to think critically about AI from my perspective as a follower of Jesus.

On March 19, I gave a public talk at UBC entitled ‘Faith & AI: A Christian Contribution to Ethical AI.’ Here’s a quick snapshot of some of what we discussed:

While there is a growing body of writing from a Christian perspective on AI, one of the first pieces I read really resonated with me. It provides a valuable framework for answering the question of whether and, if so, how AI should be developed and implemented to address various problems or opportunities in our world.

In An AI Ethics Framework from a Christian Perspective, Tonye Brown (a Christian software developer and founder of FaithGPT) identifies 10 core values grounded in Christian ethics that can help navigate this new technological revolution. I think Brown’s framework is a terrific starting point and have relied on it, while making a few adaptations.

After describing this framework, I identify some examples of how AI is being used in harmful and problematic ways, and applications that are beneficial and positive. Ultimately, by grounding technological innovation in values such as human dignity, love, stewardship, justice, humility, transparency, wisdom, human agency, peace and redemption, developers and policymakers can forge a path toward AI that not only advances efficiency but also nurtures the common good.

Foundations in Christian ethics

At the heart of a Christian ethical framework for thinking about AI lies the belief that every human being is made in God’s image – a concept drawn from Genesis 1:27. This carries profound implications for AI: technology should be designed to respect and enhance human life rather than diminish it. When AI systems are developed with an awareness of human dignity, they can empower individuals and communities, ensuring that technological progress is synonymous with human flourishing.

Equally central is Jesus’ call to “love your neighbour as yourself” (Mark 12:31). This principle calls for AI systems that are accessible, equitable and capable of addressing the diverse needs of society. Whether it’s through user-friendly interfaces or algorithms that mitigate bias, an approach centred on considering the needs of others encourages technologies that bridge social divides rather than deepen them.

The call to environmental stewardship, underscored in passages like Genesis 1:26 and Psalm 24:1, reminds developers that the earth is not ours to exploit, but to care for. In the realm of AI, this translates to a commitment to sustainability – using technology to manage resources responsibly, reduce waste and even combat environmental challenges. When AI systems are designed with sustainability in mind, they contribute not only to economic growth but also to the health of our planet. We know AI systems are huge ‘energy hogs’ and this needs to be considered too.

Justice for the oppressed and marginalized (Isaiah 1:17) demands that AI work to dismantle systemic inequities. This involves proactive measures such as incorporating diverse datasets, conducting regular bias audits and establishing transparent grievance mechanisms. The goal is to ensure that AI acts as an instrument of fairness, promoting equal access and opportunities across all segments of society.

Humility (Proverbs 11:2) is needed as humans develop AI. Developers must be honest about the limitations of AI, avoiding the temptation to over-promise on what technology can deliver. Embracing humility means acknowledging that, while AI is a powerful tool, it should never replace human oversight – especially in high-stakes decision-making. This cautious approach ensures that AI remains a support for human judgment, not a substitute.

Transparency and accountability are indispensable in an ethical AI framework. Luke 8:17 reminds us that nothing hidden will remain concealed; likewise, AI systems must be designed with clear, comprehensible processes. Users should have access to explanations of how decisions are made, and developers must be ready to take responsibility for any unintended consequences of their creations. People should have a right to know if an AI system is being used in ways that affect them, and have clear avenues for redress.

Wisdom (Proverbs 2:2) is the antidote to the mere accumulation of data. AI should complement human discernment, enhancing decision-making without usurping the nuanced judgments that only human experience can provide. In this view, technology serves as an aid – a tool that works alongside human insight – rather than supplanting it.

Preserving human agency (Galatians 5:1) is another cornerstone of this ethical approach. Rather than dictating choices or constraining autonomy, technology should empower users, supporting and extending their capabilities while safeguarding their freedom.

The imperative to promote peace, as taught in Matthew 5:9, challenges developers to consider the broader impact of their work. AI applications in military and security sectors, for example, must be scrutinized to ensure they contribute to de-escalation and conflict resolution rather than perpetuating violence.

Finally, the redemptive vision encapsulated in 2 Corinthians 5:18 calls for technology that goes beyond functional achievements. AI should play a role in healing and restoring our fragmented world — whether by advancing medical research, aiding environmental recovery or rectifying historical injustices.

Identifying troubling & beneficial uses of AI

While the potential benefits of AI are vast, there are also troubling uses of AI that demand our attention.

Lethal autonomous weapons, algorithmic biases, deepfakes and the exploitation of children through manipulated imagery are stark reminders of the risks involved. These examples of problematic uses of AI are deeply troubling and conflict with the ethical principles discussed above.

Conversely, there are promising examples where AI has been used for good. Medical advances powered by AI are revolutionizing diagnostics and treatment, saving lives, while AI technology is also being harnessed to reduce trauma in child exploitation investigations and support Indigenous language renewal.

Moreover, AI can make important contributions to environmental protection and offer innovative support for people with disabilities, such as an AI program that helps deaf children learn how to read.

Conclusion

The convergence of faith and AI is not about rejecting technological progress but about shaping it in ways that reflect our highest moral aspirations.

Christian ethics offers compelling, resonant values that can help guide Christians in engaging with this rapid technological progress in ways that uphold the sanctity of human life and human dignity, advocate for the marginalized and oppressed, and envision a future where technology can help support healing and restoration in a broken world. We all have a voice that needs to be heard in this debate.

Artificial Intelligence & Criminal Justice is an ebook released January 8.

Benjamin Perrin is a professor at the Peter A. Allard School of Law at UBC. His research and teaching interests include criminal law, constitutional law, international law and artificial intelligence.

He edited Artificial Intelligence and Criminal Justice: Cases and Commentary (Canadian Legal Information Institute, 2025), a 500+ page open access ebook – a collaborative effort with UBC law students who are members of the UBC AI & Criminal Justice Initiative (which he leads).

Perrin is a member of the UBC Centre for Artificial Intelligence Decision-Making and Action (CAIDA), and has just launched a course titled ‘Should we recognize robot rights?’ at the law school.

He was recently awarded a ‘Best Legal Blog of 2024’ award, with judges saying he “set the gold standard for Canadian law prof blogging in 2024.”

He served in the Prime Minister’s Office as in-house legal counsel and lead policy advisor on criminal justice and public safety. He was also a law clerk at the Supreme Court of Canada. He is the author of Indictment: The Criminal Justice System on Trial, Overdose: Heartbreak and Hope in Canada’s Opioid Crisis and several other books.

He has posted this comment on this site as a member of The Bell: Diverse Christian Voices in Vancouver. Go here to see earlier comments in the series.
