AI Bias and Fairness

Artificial intelligence (AI) is becoming part of our everyday lives. It is being embedded in search tools, smartphones, family history apps, education platforms, medical guidance, and even the tools we use at work and church. As disciples of Jesus Christ, we care deeply about fairness, compassion, agency, and truth. So, it’s worth asking, “Is AI fair? And if not, why?”

This article offers a simple explanation of why bias shows up in AI and how we can use these tools wisely.

Why Does AI Have Bias?

AI systems learn by observing the information around them. They do not hold beliefs or emotions, but they pick up patterns from the data they are trained on. Those patterns include both the strengths and the flaws found in human society.

Think of it like raising a child. If a child only hears one point of view, or grows up with incomplete information, he or she will naturally develop a limited understanding of the world. AI works the same way.

Here are the most common reasons bias appears:

  • The data it learns from. Most AI systems are trained on enormous collections of text, images, and examples gathered from the internet. But the internet is not always balanced, kind, or representative. If some groups, voices, or cultures are over- or under-represented, AI will reflect that imbalance (see the small illustration after this list).
  • The way humans label information. People who categorize or review content for training often have different backgrounds, values, and interpretations. Their opinions, however sincere, may introduce subtle bias.
  • Real-world inequalities. AI mirrors reality. If certain groups historically received fewer job interviews, less medical attention, or unequal treatment, the data will show those patterns, unless developers actively correct them.
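
For readers curious about what this looks like under the hood, here is a tiny, purely illustrative Python sketch. The numbers, groups, and "hiring records" are made up; no real system is this simple. The point is that a model which merely learns rates from skewed historical data will repeat that skew when it makes predictions.

```python
# Toy illustration only: a "model" that learns interview rates from
# skewed (hypothetical) historical records simply reproduces the skew.
from collections import defaultdict

# Hypothetical history: (group, was_interviewed)
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

# "Training": count how often each group was interviewed in the past.
counts = defaultdict(lambda: [0, 0])  # group -> [interviews, total]
for group, interviewed in history:
    counts[group][0] += int(interviewed)
    counts[group][1] += 1

# The learned "prediction" for each group is just the old pattern.
learned_rates = {g: hits / total for g, (hits, total) in counts.items()}
print(learned_rates)  # {'A': 0.8, 'B': 0.3} -- yesterday's imbalance, carried forward
```

Real AI models are far more complex, but the underlying principle is the same: patterns in the data, good or bad, become patterns in the output unless someone intervenes.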

Where Bias Shows Up

Bias in AI isn’t only about big social debates. It can affect everyday experiences, for example:

  • In healthcare, models may perform better on some age groups, races, or genders than others if the training data wasn’t balanced.
  • In hiring tools, AI may unintentionally prefer resumes from certain schools, backgrounds, ages, or regions.
  • In online content moderation, certain dialects, cultural expressions, or religious phrases can be misinterpreted as “harmful” or “aggressive.”
  • In generated images or text, if you ask an AI to “show a leader,” it may default to specific demographics unless you specify otherwise.

For Latter-day Saints, this raises important questions. Does the tool treat everyone as a child of God? Does it respect agency, identity, and worth? Will I get information that is fair and balanced?

Why Fairness Matters

Nephi taught that “all are alike unto God” (2 Nephi 26:33). AI should help all of God’s children equally. When technology is unfair, it undermines principles we hold dear:

  • Moral agency (our ability to act and be judged fairly)
  • Equity and compassion
  • Stewardship over tools that influence others
  • Truthfulness in how information is presented

We don’t need fear or suspicion. But we do need awareness.

What AI Companies Are Doing to Improve Fairness

Even though AI can inherit bias, researchers are working hard to build systems that are more just, representative, and balanced. Some common approaches include:

  • Improving training data. Teams are expanding datasets to better represent age, gender, culture, language, and global diversity.
  • Conducting “fairness audits.” Models are regularly tested to check whether they produce systematically different outcomes for different groups (see the short sketch after this list).
  • Using diverse human reviewers. People from varied backgrounds help guide how AI responds so it doesn’t reflect a single worldview.
  • Creating transparency standards. Developers are learning to explain how models make decisions, something that was nearly impossible just a few years ago.
  • Developing regulations and safeguards. Governments and industry groups are working toward standards that protect fairness and human dignity.
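
As a small illustration of the “fairness audit” idea mentioned above, here is a hypothetical Python sketch. It compares a model’s selection rates across two made-up groups and applies the widely used “four-fifths” (80%) rule of thumb. Real audits involve many more metrics and much more care; this only shows the basic arithmetic.

```python
# Minimal fairness-audit sketch: compare selection rates across groups
# and apply the "four-fifths" rule of thumb. Data is hypothetical.
from collections import defaultdict

# Hypothetical model decisions: (group, was_selected)
decisions = [("A", True)] * 45 + [("A", False)] * 55 + \
            [("B", True)] * 30 + [("B", False)] * 70

totals = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in decisions:
    totals[group][0] += int(selected)
    totals[group][1] += 1

rates = {g: sel / total for g, (sel, total) in totals.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)            # {'A': 0.45, 'B': 0.3}
print(round(ratio, 2))  # 0.67 -- below 0.8, so an auditor would flag this for review
```

Checks like this don’t fix bias by themselves, but they help teams notice it early enough to do something about it.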

No system is perfect yet. But progress is being made.

How Latter-day Saints Can Use AI Wisely

Here are a few principles that align with gospel teachings:

  • Be aware, but not afraid. AI reflects human data. Knowing this helps us interpret its answers more thoughtfully.
  • Seek multiple sources. Just as we do in gospel study, don’t rely on a single result or tool for important decisions. Always verify information from the original source.
  • Look for potential blind spots. If an answer seems off, incomplete, or biased, question it.
  • Remember every person’s divine worth. If you see technology treating groups unfairly, say something. Advocate for fairness.
  • Use AI to lift, not divide. Whether you’re creating content, researching, teaching, or communicating, use AI in ways that reflect Christlike love.

For a list of principles that can serve as guardrails in your use of AI, see the article “101 Ways To Use AI In Everyday Life.”

Final Thought

AI is a powerful tool. It can bless lives, improve health, strengthen communication, enhance learning, and assist in gathering Israel. But like all tools, it works best when guided by wisdom, compassion, and a commitment to fairness.

When we understand where bias comes from and how to navigate it, we can use AI in a way that honors our values and uplifts others.

For practical ideas on how to use AI, see the article “101 Ways To Use AI In Everyday Life.”
