Rich Tatum »∵« (@richtatum.bsky.social)
Technical SEO, AI/LLM automator, prompt whisperer, editor, media guy, photographer, factotum. Noticer of overlooked details.
I ♡ story, dataviz, analytics, writing, editing, podcasting.

→ Available to hire!
December 4, 2024 at 2:48 PM

(To be fair, some organizations are taking steps toward transparency by publishing model cards that outline aspects of the training data and limitations. That’s a step in the right direction.)

2️⃣ Second: The outputs are generally constrained by ethical, moral, and legal frameworks—but whose?

Every AI response is shaped by built-in **guardrails**—the rules and algorithms influencing, modifying, or filtering every output. Unless the model replies with an apologetic refusal to answer, these guardrails are usually invisible and unknowable to us.

Think about it: every rule or law embeds an ethical or moral view. Rules reflect worldviews. Thus, the guardrails attempting to constrain AI and LLMs reflect the values and ethics of their builders.

Individual users like you or me might disagree with some of these rules and permissions—if we could know them. But they, too, are opaque and unknowable to us.

I recognize that proprietary intellectual property has commercial value. Companies guard their secret recipes—Coke doesn’t reveal its formula, after all.

But consumers have a right to know what they’re ingesting, and the same should hold for intellectual consumption. There needs to be a useful balance between protecting IP and ensuring transparency.

3️⃣ Third: We can’t escape involving our own biases and worldviews.

Unfortunately, the only element we can truly know about our AI interactions is what biases, beliefs, and assumptions we ourselves bring to the conversation. And even there, most of us remain largely unaware of our own hidden brains.

It’s important to be aware of our cognitive biases and to be intentional about the worldview we align with. But, admittedly, this is very difficult to do.

This dynamic plays out at every level of AI systems: our fundamental beliefs about reality are embedded in the training data (whether we contributed to it or not), encoded into the guardrails (whether we agree with them or not), and present in our every interaction as end users.

These worldviews—which may conflict, overlap, or remain hidden—are inescapably woven into the fabric of AI, whether we approach these tools as atheists, agnostics, or adherents of any faith tradition.

→ So, what does faith have to offer when considering the ethical dimensions of AI?

Faith traditions provide rich ethical principles—compassion, justice, respect for human dignity—that can guide AI development. By intentionally integrating these values, we can create AI systems that not only reflect diverse worldviews but also aspire to our highest shared ideals.

From the curation of training data to the design of alignment systems to our daily interactions, our fundamental beliefs about reality and ethics are inevitably present, whether we acknowledge them or not.

This becomes even more critical given the current lack of transparency in AI development. We have no “ingredients list” for these models—no clear view into what worldviews, biases, or ethical frameworks have shaped their training data or alignment systems.

Without this transparency, how can we fully understand or responsibly engage with these increasingly influential tools?

I suspect that future development efforts will focus on creating bespoke, niche generative models aligned with local community standards and with various worldviews and faith groups.

This could help users engage with AI that resonates more closely with their values while still benefiting from the technology. By honoring various faith perspectives, we can ensure that AI serves as a tool for inclusion rather than division.

This reality demands a new level of honesty in commercial AI discourse.

Beyond asking what role faith and ethics should play in AI development (an important question, to be sure), we need to acknowledge that foundational assumptions about reality, meaning, and ethics—whether derived from scientific materialism, religious traditions, or philosophical frameworks—are already deeply woven into these systems.

The work ahead isn’t just about recognizing these embedded worldviews, but about actively incorporating ethical principles from faith traditions to guide the development of these powerful tools.

This transparency and intentionality would serve everyone, regardless of worldview, by enabling informed choices about the AI systems we create and use, and ensuring they align with our shared values and ethical standards.

<end transmission>
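To make the “ingredients list” idea concrete, here is a minimal sketch of what a machine-readable model card could look like. All field names are hypothetical (loosely inspired by published model cards, but not any real schema), and the model named is invented for illustration:

```python
# A hypothetical "ingredients list" for a language model, sketched as a
# plain Python data structure. Every field name here is illustrative only.
model_card = {
    "model_name": "example-llm-1",  # invented model name
    "training_data": {
        "sources": ["web crawl", "licensed books", "code repositories"],
        "cutoff_date": "2024-01",
        "known_gaps": ["low-resource languages", "paywalled scholarship"],
    },
    "alignment": {
        "methods": ["RLHF", "constitution-style rules"],
        "value_frameworks_disclosed": False,  # the opacity the thread describes
    },
    "guardrails": {
        "refusal_categories": ["violence", "medical advice"],
        "policy_authors": "undisclosed",
    },
}

def transparency_gaps(card: dict) -> list:
    """List the parts of the card a user still cannot inspect --
    the 'invisible and unknowable' elements the thread points to."""
    gaps = []
    if not card["alignment"]["value_frameworks_disclosed"]:
        gaps.append("alignment value frameworks")
    if card["guardrails"]["policy_authors"] == "undisclosed":
        gaps.append("guardrail policy authorship")
    return gaps

print(transparency_gaps(model_card))
```

Even a skeletal disclosure like this would let users see at a glance which worldview-laden choices (data curation, alignment rules, guardrail authorship) remain undisclosed.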