👩🏻‍🦱💭
@amethystbias.bsky.social
This person is a Visiting Assistant Professor at Tulane University and the Ethicist and Growth Specialist for Nummi. Her area of specialization is applied ethics.
Keywords: digital/AI ethics, social media, virtue
Using AI isn’t about skipping thinking, it’s like asking a sharp colleague who replies instantly. Speed doesn’t replace thought; it frees up space for it. If you equate intelligence with human-human interaction, that’s a separate, valid point.
May 7, 2025 at 11:47 PM
I get that instinct, but again, the real intelligence test maybe isn’t whether someone uses AI, but how they use it. Sifting through noise is part of almost any meaningful process, whether reading, drafting, or arguing. The best thinkers don’t avoid messy inputs; they refine them. It’s the pursuit that counts.
May 7, 2025 at 11:45 PM
I don’t think it’s laziness, maybe it’s unfamiliarity. Most people aren’t trained to evaluate models or verify data pipelines. They use AI like they use a calculator: expecting correct results. The problem is when we treat generative models as oracles. Used on that premise, AI becomes useless.
May 7, 2025 at 11:37 PM
Garbage in, garbage out applies. But I’d say the value isn’t in blind trust, it’s in how people use the tool. AI’s not a shortcut for thinking, it’s a prompt for it. Verification still matters; the tool just changes where the effort goes. Putting the power in the tech over the people is what makes me wary.
May 7, 2025 at 11:35 PM
I get that, but even in the AI sense, “hallucination” still hints at something deeper: the model is trying to make something from patterns, just like we do when we dream. It’s not always useful, but it’s not random either. The key’s in how we guide and check it. I think use depends on us, not it.
May 7, 2025 at 11:25 PM
With AI the key is using it wisely > blindly. We still let folks drive, vote, and write novels. I agree with you, just wanted to add: the trick may not be avoiding hallucinations, but learning to interpret and guide them for our betterment. The article didn’t do that justice; it mis-sourced the problem.
May 7, 2025 at 9:52 PM
It’s in some ways irresponsible to expect tools built for (ex.) efficiency to prioritize human flourishing. Instead, we might find answers by focusing on empowering users to navigate these systems critically, ensuring their choices and agency take precedence over blind reliance on tech.
May 7, 2025 at 8:42 PM
Pointing fingers at tech developments is different from focusing on user awareness. When we give tech the responsibility to produce goods for our well-being, we’re placing that power in its design.
May 7, 2025 at 8:42 PM
There’s often this implicit trust that tech, whether AI or dating apps, will guide us to better outcomes. That trust can be harmful if we forget that these platforms are designed with their own goals in mind. When /we/ rely too heavily on tech for growth or well-being, we risk losing sight of the goal.
May 7, 2025 at 8:36 PM
AI, like any tool, can feel useless depending on how it’s used. I see an air fryer the same way: I’ve always relied on a convection oven. I consider the issue to be how we use AI. We might want to focus on users... whether their use is enhancing or hindering growth.
May 7, 2025 at 8:33 PM