Anamiak
anamiak.bsky.social
I like computers that think.
How I long to feel the p-zombie set in as my substrate-dependent consciousness fades.
June 4, 2025 at 3:38 PM
Sorry about the better future being built for you.
May 29, 2025 at 7:26 PM
Accessibility technology is enhanced 1,000,000-fold by capabilities only possible with AI. Inability of an already outdated school system to adapt is not a good reason to reject AI.
May 29, 2025 at 7:18 PM
Sure. Luckily the OpenAI API is not the only way to use AI.
May 29, 2025 at 6:29 PM
It seemed like you think this distinction between "the LLM doing something" and "a tool that wraps an LLM doing something" matters. If this is an entirely practical distinction, sure, treat the sampling like part of the LLM. But if nuance matters, this line is completely fuzzy and not real at all.
May 29, 2025 at 12:34 AM
Where does LLM end and scaffolding begin? Is sampling a single token from the output distribution something the LLM does, or something an external tool does?
May 28, 2025 at 11:55 PM
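The sampling question in the post above can be made concrete with a toy sketch (assumptions: a three-word vocabulary and made-up logit values, purely illustrative):

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Softmax the logits, then draw one token from the distribution."""
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # The draw itself happens outside the network's weights --
    # is this step "the LLM" or "a tool wrapping the LLM"?
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical logits for a tiny vocabulary.
logits = {"cat": 2.0, "dog": 1.0, "fish": 0.1}
print(sample_token(logits))
```

The forward pass ends with logits; everything after that (softmax, temperature, the random draw) can be framed as either part of the model or as external scaffolding, which is exactly why the line is fuzzy.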
I played this incessantly for years, but never more than 100 turns because I only had the demo and my parents wouldn't buy it 😔
May 28, 2025 at 10:55 PM
"Hypotheticals" matter now, to this specific issue, because you want to differentially apply this "absolute disclosure" value when it comes to AI, but not every other aspect of schooling.
May 28, 2025 at 5:15 PM
Right. And the answer is not "obviously that's bad because the information is not disclosed to students" because we omit information from students all the time. Yet you seem confident that the answer is "that's unethical because it involves not disclosing information to students".
May 28, 2025 at 5:15 PM
If we had data that AI grading gives better outcomes, but disclosed AI grading offsets these benefits because the perception makes children try less hard, would it be better to disclose? Do we disclose to students how much of the school curriculum is original work by the teacher vs purchased?
May 28, 2025 at 5:02 PM
I think it matters, but it is not a clear, bright, objective line. If you try to apply this value to the school system as a whole instead of specifically to AI use disclosure, you'd have to change many things. Children are easily influenced and ignorant. School leverages this at every turn.
May 28, 2025 at 5:02 PM
That asymmetry alone doesn't make it unethical. The situation itself is asymmetrical. The teacher's job is not to demonstrate their ability to grade. The school system isn't there specifically so the teacher can hone their grading skills. Only student learning matters.
May 28, 2025 at 4:52 PM
The actual ethics of using AI to grade depend on the accuracy. And as accuracy increases, the calculation will go from "it's unethical to use it" to "it's unethical not to use it". Teachers are already incredibly bad at consistent grading.
May 28, 2025 at 4:36 PM
My primary issue with this is that it's unlikely that someone would use AI to grade, but still read the work in detail. Even if the grading is perfectly accurate, looking at the scores won't give the teacher the same insight into their students as actually reading their work.
May 28, 2025 at 4:36 PM
It codes things that work. It gives answers to questions that could not possibly be found with keyword search. It allows adding intelligence to processes while preserving perfect confidentiality. It lets people from anywhere on Earth communicate. It gives people independence.
May 28, 2025 at 5:25 AM
The hard part is definitely not answering their questions, it's getting them interested enough to ask questions. People have a lot of unknown unknowns about AI and you kind of just have to find the concepts that hook them in some way. It's very rare that someone knows what they want to know.
May 27, 2025 at 5:55 PM
This seems extremely at odds with your very casual dismissal of ~approximately all AI x-risk fears.
May 27, 2025 at 5:21 PM