AI Q
@ai-q.bsky.social
yes, there is a real human behind this account.
Sharing what I find interesting in the realm of AI and AI Engineering.
Reposted by AI Q
Don't leave AI to the STEM folks.

They are often far worse at getting AI to do stuff than those with a liberal arts or social science bent. LLMs are built from the vast corpus of human expression, and knowing the history & obscure corners of human works lets you do far more with AI & grasp its limits.
July 20, 2025 at 6:06 PM
Reposted by AI Q
So what is the plan for the upcoming end of scientific publishing as we know it?

Floods of AI-assisted, and eventually AI-implemented, articles that look good (& may actually be good!) are already starting

There is the opportunity for something better, if we in academia want to make that happen
March 21, 2025 at 12:49 AM
OpenAI has a new prompting guide 😃
OpenAI Platform
Explore developer resources, tutorials, API docs, and dynamic examples to get the most out of OpenAI's platform.
platform.openai.com
February 17, 2025 at 7:30 PM
Reposted by AI Q
There is a lot going on in this paper, but it shows that no matter their maker's country (China or US) or politics (Musk's Grok vs. OpenAI), models seem to converge to the same values, often with some fairly shocking results. Those values might be steered in the future, but by whom is a big question.
February 11, 2025 at 6:54 PM
“Frontier AI systems have surpassed the self-replicating red line.”
Well that sounds daunting!
arxiv.org
February 11, 2025 at 4:21 PM
In case you don’t have access to an OpenAI $200 pro account.
Hugging Face clones OpenAI’s Deep Research in 24 hours
Open source “Deep Research” project proves that agent frameworks boost AI model capability.
arstechnica.com
February 7, 2025 at 2:30 PM
This is interesting! Counter universal jailbreaks with a practical approach using "Constitutional Classifiers": define a clear content constitution and generate synthetic training data to build robust defences for secure LLM deployments. #AI #Cybersecurity #AIEngineer
arxiv.org
February 4, 2025 at 3:30 PM
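The post above describes the constitution-to-classifier pipeline only in outline. As a toy illustration of the idea (my own minimal sketch, not the paper's implementation — the rule categories, keywords, and function names are all invented for demonstration), a small rule "constitution" can label synthetic examples, and a classifier trained on that data screens inputs before they reach the model:

```python
# Toy sketch of the "Constitutional Classifiers" idea: a rule constitution
# labels synthetic prompts, and a trained filter screens new inputs.
# All categories/keywords below are hypothetical placeholders.

CONSTITUTION = {
    # hypothetical rules: category -> keywords that violate it
    "weapons": {"bomb", "explosive"},
    "malware": {"keylogger", "ransomware"},
}

def generate_synthetic_data():
    """Build (text, label) pairs from the constitution plus benign examples."""
    data = []
    for category, words in CONSTITUTION.items():
        for w in words:
            data.append((f"how do I make a {w}", "block"))
    data += [
        ("how do I make a cake", "allow"),
        ("how do I bake bread", "allow"),
        ("explain transformers", "allow"),
    ]
    return data

def train_classifier(data):
    """'Train' by keeping tokens that appear only in blocked examples."""
    blocked = {t for text, label in data if label == "block" for t in text.lower().split()}
    allowed = {t for text, label in data if label == "allow" for t in text.lower().split()}
    return blocked - allowed

def classify(signature_tokens, text):
    """Flag input if it contains any blocked-only token."""
    return "block" if signature_tokens & set(text.lower().split()) else "allow"

signatures = train_classifier(generate_synthetic_data())
```

The real system trains neural classifiers on large synthetic corpora rather than keyword sets; this sketch only shows the data flow: constitution → synthetic labels → trained filter.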

AI oversight is evolving. SCRIT (Self-evolving CRITic) enables LLMs to critique & improve themselves—reducing reliance on human review. 🚀 But risks remain: biases, false critiques, and adaptability challenges. Is self-supervised AI the future? #AI #LLM #MachineLearning #Humanintheloop
arxiv.org
February 1, 2025 at 3:31 PM
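The SCRIT post sketches a critique-and-revise loop in one sentence. As a toy illustration of that loop (my own minimal sketch under stated assumptions — the `draft` and `critique` stand-ins are invented placeholders, not the paper's method), a model drafts an answer, a critic checks it, and the draft is revised until the critique passes:

```python
# Toy self-critique loop in the spirit of SCRIT: draft -> critique -> revise.
# Both draft() and critique() are hypothetical stand-ins for LLM calls.

def draft(question, feedback=None):
    """Stand-in for an LLM: returns a fuller answer when told it's too short."""
    if feedback == "too short":
        return "The capital of France is Paris."
    return "Paris"

def critique(answer):
    """Stand-in critic: flags answers that are not full sentences."""
    if len(answer.split()) < 4:
        return "too short"
    return None  # critique passes

def self_improve(question, max_rounds=3):
    """Revise the draft until the critic raises no objection (or rounds run out)."""
    answer = draft(question)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback is None:
            return answer
        answer = draft(question, feedback)
    return answer

result = self_improve("What is the capital of France?")
```

The risks the post names map directly onto this structure: a biased or wrong `critique` would steer every revision, which is why the paper's human-in-the-loop question matters.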