Haider Khan
@haiderkhan6410.bsky.social
Read “LLM Training Pipeline: From Foundation to Chatbot” by Haider Khan on Medium: medium.com/@haiderkhan6...
LLM Training Pipeline: From Foundation to Chatbot
Modern large language models (LLMs) are built in stages, each adding new capabilities through additional data and fine-tuning. Major models…
medium.com
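For context, a rough sketch of the staged pipeline the title refers to. The stage names and data descriptions below are the generic ones (pretraining, supervised fine-tuning, preference tuning), assumed for illustration rather than taken from the article:

```python
# Generic LLM training pipeline stages (illustrative names and data only).
PIPELINE = [
    {"stage": "pretraining",            "data": "web-scale text",             "goal": "next-token prediction"},
    {"stage": "supervised fine-tuning", "data": "instruction/response pairs", "goal": "follow instructions"},
    {"stage": "preference tuning",      "data": "human preference labels",    "goal": "align with user intent (RLHF/DPO)"},
]

for step in PIPELINE:
    print(f"{step['stage']:>22}: {step['data']} -> {step['goal']}")
```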
September 28, 2025 at 1:58 PM
Reposted by Haider Khan
We're excited to announce React Conf's first AI-related talk: a discussion panel with @vjeux.bsky.social @kentcdodds.com @swyx.io @t3.gg moderated by @leerob.com
September 25, 2025 at 6:23 PM
Reposted by Haider Khan
I just dropped my COMPLETE GitHub Actions course on YouTube! 🚀

3.5+ hrs long, starting with the basics and progressing to a robust system for building, testing, and deploying apps! 💻

The best part? It's FREE! Link below 👇

Thank you to @namespacelabs.com for sponsoring! 🙏

youtu.be/Xwpi0ITkL3U
Complete GitHub Actions Course - From BEGINNER to PRO
YouTube video by DevOps Directive
youtu.be
September 24, 2025 at 2:39 PM
Yes, this is literally my wallpaper on NixOS.
Who needs mountains or anime when you can have LLM architectures 😅

#AI #MachineLearning #NixOS #Linux #LLM
September 21, 2025 at 8:04 AM
Read “Why Every Deep Learning Scientist Should Watch Pixar’s Inside Out” by Haider Khan on Medium: medium.com/@haiderkhan6...
Why Every Deep Learning Scientist Should Watch Pixar’s Inside Out
Pixar’s Inside Out isn’t just a heartwarming animated film — it’s a brilliant metaphor for memory, decision-making, and modular systems…
medium.com
September 6, 2025 at 12:28 PM
SambaY = Transformer decoder + next-gen LSTM DNA 🧬

🔹 SSM = hidden state memory (like an LSTM cell, but scalable)
🔹 GMU = lightweight gating (echoes LSTM gates)

⚡ Linear prefill
⚡ 10× faster long-gen
❌ No RoPE needed

Smells like LSTM, performs like a Transformer.

#AI #LLM #DeepLearning
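For intuition, a minimal PyTorch sketch of the gating idea: an element-wise gate computed from the current hidden state modulates a memory state shared from an SSM layer. The class name, shapes, and exact gate form are illustrative assumptions, not SambaY's published formulation.

```python
import torch
import torch.nn as nn

class GatedMemoryUnit(nn.Module):
    """Illustrative element-wise gating of a shared memory state (not the paper's exact GMU)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_model)  # gate computed from the current hidden state
        self.out_proj = nn.Linear(d_model, d_model)   # mix the gated memory back in

    def forward(self, hidden: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # hidden, memory: (batch, seq_len, d_model); memory comes from an earlier SSM layer
        gate = torch.sigmoid(self.gate_proj(hidden))  # echoes an LSTM-style gate
        return self.out_proj(gate * memory)           # element-wise gating, no attention

x = torch.randn(2, 16, 64)   # current layer's hidden states
m = torch.randn(2, 16, 64)   # memory shared from the SSM layer
print(GatedMemoryUnit(64)(x, m).shape)  # torch.Size([2, 16, 64])
```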
September 4, 2025 at 5:37 AM
AI isn’t magic or superhuman.
Even the best models—LLMs (text), ViTs (vision), SLMs (speech)—aren’t “thinking machines.”
They’re statistical pattern learners.
Like an interactive, intelligent book: full of knowledge, no awareness.

#AI #DeepLearning #LLM #ViT #SLM
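A toy illustration of “statistical pattern learner”: predicting the next word purely from co-occurrence counts. The corpus and code are a made-up example, orders of magnitude simpler than a real LLM, but the principle (patterns, not understanding) is the same.

```python
from collections import Counter, defaultdict

# A toy "statistical pattern learner": predict the next word purely from observed counts.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# No understanding involved: "the" -> "cat" only because that pattern was most frequent.
print(bigrams["the"].most_common(1))  # [('cat', 2)]
```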
August 26, 2025 at 6:40 AM
Reposted by Haider Khan
We're happy to announce Cachix (www.cachix.org) as a gold sponsor of #NixCon 2025. Thank you very much for your support!
Software Innovation Lab
Scale your engineering power. We enable deep-tech startups to achieve their vision, from research to product delivery.
www.tweag.io
August 21, 2025 at 1:58 PM
Predict → Compare → Blame → Adjust → Repeat

#AI #MachineLearning #DeepLearning #Future
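That five-beat loop maps directly onto a standard gradient-descent training step; here is a minimal PyTorch sketch where the model, data, and optimizer are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                        # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
x, y = torch.randn(32, 4), torch.randn(32, 1)  # placeholder data

for _ in range(100):                           # Repeat
    pred = model(x)                            # Predict
    loss = loss_fn(pred, y)                    # Compare
    optimizer.zero_grad()
    loss.backward()                            # Blame (backpropagate gradients)
    optimizer.step()                           # Adjust (update weights)
```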
August 14, 2025 at 9:26 PM
Imagine GPT-5 with 10B H200s and 1B tokens of context —
roughly 1,300 copies of War & Peace in its “mind” at once.
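Back-of-envelope for that figure, assuming War & Peace runs about 587k words and English text averages roughly 1.3 tokens per word (both rough estimates):

```python
# Rough estimates, not exact counts.
words_per_copy = 587_000          # approximate word count of War & Peace
tokens_per_word = 1.3             # typical English tokenization ratio (assumption)
context_tokens = 1_000_000_000    # 1B-token context window

tokens_per_copy = words_per_copy * tokens_per_word   # ~763k tokens per copy
print(round(context_tokens / tokens_per_copy))       # ~1310 copies
```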
Yet it would still fail at:

True understanding

Common sense

Real-world verification

Long-term goals

More memory ≠ more humanity.

#AI #GPT5 #AGI
August 11, 2025 at 2:37 PM
Reposted by Haider Khan
⚡ Speaker highlight: Tanner Linsley, Creator of TanStack

Whether it's Routing, State Management, or a Vite-based Framework: TanStack has got you covered.

And @tannerlinsley.com, the mind behind TanStack Query, Router & Start, is joining us at ViteConf.

Get ready to meet the TanStack creator!
August 7, 2025 at 3:11 PM
Even the brightest minds once sat down, confused, trying to understand the basics. So if the road gets tough—remember, you're just walking the same path they did.
August 7, 2025 at 3:44 PM
Traditional programming is like breaking Enigma by hand — slow, rule-based, manual.

Deep learning is like Turing’s machine — it learns the rules and automates the process.

We’ve moved from writing logic to training it.

#AI #DeepLearning #Programming #Tech #AGI
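A tiny contrast of the two styles, using a made-up spam-filter toy: the first function encodes the rule by hand, the second learns word weights from labelled examples. Everything here (data, update rule, names) is illustrative.

```python
# Hand-written logic: a human encodes the rule.
def is_spam_rule(text: str) -> bool:
    return "free money" in text.lower()

# Trained logic: the "rule" (a threshold on a learned score) comes from data.
examples = [("claim your free money now", 1), ("meeting moved to 3pm", 0),
            ("free money inside!!!", 1), ("lunch tomorrow?", 0)]

def score(text: str, weights: dict) -> float:
    return sum(weights.get(w, 0.0) for w in text.lower().split())

weights = {}
for _ in range(20):                       # crude perceptron-style training loop
    for text, label in examples:
        pred = 1 if score(text, weights) > 0 else 0
        for w in text.lower().split():    # nudge word weights toward the label
            weights[w] = weights.get(w, 0.0) + 0.1 * (label - pred)

print(is_spam_rule("Free money inside!!!"), score("free money inside!!!", weights) > 0)
```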
August 6, 2025 at 5:41 PM