Discover step-by-step guides and hands-on tutorials. 🚀
📩https://medium.com/@schuerch_sarah
👀https://towardsdatascience.com/author/schuerch_sarah
They both learn by doing — not by being told.
That’s the core idea behind Reinforcement Learning.
Glad my Q-learning piece made it into this week’s Top 5 by @towardsdatascience.com. Thanks!
www.linkedin.com/pulse/whats-...
When I started writing on Medium, I just wanted to share a few thoughts about AI & Data Science.
Thanks for reading & following along.
Here’s the article that brought number 1,500 👇
code.likeagirl.io/how-to-study...
∇J(θ) ≐ ∇Eπ [G ∣ s, a]
That moment of formula anxiety isn’t about being bad at math.
It’s about missing a method.
Once you treat symbols like a new language, the fear fades.
☕ Read the full piece: bit.ly/4qp0aBC
That’s how my 5 strategies for studying math-heavy topics began. Try them out and let me know:
☕ With Medium-Account: bit.ly/47ikxrp
☕ Friend-Link: bit.ly/4qp0aBC
It made me pause.
I’m not sure what I think yet: whether AI tools make us learn less, or just make us learn differently.
☕ Take many independent random variables & average them.
→ You’ll get something close to a normal distribution.
→ Whatever the originals looked like.
☕ Essential for anyone working with data.
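The claim above can be checked with a quick standard-library simulation. A minimal sketch (the exponential distribution and the sample sizes are arbitrary choices for illustration, not from the post):

```python
import random
import statistics

random.seed(0)

def sample_mean(n: int) -> float:
    """Average n independent draws from a skewed (exponential) distribution."""
    return sum(random.expovariate(1.0) for _ in range(n)) / n

# Take many such averages. By the Central Limit Theorem their distribution
# approaches a normal with mean 1 (the exponential's mean) and spread
# 1/sqrt(n), even though the underlying exponential is heavily skewed.
means = [sample_mean(50) for _ in range(2000)]

center = statistics.mean(means)   # close to 1.0
spread = statistics.stdev(means)  # close to 1/sqrt(50) ≈ 0.14
```

Plotting a histogram of `means` shows the familiar bell curve emerging from a distribution that looks nothing like one.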
☕ Check out the step-by-step guide here: medium.com/towards-arti...
☕ GitHub repo: github.com/Sari95/CSV-P...
I gave it a try with LangChain. And yes, it can generate descriptive stats, for example. Not groundbreaking, but a fun start into agent workflows.
Would you trust an agent with your data analysis?
🤓 On @towardsdatascience.com : towardsdatascience.com/langchain-fo...
Start with exploration vs. exploitation and the Multi-Armed Bandit problem.
Simple, powerful and the perfect intro to Reinforcement Learning.
→ towardsdatascience.com/simple-guide...
☕ No formatting chaos, and adding citations and a bibliography was much easier. I could concentrate far more on the writing itself.
☕ medium.com/code-like-a-...
I was surprised how smoothly it already works.
Two new modes show why 2025 is the year of AI agents:
☕ Agent Mode: Agents that act.
☕ Study & Learn Mode: Tutors that think with you.
Tried them yet? 👉 medium.com/p/77e5477efe59
“Why normalize a database? Isn’t one big table easier?”
A classic first question in relational DBs.
☕ One big table feels simple: Until you run into redundancy, anomalies & messy updates.
☕ Normalization means: Store each fact once, in the right place. Clean & reliable.
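The idea fits in a few lines of Python with sqlite3. The customers/orders schema is a made-up example for this sketch, not from the post:

```python
import sqlite3

# Hypothetical schema: each fact (like a customer's city) is stored exactly
# once, and orders reference it by key instead of repeating it.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        city TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        item        TEXT NOT NULL
    );
    INSERT INTO customers VALUES (1, 'Ada', 'Zurich');
    INSERT INTO orders VALUES (1, 1, 'coffee'), (2, 1, 'book');
""")

# Update the city once; every order sees the change. No update anomalies.
cur.execute("UPDATE customers SET city = 'Bern' WHERE id = 1")
rows = cur.execute("""
    SELECT o.item, c.city
    FROM orders o JOIN customers c ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()
```

In the one-big-table version, the same update would have to touch every order row, and missing one leaves the data inconsistent.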
By @sarah-lea.bsky.social
☕ State: What situation is the agent in?
☕ Actions: What are possible moves from here?
☕ Reward: What does the agent receive after an action?
☕ Value function: How good is a state?
That’s how RL agents learn by trial & error.
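Those four pieces fit together in a tiny Q-learning sketch. The environment here (a 1-D corridor with a reward at the far end) is an invented toy example, built only on the standard library:

```python
import random

random.seed(42)

# Hypothetical toy environment: states 0..4 form a corridor.
# The agent starts at 0; only reaching state 4 pays a reward of +1.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right

def step(state, action):
    """Environment: returns (next_state, reward) after an action."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

# Value table: "how good is taking action a in state s?"
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(200):  # episodes of trial & error
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: nudge the value toward reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the agent values moving right from the start more than left.
```

No one tells the agent the corridor's layout; the preference for "right" emerges purely from rewards observed during trial & error.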
Reinforcement Learning (RL) isn’t about knowing the answer. It’s about learning through interaction.
That’s how AlphaGo beat a world champion:
It first learned from expert games. Then it played itself, over & over again.
Multi-Armed Bandits use 3 strategies:
☕ Greedy: Stick with what works.
☕ ε-Greedy: Try new things sometimes.
☕ Optimistic: Assume it’s all good — at first.
Which one sounds most like you?
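Here is a minimal ε-Greedy sketch. The three arms and their win probabilities are invented for illustration:

```python
import random

random.seed(1)

# Invented 3-armed bandit: true win probabilities, unknown to the agent.
TRUE_P = [0.3, 0.5, 0.8]

def pull(arm):
    """One play: reward 1 with the arm's win probability, else 0."""
    return 1.0 if random.random() < TRUE_P[arm] else 0.0

def epsilon_greedy(epsilon, steps=5000):
    """Return per-arm reward estimates after playing an ε-Greedy policy."""
    estimates = [0.0] * len(TRUE_P)  # running average reward per arm
    counts = [0] * len(TRUE_P)
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(TRUE_P))                        # explore
        else:
            arm = max(range(len(TRUE_P)), key=lambda i: estimates[i])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]      # incremental mean
    return estimates

estimates = epsilon_greedy(0.1)
best_arm = max(range(len(TRUE_P)), key=lambda i: estimates[i])
```

Setting `epsilon=0` gives pure Greedy, which can get stuck on the first arm that ever paid out; the occasional exploration is what lets the agent discover the best arm.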
With RAG, they can search your docs in the background & answer using what they find. Simple, but effective.
How?
☕ Chunking splits the doc into smaller parts.
☕ Embeddings turn them into vectors.
☕ Retriever finds matches. LLM writes the answer.
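The chunk → embed → retrieve steps can be sketched without any real embedding model: below, word-count vectors stand in for embeddings and cosine similarity plays the retriever (the document, query, and helper names are made-up illustrations):

```python
import math
from collections import Counter

# Made-up mini "document" standing in for your docs.
DOC = ("Reinforcement learning trains agents by trial and error. "
       "Multi-armed bandits study exploration versus exploitation. "
       "Normalization stores each fact once in a relational database.")

# 1. Chunking: split the document into smaller parts (here: sentences).
chunks = [c.strip().rstrip(".") + "." for c in DOC.split(". ") if c.strip()]

# 2. "Embeddings": turn each chunk into a vector. Word counts stand in
#    for a real embedding model in this sketch.
def embed(text):
    return Counter(text.lower().rstrip(".").split())

def cosine(a, b):
    """Similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

vectors = [embed(c) for c in chunks]

# 3. Retriever: find the chunk closest to the query. In a full RAG setup,
#    an LLM would then write the answer from this retrieved context.
query = embed("exploration and exploitation")
best_chunk = max(zip(chunks, vectors), key=lambda cv: cosine(query, cv[1]))[0]
```

A real pipeline swaps the word counts for learned embeddings and the list scan for a vector index, but the shape of the computation stays the same.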
Then stop reading theory & build your own chatbot with:
☕ LangChain
☕ FAISS (vector DB)
☕ Mistral via Ollama
☕ Python & Streamlit
Follow this step-by-step guide:
👉 medium.com/data-science...
Comment WANT if you need the friends link to the Medium Article.
But what if there's a better one you never tried?
Multi-Armed Bandits explore this dilemma with strategies like Greedy, ε-Greedy & Optimistic Initial Values.
☕ → towardsdatascience.com/simple-guide...
The Multi-Armed Bandit problem is a first step into this world.
It's not just about slot machines. It's about how AI (and humans) learn to choose.
towardsdatascience.com/simple-guide...
That’s the exploration vs. exploitation dilemma.
Multi-armed bandits model it.
Kahneman called it one of our core patterns of decision-making.
🎰 Read the full article @towardsdatascience.com: towardsdatascience.com/simple-guide...
It’s about asking the questions no one else dares to ask —
even if that requires a lot of patience.
That’s what I learned from an interview with a Harvard postdoc in Biomedicine & AI.
☕ medium.com/ai-advances/...