Aslan
@aslandizaji.bsky.social
Artificial Intelligence, Machine Learning, Neuroscience, Complex Systems, Economics.
PhD Student at the University of Tehran.
Cofounder: @AutocurriculaLab, @NeuroAILab, @LangTechAI.
https://sites.google.com/a/umich.edu/aslansdizaji/
Finally, this app was inspired by two courses from @DeepLearningAI and @LangChainAI Academy. I would like to thank them!
October 4, 2025 at 2:09 PM
Check it out here 👇
GitHub (code): github.com/aslansd/deep...
Streamlit app (try it out): d3yjnms2ch6yxcthmrvqnn.streamlit.app
Would love feedback from researchers, builders, and anyone interested in multi-agent systems 🚀
GitHub - aslansd/deepagent_researcher_analyse_visualise: Build a deep agent researcher which analyses and visualises the information
October 4, 2025 at 2:09 PM
The results?
📊 Final answers with explanations
📈 Automatically generated charts
📝 Full trace of every step the system took
It’s like having a research assistant that works systematically and shows its work!
October 4, 2025 at 2:09 PM
My app combines all of this into a multi-agent framework:
A Planner breaks down your research question
Specialized agents (web researcher, chart generator, summarizer) handle subtasks
Everything is orchestrated into a transparent workflow you can trace
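To make the orchestration concrete, here is a minimal sketch of a planner-to-sub-agents workflow using LangGraph's StateGraph. It is only an illustration of the pattern, not the app's actual code; the node bodies are placeholder stand-ins for real LLM and tool calls.

# Minimal sketch: planner -> web researcher -> summariser, wired as a LangGraph workflow.
# Node bodies are placeholders for real LLM/tool calls.
from typing import List, TypedDict
from langgraph.graph import StateGraph, START, END

class ResearchState(TypedDict):
    question: str
    plan: List[str]
    findings: List[str]
    answer: str

def planner(state: ResearchState) -> dict:
    # Break the research question into subtasks (a fixed toy plan here).
    return {"plan": [f"search: {state['question']}", "make chart", "summarise"]}

def web_researcher(state: ResearchState) -> dict:
    # Would call a search tool per subtask; here it just records notes.
    return {"findings": [f"notes for '{step}'" for step in state["plan"]]}

def summariser(state: ResearchState) -> dict:
    return {"answer": "Summary based on: " + "; ".join(state["findings"])}

builder = StateGraph(ResearchState)
builder.add_node("planner", planner)
builder.add_node("web_researcher", web_researcher)
builder.add_node("summariser", summariser)
builder.add_edge(START, "planner")
builder.add_edge("planner", "web_researcher")
builder.add_edge("web_researcher", "summariser")
builder.add_edge("summariser", END)
workflow = builder.compile()

print(workflow.invoke({"question": "How has solar capacity grown since 2010?"}))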
October 4, 2025 at 2:09 PM
Traditional LLM agents can be “shallow” — they just loop through tools and struggle with long, complex tasks.
Deep Agents (a new feature in LangGraph) bring:
✅ Task planning (TODOs)
✅ Sub-agent delegation
✅ Context offloading to files
✅ Robust reasoning prompts
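To make the planning and context-offloading ideas concrete, here is a rough sketch of those two capabilities expressed as ordinary tools handed to a LangGraph ReAct agent. This is not the Deep Agents API itself, just the underlying pattern; the OpenAI model name is an assumption.

# Sketch of two deep-agent capabilities -- TODO planning and file offloading --
# expressed as plain tools for a LangGraph ReAct agent (not the Deep Agents API itself).
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

TODOS: list[str] = []        # in-memory task list
FILES: dict[str, str] = {}   # in-memory "file system" for offloading long context

@tool
def write_todos(todos: list[str]) -> str:
    """Replace the current TODO list with a new plan."""
    TODOS[:] = todos
    return f"Recorded {len(todos)} TODOs."

@tool
def write_file(name: str, content: str) -> str:
    """Offload a long intermediate result to a named file."""
    FILES[name] = content
    return f"Saved {len(content)} characters to {name}."

@tool
def read_file(name: str) -> str:
    """Read a previously offloaded file back into context."""
    return FILES.get(name, "<file not found>")

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"),  # assumed model
                           [write_todos, write_file, read_file])
result = agent.invoke({"messages": [
    {"role": "user", "content": "Plan and research solar capacity growth since 2010."}]})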
October 4, 2025 at 2:09 PM
It’s a Streamlit app powered by LangGraph Deep Agents that can take your research question, plan a workflow, fetch data, generate charts, and explain results step by step.
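As a rough idea of what the Streamlit side involves, a minimal wrapper might look like the sketch below; run_deep_agent is a hypothetical stand-in for invoking the compiled LangGraph deep agent, not the app's real entry point.

# Minimal Streamlit front end for a research agent (sketch only).
# run_deep_agent is a hypothetical stand-in for the compiled LangGraph deep agent.
import streamlit as st

def run_deep_agent(question: str) -> dict:
    # Placeholder: the real app would invoke the deep agent graph here.
    return {"answer": f"(demo) answer for: {question}",
            "steps": ["planned workflow", "fetched data", "generated chart"]}

st.title("Deep Agent Researcher")
question = st.text_input("Research question")

if st.button("Run") and question:
    with st.spinner("Agent working..."):
        result = run_deep_agent(question)
    st.subheader("Answer")
    st.write(result["answer"])
    st.subheader("Trace")
    for step in result["steps"]:
        st.write("-", step)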
October 4, 2025 at 2:09 PM
Big thanks to @DeepLearningAI & @LangChainAI Academy for the resources that made this possible.
September 15, 2025 at 1:59 PM
Multi-Modal RAG App (Streamlit + Ollama)
Built a lightweight Retrieval-Augmented Generation (RAG) system that processes both text + image docs. Users can load files, build a vector store, and run retrieval-grounded QA.
github.com/aslansd/mult...
GitHub - aslansd/multi-modal-rag-web: A Multi-Modal RAG Application Built with Streamlit Using a Lightweight Ollama Model
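For the text half of such a pipeline, a minimal retrieval-grounded QA flow with Ollama and Chroma could look roughly like this; the model names are assumptions and the image path is omitted.

# Sketch of the text half of a local RAG pipeline: embed with Ollama, store in Chroma,
# then answer a question grounded in the retrieved chunks. Model names are assumptions.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_chroma import Chroma
from langchain_ollama import OllamaEmbeddings, ChatOllama

texts = ["Solar capacity grew rapidly after 2010.", "Wind capacity also expanded."]
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).create_documents(texts)

vectorstore = Chroma.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})
llm = ChatOllama(model="llama3.2")

question = "What happened to solar capacity after 2010?"
context = "\n".join(doc.page_content for doc in retriever.invoke(question))
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)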
September 15, 2025 at 1:59 PM
React Native + Expo Go App for Open Deep Research
Brought the same framework to mobile, enabling research automation on the go.
github.com/aslansd/open... expo.dev/accounts/asa...
GitHub - aslansd/open_deep_reasearch_mobile: A React Native plus Expo Go App for open_deep_research Framework of LangChain/LangGraph
September 15, 2025 at 1:59 PM
Streamlit App for Open Deep Research
Adapted LangChain’s Open Deep Research (LangGraph) into a Streamlit app to support advanced research workflows: ingestion, retrieval, multi-modal analysis, and context-aware Q&A.
github.com/aslansd/open...
rneaovknvzddyykhkeu2et.streamlit.app
GitHub - aslansd/open_deep_research_web: A Streamlit App for open_deep_research Framework of LangChain/LangGraph
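One way to adapt a LangGraph workflow like this for Streamlit is to stream each node's update into the page as it finishes; below is a toy version with a stand-in two-node graph in place of the real Open Deep Research graph.

# Toy illustration of streaming LangGraph node updates into Streamlit.
# The two-node graph is a stand-in for the real Open Deep Research graph.
from typing import TypedDict
import streamlit as st
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    notes: str
    answer: str

def retrieve(state: State) -> dict:
    return {"notes": f"retrieved material about '{state['question']}'"}

def respond(state: State) -> dict:
    return {"answer": f"Answer based on: {state['notes']}"}

builder = StateGraph(State)
builder.add_node("retrieve", retrieve)
builder.add_node("respond", respond)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "respond")
builder.add_edge("respond", END)
graph = builder.compile()

st.title("Open Deep Research (demo wrapper)")
question = st.text_input("Question")
if question:
    # stream_mode="updates" yields one dict per node as it completes.
    for update in graph.stream({"question": question}, stream_mode="updates"):
        for node, output in update.items():
            st.write(node, output)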
September 15, 2025 at 1:59 PM
Feel free to use any of the above apps; I would be happy to receive any feedback.
March 19, 2025 at 6:09 PM
These three projects would not have been possible without the online courses offered by DeepLearningAI and LangChain Academy; I would like to thank them. I would also like to thank CrewAI, Gradio, Ollama, and TogetherAI.
March 19, 2025 at 6:09 PM
cycles. It will provide the user with a final markdown summary including all the sources used.
March 19, 2025 at 6:09 PM
(via seven different search APIs: DuckDuckGo, Tavily, Perplexity, Linkup, Exa, ArXiv, and PubMed), summarise the results of web search, reflect on the summary to examine knowledge gaps, generate a new search query to address the gaps, search, and improve the summary for a user-defined number of
March 19, 2025 at 6:09 PM
In the third project, I extended one of the LangChain apps called Ollama Deep Researcher, a local web research assistant built upon the multi-agent framework of LangGraph that uses any LLM hosted by Ollama. It accepts a topic and will generate a web search query, gather web search results
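The iterate-and-reflect loop described here can be sketched roughly as follows; ChatOllama and DuckDuckGo stand in for the configurable local model and the seven search APIs, and the prompts are simplified.

# Rough sketch of the research loop: query -> search -> summarise -> reflect -> repeat.
# ChatOllama and DuckDuckGo stand in for the configurable model and search backends.
from langchain_ollama import ChatOllama
from langchain_community.tools import DuckDuckGoSearchRun

llm = ChatOllama(model="llama3.2")   # any Ollama-hosted model
search = DuckDuckGoSearchRun()
topic, summary, cycles = "growth of solar capacity", "", 3

query = llm.invoke(f"Write one web search query about: {topic}").content
for _ in range(cycles):
    results = search.invoke(query)
    summary = llm.invoke(
        f"Update this summary of '{topic}' with the new results.\n"
        f"Current summary: {summary}\nNew results: {results}"
    ).content
    # Reflect on knowledge gaps and generate the next query.
    query = llm.invoke(
        f"What is still missing from this summary? Reply with one follow-up search query.\n{summary}"
    ).content

print(f"# {topic}\n\n{summary}")   # final markdown-style report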
March 19, 2025 at 6:09 PM
In the second project, I built a Dungeon game simulating a fantasy world composed of kingdoms, towns, characters, and inventories, powered by one of the LLMs provided by TogetherAI, with Gradio as the user interface. Again, all the code and results are provided in one notebook.
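A bare-bones version of that setup, with a TogetherAI-hosted LLM behind a Gradio chat UI acting as the game narrator, could look like this; the model id and system prompt are illustrative assumptions.

# Bare-bones fantasy-game narrator: a TogetherAI-hosted LLM behind a Gradio chat UI.
# The model id and system prompt are illustrative, not the project's actual choices.
import gradio as gr
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment
SYSTEM = ("You are the narrator of a fantasy world of kingdoms, towns, characters, "
          "and inventories. Describe what happens after each player action.")

def narrate(message, history):
    messages = ([{"role": "system", "content": SYSTEM}] + history
                + [{"role": "user", "content": message}])
    reply = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # assumed model id
        messages=messages,
    )
    return reply.choices[0].message.content

gr.ChatInterface(narrate, type="messages", title="Dungeon Game").launch()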
March 19, 2025 at 6:09 PM
given by the user for their startup, considering their expertise. For this purpose, I used the multi-agent framework of CrewAI, combining it with LangChain, Gradio, and one of the open-source LLMs of Ollama. All the code and results are provided in a notebook.
March 19, 2025 at 6:09 PM
In the first project, I simulated an environment similar to a startup with three cofounders: the first cofounder is more technical, the second is more product oriented, and the third is more business oriented. These three cofounders brainstorm about various topics
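A stripped-down version of that crew, with three cofounder agents and a single brainstorming task, might look like the following sketch; the Ollama model string and the prompts are assumptions.

# Stripped-down CrewAI setup: three cofounder agents brainstorm a user-given topic.
# The Ollama model string and prompts are illustrative assumptions.
from crewai import Agent, Task, Crew

llm = "ollama/llama3.2"  # a local model served by Ollama, addressed via LiteLLM naming

def cofounder(role: str, goal: str) -> Agent:
    return Agent(role=role, goal=goal,
                 backstory=f"A startup cofounder focused on {goal.lower()}.", llm=llm)

technical = cofounder("Technical Cofounder", "Assess technical feasibility")
product = cofounder("Product Cofounder", "Shape the product and user experience")
business = cofounder("Business Cofounder", "Evaluate the market and business model")

brainstorm = Task(
    description="Brainstorm how our startup should approach: {topic}",
    expected_output="A short list of ideas with pros and cons from all three perspectives.",
    agent=technical,
)

crew = Crew(agents=[technical, product, business], tasks=[brainstorm])
print(crew.kickoff(inputs={"topic": "an AI research assistant for scientists"}))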
March 19, 2025 at 6:09 PM
Extra Day 26
December 8, 2024 at 7:41 AM
Extra Day 25
December 8, 2024 at 7:41 AM
Extra Day 24
December 8, 2024 at 7:41 AM
Extra Day 23
December 8, 2024 at 7:41 AM