Based in Bologna 🇮🇹🇪🇺
MSc in Economics 📊
Passionate about books, mountains, politics and policies, coding, artificial intelligence and quality conversations. 📖⛰️👾
Create an app.py with Streamlit (or whatever you want to name it) at the root (same level as your src folder).
Then import whatever is needed from src to mimic a chat (I suppose similar to what you already do in src/main.py).
I'll post an example here just to give you an idea: it's just Python.
In my experience, writing Streamlit can become frustrating for complex projects, but chatbots are really easy to build nowadays, even with callbacks and memory -> docs.streamlit.io/develop/tuto... you'll find some examples there :)
Hope I helped, this really is an interesting project!
It's minimal effort but might be worth it to showcase the project to a broader audience, imho. In both cases the platform would take care of the hosting for you.
- nr. 1 needs domain knowledge and time;
- nr. 2 is computationally expensive and risks entropy, but is grounded in the docs;
- nr. 3 risks entropy even more, and is less grounded.
Current options in my mind are:
1. predefine the Ontology
2. infer it from the docs
3. leave the Agent in charge of extracting entities and relationships, free to do whatever it wants
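For option 1, a predefined ontology can be as small as a whitelist of entity and relationship types that the agent's extractions get validated against; here's a sketch (all type names are made up for illustration):

```python
# Hypothetical predefined ontology: allowed entity labels and allowed
# (source_label, relation, target_label) triples.
ENTITY_TYPES = {"Person", "Organization", "Document", "Topic"}
RELATION_TYPES = {
    ("Person", "WORKS_FOR", "Organization"),
    ("Person", "AUTHORED", "Document"),
    ("Document", "MENTIONS", "Topic"),
}


def validate_triples(triples):
    """Keep only extracted triples that fit the ontology.

    Each triple is (source_type, relation, target_type); anything the
    agent improvises outside the schema is dropped instead of polluting
    the graph -- this is what keeps the entropy down.
    """
    kept, dropped = [], []
    for t in triples:
        (kept if t in RELATION_TYPES else dropped).append(t)
    return kept, dropped
```

Options 2 and 3 would skip (or auto-generate) the two sets above, trading schema control for flexibility.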
On my M1 MacBook Air I find that Llama 3.2 (1B) works fine for simple/medium tasks at a reasonable speed.
However, for some harder tasks I have yet to find a good alternative that can be hosted locally.
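If you serve the model locally through Ollama (an assumption on my part; it exposes an HTTP API on localhost:11434 by default), calling Llama 3.2 from Python needs nothing beyond the standard library:

```python
import json
import urllib.request


def build_payload(prompt: str, model: str = "llama3.2:1b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send the prompt to a locally running Ollama server and return the text.

    Only works if `ollama serve` is running and the model has been pulled
    (e.g. `ollama pull llama3.2:1b`).
    """
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping in a bigger model for the harder tasks is then just a change of the `model` tag.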
Read more about our translation efforts: ourworldindata.org/instagram-in...