- unhelpful error messages that don’t really explain what’s going on
- lack of access to crucial API fields (as far as I can tell you can’t set `logit_bias` through the ChatOpenAI model? see the sketch after this list)
- a bloated architecture that only seems to get more complicated as genAI APIs evolve so quickly
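For comparison, here’s roughly what direct access looks like when you skip the wrapper and call the OpenAI SDK yourself; `logit_bias` is just a plain request parameter. This is a minimal sketch, and the token IDs below are placeholders, not real tokenizer output.

```python
# Minimal sketch: biasing specific tokens via the raw OpenAI Chat Completions API.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Answer yes or no: is water wet?"}],
    # Bias token log-probabilities up (+) or down (-) on a -100..100 scale.
    # The IDs here are placeholders for illustration, not real token IDs.
    logit_bias={1904: 10, 912: -10},
    max_tokens=5,
)
print(response.choices[0].message.content)
```

The point isn’t that this particular call is hard, it’s that every field the provider adds has to be plumbed through the wrapper before you can use it.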
1. Causal inference: LLMs can't reason about cause and effect. They can't build models of the world and identify from first principles which interventions would bring about change. They can't mine existing datasets to make effective recommendations about what data to collect next. (A toy sketch of the observation-vs-intervention distinction this requires follows below.)
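To make concrete what kind of reasoning I mean, here's a toy sketch of my own (plain numpy, nothing from any particular paper): in a simple structural causal model with a confounder, the slope you see in observational data is not the effect you get when you actually intervene.

```python
# Toy structural causal model with a confounder Z: Z -> X, Z -> Y, X -> Y.
# The true causal effect of X on Y is 1.0, but the observational slope is larger.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Generate observational data from the SCM.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 1.0 * x + 3.0 * z + rng.normal(size=n)

# Naive estimate: regress Y on X alone, ignoring the confounder.
naive_slope = np.polyfit(x, y, 1)[0]

# Simulated intervention do(X = x): cut the Z -> X edge and set X independently.
x_do = rng.normal(size=n)
y_do = 1.0 * x_do + 3.0 * z + rng.normal(size=n)
interventional_slope = np.polyfit(x_do, y_do, 1)[0]

print(f"observational slope  ~ {naive_slope:.2f}")          # ~2.2, confounded
print(f"interventional slope ~ {interventional_slope:.2f}")  # ~1.0, the true effect
```

Telling these two quantities apart requires a model of the data-generating process, which is exactly the step an LLM pattern-matching over text doesn't perform.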