🔹 ToMA prioritizes intentions over emotions (other mental-state dimensions remain similar).
🔹 It uses 5.6% more 1st-order beliefs than the base models, even when both are prompted equally for 0th/1st-order states.
ToMA outperforms the base model under all settings. Its reasoning is more strategic (e.g., compromise, accommodation), and even in failure cases it engages more actively (e.g., attempted but unsuccessful persuasion).
In our new paper, we introduce ToMA, a dialogue lookahead training framework that enables LLMs to generate mental states that are maximally useful for achieving dialogue goals. 🧵👇
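To make the lookahead idea concrete, here is a minimal, hypothetical sketch (not the paper's actual training objective): propose candidate mental states, roll the dialogue forward under each, and keep the state whose rollout best serves the goal; that state can then supervise the mental-state generator. All function names (`generate_mental_states`, `simulate_dialogue`, `goal_score`) and the random scoring are illustrative assumptions.

```python
# Hypothetical sketch of dialogue lookahead for selecting useful mental
# states. Stubs stand in for LLM calls; only the control flow is the point.
import random


def generate_mental_states(history: list[str], k: int = 4) -> list[str]:
    # Stub: an LLM would propose k candidate mental states
    # (e.g., 0th/1st-order beliefs, intentions) given the history.
    return [f"candidate mental state {i}" for i in range(k)]


def simulate_dialogue(history: list[str], mental_state: str, depth: int = 2) -> list[str]:
    # Stub: roll the dialogue forward `depth` turns, conditioning the
    # speaker's utterances on the hypothesized mental state.
    return history + [f"utterance conditioned on: {mental_state}"] * depth


def goal_score(dialogue: list[str], goal: str) -> float:
    # Stub: score how well the completed dialogue achieves the goal
    # (e.g., negotiation success); random here purely for illustration.
    return random.random()


def lookahead_select(history: list[str], goal: str) -> str:
    """Return the candidate mental state whose rollout scores highest."""
    candidates = generate_mental_states(history)
    return max(candidates,
               key=lambda s: goal_score(simulate_dialogue(history, s), goal))


if __name__ == "__main__":
    best = lookahead_select(["A: Can we split the books 60/40?"],
                            goal="reach an agreement")
    print("selected mental state for training:", best)
```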
We have a great lineup of speakers for the invited talks and panel discussions.
More details here: nlp.cs.ubc.ca/future-of-nl...