A new paper argues that text embedding models converge on a shared "universal geometry" of meaning. Researchers show they can translate embeddings from one model's space into another's without paired data and without ever seeing the original text.
Implications for the philosophy of meaning and for vector-database security alike. arxiv.org/pdf/2505.12540
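To get intuition for what "translating between embedding spaces" means, here is a minimal sketch using orthogonal Procrustes alignment on synthetic vectors. This is a deliberately simplified, supervised baseline that assumes paired examples; the paper's vec2vec approach is stronger in that it learns the translation without any paired data. Names, dimensions, and data here are all made up for illustration.

```python
# Illustrative sketch only: align two embedding spaces with orthogonal
# Procrustes using *paired* examples. The paper's method needs no pairs,
# but the goal is the same: map vectors from "model A" space into
# "model B" space so nearby meanings stay nearby.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings of the same 1,000 texts under two hypothetical
# models (dimension chosen arbitrarily for the demo).
n, d = 1000, 256
emb_b = rng.standard_normal((n, d))                      # "model B" space
rotation = np.linalg.qr(rng.standard_normal((d, d)))[0]  # hidden relation
emb_a = emb_b @ rotation.T + 0.01 * rng.standard_normal((n, d))  # "model A" space

# Orthogonal Procrustes: the rotation W minimizing ||emb_a @ W - emb_b||
# is U @ Vt, where U, S, Vt is the SVD of emb_a.T @ emb_b.
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
w = u @ vt

# Translate "model A" vectors into "model B" space and check that each
# lands near its true counterpart.
translated = emb_a @ w
cos = np.sum(translated * emb_b, axis=1) / (
    np.linalg.norm(translated, axis=1) * np.linalg.norm(emb_b, axis=1)
)
print(f"mean cosine similarity after alignment: {cos.mean():.3f}")
```

If the shared-geometry claim holds, a mapping like this (or the paper's unsupervised one) is what makes embeddings leaked from a vector database readable to someone running a completely different model.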