@lawrencegis.bsky.social
In that case, what we can do is pass real-time, accurate information to LLMs via tool-use invocation. I'll test that approach another day.
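That tool-use pattern can be sketched without a live API call: the model emits a tool call, application code resolves it against an authoritative source, and the coordinates go back to the model. The tool name, registry, and in-memory POI table below are all hypothetical placeholders; in practice the lookup would hit a geocoding service or a maintained POI database.

```python
import json

# Hypothetical authoritative POI store; a stand-in for a real
# geocoder or POI database.
POI_DB = {
    "Brooklyn Bridge": {"lat": 40.7061, "lon": -73.9969},
}

def get_poi_location(name: str) -> str:
    """Tool the LLM can invoke to fetch accurate coordinates."""
    poi = POI_DB.get(name)
    return json.dumps(poi if poi else {"error": "not found"})

# Application-side dispatch: the model returns a tool call
# (name + JSON-encoded arguments); we execute it and feed the
# result string back into the conversation.
TOOLS = {"get_poi_location": get_poi_location}

def dispatch(tool_call: dict) -> str:
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulated tool call, shaped like what a model might emit
result = dispatch({"name": "get_poi_location",
                   "arguments": '{"name": "Brooklyn Bridge"}'})
```

The key point: the coordinates come from the tool's data source, not from the model's memorized weights, so they can be kept current.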

I'm Lawrence, and I talk about #geospatial and #AI. Follow me to learn more.

(7/7)
June 10, 2025 at 9:51 AM
However, there are billions of Points of Interest on our maps. Labeling them all in training datasets would be incredibly laborious, not to mention that these POIs change over time and require continuous updates.

(6/7)
If you break down how LLMs work, you'll see why they excel at text generation: they're word-by-word generation engines, pre-trained on enormous datasets that are either machine-generated or manually verified before training.

(5/7)
1. When I asked for prominent landmarks like the Brooklyn Bridge, it accurately pinpointed the lat/long on the map.

2. However, when I queried less well-known locations, it gave locations that were off by hundreds of meters from their actual positions.
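One way to quantify "off by hundreds of meters" is the great-circle (haversine) distance between the model's guess and the true coordinates. A minimal sketch; the "guess" coordinates here are illustrative, not taken from the demo:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/long points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Brooklyn Bridge (true) vs. a hypothetical model guess ~0.005 deg off
true_lat, true_lon = 40.7061, -73.9969
guess_lat, guess_lon = 40.7100, -73.9920
error = haversine_m(true_lat, true_lon, guess_lat, guess_lon)  # ~600 m
```

An offset of just a few thousandths of a degree already puts a pin several hundred meters away, which is why small memorization errors matter for POIs.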

(4/7)
But LLMs' ability to answer such questions relies on a fundamental assumption: that they actually have all locations accurately memorized. Do they?

To test this, I built a simple demo with OpenAI's GPT-4.1.
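The core of such a demo is asking the model for a location and parsing coordinates out of its reply. The prompt wording and parsing below are my assumptions, not the actual demo code; asking for strict JSON makes the lat/long easy to extract:

```python
import json
import re

# Hypothetical prompt template for the demo
PROMPT = ('Return only JSON like {{"lat": <float>, "lon": <float>}} '
          'for the location: {place}')

def parse_latlon(reply: str):
    """Pull the first JSON object out of a model's reply text."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object in reply")
    obj = json.loads(match.group(0))
    return float(obj["lat"]), float(obj["lon"])

# Example of a reply a model might produce for "Brooklyn Bridge"
sample = 'Sure! {"lat": 40.7061, "lon": -73.9969}'
lat, lon = parse_latlon(sample)
```

The parsed pair can then be dropped on a map and compared against ground-truth coordinates.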

Here's what I found:

(3/7)
After ChatGPT went viral, we learned that LLMs are text-in, text-out models using transformer architecture.

Naturally, we ask questions like "How many Starbucks are there near Union Street?" or "Which housing estate in Singapore has the highest growth potential for the next 10 years?"

(2/7)