Colin Perkins
@csperkins.org
Computer networking research and Internet standards. Professor in the School of Computing Science at the University of Glasgow. Long-time IETF participant. Former chair of the IRTF. Personal views only. 🏴󠁧󠁢󠁳󠁣󠁴󠁿🇪🇺
https://csperkins.org/
“Where wizards stay up late” is generally regarded as one of the best and most accurate
August 14, 2025 at 4:26 PM
Accordingly, I'd argue that if politicians find LLMs useful, governments should run private instances of the models, trained in a known way with known data, such that the limitations and biases of the model can be quantified. They shouldn't use ChatGPT or similar public services. 6/6
August 6, 2025 at 12:06 PM
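For illustration, a minimal sketch of what "a private instance" means in practice, assuming the Hugging Face transformers library and open-weights model files that have already been obtained and audited; the local path is a placeholder, not a recommendation:

from transformers import pipeline

# local_files_only=True means the library will not contact any external hub:
# the weights must already be on disk, fetched through a controlled process.
generator = pipeline(
    "text-generation",
    model="/srv/models/audited-llm",  # hypothetical path to vetted weights
    model_kwargs={"local_files_only": True},
)

# Prompts and answers never leave the machine running this process.
result = generator("Summarise the arguments in this briefing.", max_new_tokens=200)
print(result[0]["generated_text"])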
You also have no way of knowing whether the LLM you're interacting with is the same as that provided to other users of the service or whether it has been modified to provide targeted disinformation to affect your decisions – modifying a service to target a high-value individual is possible. 5/
August 6, 2025 at 12:06 PM
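This is a check that self-hosting does make possible: hash the weight files and compare against a digest published when the build was audited. A sketch, with the file path and expected digest as placeholders:

import hashlib
from pathlib import Path

# Digest recorded when the model build was audited; placeholder value.
EXPECTED_SHA256 = "<digest published with the audited build>"

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of(Path("/srv/models/audited-llm/model.safetensors"))
# If this fails, the weights are not the ones that were audited. No
# comparable check is possible for a model hidden behind a hosted API.
assert digest == EXPECTED_SHA256, "weights differ from the audited build"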
Second, as a user of these services you have no way of knowing how they were trained, what data was used in their training, and what biases they may have as a result. You don't know whether the responses they give are aligned with your values or based on accurate data. 4/
August 6, 2025 at 12:06 PM
Further, the authorities in the jurisdiction where the LLM hosting company is based, and where the data centre running the particular LLM instance is located, can also request access. Encryption cannot protect against this, since a request must be decrypted before the LLM can process it. 3/
August 6, 2025 at 12:06 PM
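To make the decryption point concrete: TLS protects a prompt in transit, but it terminates at the operator's servers, where the request is plaintext before any model sees it. A toy stand-in for the service side (endpoint and field names are illustrative):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PromptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        # At this point the prompt is plaintext on the operator's machine,
        # whatever encryption was used on the wire.
        print("operator can log:", body["prompt"])
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"reply": "..."}).encode())

HTTPServer(("localhost", 8080), PromptHandler).serve_forever()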
First, due to the way these technologies work, the company operating the LLM unavoidably has access to the questions asked and the answers the LLM provides. That information is potentially sensitive even if it's not directly related to national security. 2/
August 6, 2025 at 12:06 PM
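The client side of the same point: a typical hosted-LLM request carries the full prompt in the request body, so the operator receives it verbatim. The URL and payload shape below are illustrative, not any particular provider's API:

import json
import urllib.request

payload = json.dumps({"prompt": "Draft a briefing on <sensitive topic>."}).encode()
req = urllib.request.Request(
    "https://llm.example/v1/generate",  # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
# The operator's servers decrypt and read the payload in full before any
# model runs; nothing in the protocol prevents them logging it.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))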