NanoGPT
nanogpt.bsky.social
Access every AI model privately - no subscription, pay only for what you use.
The default model for imported conversations is set to ChatGPT or Claude 4 Sonnet.

One of the best parts of NanoGPT though - you can change this to any model you wish, at any time.

Plus - no-log, and no need to reveal your identity to us. Your privacy matters.
August 26, 2025 at 11:18 AM


To be clear, in both cases we do not see your chats and conversations. The importing happens locally, and your chats, conversations, and images are stored locally.
August 26, 2025 at 11:18 AM
Claude:

See also nano-gpt.com/claude.

Fast version:

- Go to Claude Settings (claude.ai/settings/dat...) and click "Export data" under Data Controls
- Import into your NanoGPT conversations (nano-gpt.com/conversations).

That's all! All chats preserved.
NanoGPT | Every AI model | Privacy-first | No subscription | Text, Image, Video
Access all the newest AI models including ChatGPT, Claude, Gemini, Deepseek, and all image and video models. Starting at $0.01, pay only for what you use. Private, no subscription needed.
nano-gpt.com
August 26, 2025 at 11:18 AM
ChatGPT:

See also nano-gpt.com/chatgpt.

Fast version:

- Go to ChatGPT Settings — Data Controls (chatgpt.com#settings/Dat...) and click "Export data".
- Import into your NanoGPT conversations (nano-gpt.com/conversations).

That's all! All chats and images preserved.
August 26, 2025 at 11:18 AM
That's all! Go try it out, and let us know what you think.
August 13, 2025 at 2:01 PM
Pricing

- Non-cached input: $5.00 per million tokens
- Cached input: $2.50 per million tokens
- Output generation: $10.00 per million tokens

Retention: 30 days by default; configurable from 1 to 365 days via the :memory-<days> model suffix or the memory_expiration_days header

Typical usage: 8k–20k tokens per session
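At these rates, a quick back-of-the-envelope cost check for one session (the prices come from the list above; the token split between fresh input, cached input, and output is purely illustrative):

```python
# Rough cost estimate at the listed Context Memory rates.
# Prices are per million tokens; the token counts below are illustrative.
PRICE_INPUT = 5.00          # non-cached input, $/M tokens
PRICE_CACHED_INPUT = 2.50   # cached input, $/M tokens
PRICE_OUTPUT = 10.00        # output generation, $/M tokens

def session_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one session at the listed rates."""
    return (input_tokens * PRICE_INPUT
            + cached_tokens * PRICE_CACHED_INPUT
            + output_tokens * PRICE_OUTPUT) / 1_000_000

# A 20k-token session: 12k fresh input, 4k cached input, 4k output.
print(f"${session_cost(12_000, 4_000, 4_000):.4f}")  # → $0.1100
```

So even a session at the top of the typical range lands around eleven cents.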
August 13, 2025 at 2:01 PM
Important privacy details:

- Context Memory over the API does not send data to Google Analytics or use cookies
- Only your conversation messages are sent to Polychat for compression
- No email, IP address, or other metadata is shared; only the prompts are sent
August 13, 2025 at 2:01 PM
Provider: Polychat

When using Context Memory, your conversation data is processed by Polychat's API which uses Google/Gemini in the background with maximum privacy settings.

You can review Polychat's full privacy policy at polychat.co/legal/privacy.
PolyChat - Chat with multiple LLMs
Combine the world's most powerful LLMs from OpenAI, Anthropic, Perplexity, Google, DeepSeek, Llama, and more.
polychat.co
August 13, 2025 at 2:01 PM
- The model receives all the context it needs without hitting token limits

This means you can have conversations with millions of tokens of history, but the AI model only sees the intelligently compressed version that fits within its context window.
August 13, 2025 at 2:01 PM
How It Works

- You send your full conversation history to our API
- Context Memory compresses this into a compact representation with all relevant information
- Only the compressed version is sent to the AI model (OpenAI, Anthropic, etc.)
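The three steps above can be sketched as a simple proxy loop. This is an illustrative sketch only: the real compression is semantic and happens server-side via Polychat, while the `compress` stand-in below just keeps the most recent turns within a word budget.

```python
# Illustrative sketch of the Context Memory flow (not the real implementation).

def compress(messages: list[dict], budget: int = 8_000) -> list[dict]:
    """Stand-in for server-side compression: keep the most recent
    messages that fit within a rough word budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        used += len(msg["content"].split())
        if used > budget:
            break
        kept.append(msg)
    return list(reversed(kept))

def chat_with_memory(full_history: list[dict], model_api) -> object:
    # 1. Client sends the full conversation history.
    # 2. Context Memory compresses it to fit the model's window.
    compressed = compress(full_history)
    # 3. Only the compressed version reaches the AI model.
    return model_api(compressed)
```

However long the history grows, the model only ever sees the compacted version.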
August 13, 2025 at 2:01 PM
Retention

By default, Context Memory retains your compressed chat state for 30 days.

Retention is rolling and based on the conversation’s last update: each new message resets the timer, and the thread expires N days after its last activity.

You can configure retention from 1 to 365 days.
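A tiny helper for building a model name with a custom retention window. The function name is hypothetical; the :memory-<days> suffix, the 1–365 range, and the 30-day default come from the posts above.

```python
def with_memory(model: str, days: int = 30) -> str:
    """Append the :memory-<days> retention suffix to a model name.

    Days are clamped to the documented 1-365 range. Assumption: plain
    ':memory' (no number) uses the 30-day default.
    """
    days = max(1, min(365, days))
    return f"{model}:memory" if days == 30 else f"{model}:memory-{days}"

# e.g. with_memory("some-model", 90) -> "some-model:memory-90"
```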
August 13, 2025 at 2:01 PM
The earlier you enable it, the more complete your memory will be.

Using Context Memory

Simple. Add :memory to any model name.

Or pass a header:

memory: true.

Or on our frontend, just check "enable context memory".
August 13, 2025 at 2:01 PM