One of the best parts of NanoGPT, though: you can switch to any model you wish, at any time.
Plus: no logging, and no need to reveal your identity to us. Your privacy matters.
To be clear, in both cases we never see your chats and conversations. The import happens locally, and your chats, conversations, and images are stored locally.
See also nano-gpt.com/claude.
Fast version:
- Go to Claude Settings (claude.ai/settings/dat...) and click "export data" under data controls
- Import into your NanoGPT conversations (nano-gpt.com/conversations).
That's all! All chats preserved.
See also nano-gpt.com/chatgpt.
Fast version:
- Go to ChatGPT Settings — Data Controls (chatgpt.com#settings/Dat...) and click "Export data".
- Import into your NanoGPT conversations (nano-gpt.com/conversations).
That's all! All chats and images preserved.
- Uncached input: $5.00 per million tokens
- Cached input: $2.50 per million tokens
- Output generation: $10.00 per million tokens
Retention: 30 days by default; configurable 1–365 days via :memory-<days> or memory_expiration_days header
Typical usage: 8k–20k tokens per session
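As a rough illustration, the rates above can be plugged into a quick per-session cost estimate. The token counts in the example are made-up numbers for a session in the typical range, not measurements:

```python
# Rough cost estimate from the listed per-million-token rates.
# The example token counts below are hypothetical.

UNCACHED_PER_M = 5.00   # $ per million uncached input tokens
CACHED_PER_M = 2.50     # $ per million cached input tokens
OUTPUT_PER_M = 10.00    # $ per million output tokens

def session_cost(uncached_in, cached_in, output):
    """Dollar cost for one session's token usage."""
    return (uncached_in * UNCACHED_PER_M
            + cached_in * CACHED_PER_M
            + output * OUTPUT_PER_M) / 1_000_000

# Example: a 20k-token session, half the input served from cache.
print(round(session_cost(8_000, 8_000, 4_000), 4))  # → 0.1
```

So a fairly large session in the stated 8k–20k range costs on the order of ten cents, less when more of the input is cached.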
- Context Memory over the API does not send data to Google Analytics or use cookies
- Only your conversation messages are sent to Polychat for compression
- No email, IP address, or other metadata is shared; only the prompts are sent
When using Context Memory, your conversation data is processed by Polychat's API which uses Google/Gemini in the background with maximum privacy settings.
You can review Polychat's full privacy policy at polychat.co/legal/privacy.
This means you can have conversations with millions of tokens of history, but the AI model only sees the intelligently compressed version that fits within its context window.
- You send your full conversation history to our API
- Context Memory compresses this into a compact representation with all relevant information
- Only the compressed version is sent to the AI model (OpenAI, Anthropic, etc.)
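The three steps above can be sketched conceptually. Everything here is a toy stand-in: the `compress` placeholder just keeps the most recent messages, whereas the real Context Memory compression is semantic and happens server-side, and `call_model` stands in for the downstream provider call:

```python
# Conceptual sketch of the Context Memory flow (toy stand-ins only).

def compress(history, budget=4):
    """Placeholder: keep only the most recent messages.
    The real service builds a compact semantic representation instead."""
    return history[-budget:]

def call_model(messages):
    """Placeholder for the downstream model call (OpenAI, Anthropic, ...)."""
    return f"model saw {len(messages)} messages"

# 1. Client sends the full conversation history.
full_history = [{"role": "user", "content": f"message {i}"} for i in range(100)]

# 2. Context Memory compresses it into a compact representation.
compact = compress(full_history)

# 3. Only the compressed version reaches the AI model.
print(call_model(compact))  # → model saw 4 messages
```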
By default, Context Memory retains your compressed chat state for 30 days.
Retention is rolling and based on the conversation’s last update: each new message resets the timer, and the thread expires N days after its last activity.
You can configure retention from 1 to 365 days.
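Rolling retention means expiry is always measured from the last activity, not from when the thread was created. A small sketch of that bookkeeping (the 30-day default is from the text above; the timestamps are invented):

```python
from datetime import datetime, timedelta

def expires_at(last_activity: datetime, retention_days: int = 30) -> datetime:
    """Rolling expiry: N days after the conversation's last update."""
    return last_activity + timedelta(days=retention_days)

# Thread last touched on Jan 1 expires on Jan 31.
print(expires_at(datetime(2025, 1, 1)))   # → 2025-01-31 00:00:00

# A new message on Jan 20 resets the timer: expiry moves to Feb 19.
print(expires_at(datetime(2025, 1, 20)))  # → 2025-02-19 00:00:00
```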
Using Context Memory
Simple. Add :memory to any model name.
Or pass a header: memory: true.
Or on our frontend, just check "enable context memory".