Inception Labs
@inceptionlabs.bsky.social
Pioneering a new generation of LLMs.
Try Mercury Coder on our playground at chat.inceptionlabs.ai
February 26, 2025 at 8:51 PM
We achieve over 1000 tokens/second on NVIDIA H100s. Blazing fast generations without specialized chips!
February 26, 2025 at 8:51 PM
Mercury Coder diffusion large language models match the performance of frontier speed-optimized models like GPT-4o Mini and Claude 3.5 Haiku while running up to 10x faster.
February 26, 2025 at 8:51 PM
We are excited to introduce Mercury, the first commercial-grade diffusion large language model (dLLM)! dLLMs push the frontier of intelligence and speed with parallel, coarse-to-fine text generation.
February 26, 2025 at 8:51 PM
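The "parallel, coarse-to-fine" generation mentioned above can be illustrated with a toy sketch of masked-diffusion decoding: start from a fully masked sequence and reveal many positions per step, so the number of model passes is fixed by `steps` rather than by sequence length (unlike one-token-at-a-time autoregressive decoding). This is a minimal illustration only; the actual Mercury architecture and sampler are not public, and the seeded RNG here stands in for a real denoising network's predictions.

```python
import random

def toy_diffusion_decode(vocab, length=16, steps=4, seed=0):
    """Toy coarse-to-fine (masked-diffusion style) text generation.

    Each step fills in a batch of masked positions in parallel, so the
    whole sequence is produced in `steps` passes regardless of `length`.
    A real dLLM would use a trained denoiser; rng.choice is a stand-in.
    """
    rng = random.Random(seed)
    seq = [None] * length                 # fully masked starting sequence
    masked = list(range(length))
    per_step = -(-length // steps)        # ceil division: positions per step
    for _ in range(steps):
        batch, masked = masked[:per_step], masked[per_step:]
        for pos in batch:
            seq[pos] = rng.choice(vocab)  # "denoise" this position
        if not masked:
            break
    return seq

tokens = toy_diffusion_decode(["foo", "bar", "baz"], length=8, steps=4)
print(tokens)
```

The speed claim in the thread follows from this shape of decoding: with a fixed small number of parallel refinement steps, throughput is no longer bounded by one forward pass per generated token.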