Our collaborative project, RawMed, has been accepted at #NeurIPS2025. This pioneering framework synthesizes multi-table, time-series clinical data using text-based representations, compression techniques, and efficient autoregressive modeling.
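For intuition, here is a minimal, hypothetical sketch of the text-based representation idea: rows from multiple clinical tables are flattened into one timestamp-ordered token sequence that an autoregressive model can then generate left to right. The schema, field names, and separator format are illustrative assumptions, not RawMed's actual encoding.

```python
# Hypothetical sketch of a text-based, multi-table time-series encoding
# (table names, fields, and separators are illustrative only).
from typing import Dict, List

def serialize_events(tables: Dict[str, List[dict]]) -> str:
    """Flatten multi-table, time-series records into one text sequence.

    Each row becomes a compact "<table> t=<time> col=val ..." span, and rows
    from all tables are interleaved in timestamp order so a single
    autoregressive model can generate them token by token.
    """
    events = []
    for name, rows in tables.items():
        for row in rows:
            fields = " ".join(f"{k}={v}" for k, v in row.items() if k != "time")
            events.append((row["time"], f"<{name}> t={row['time']} {fields}"))
    events.sort(key=lambda e: e[0])
    return " | ".join(text for _, text in events)

example = {
    "labs":   [{"time": 3, "test": "lactate", "value": 2.1}],
    "vitals": [{"time": 1, "hr": 92}, {"time": 2, "sbp": 118}],
}
print(serialize_events(example))
# <vitals> t=1 hr=92 | <vitals> t=2 sbp=118 | <labs> t=3 test=lactate value=2.1
```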
We’re excited to join the global AI community in San Diego next week to share research insights, connect with innovators, and explore the latest breakthroughs in machine learning.
📍 Stop by Booth S13 to meet the team and discover opportunities to collaborate.
It was inspiring to bring together leaders, experts, and researchers across industries to share real-world use cases, trends, and challenges in building scalable and sustainable AI.
We are excited to co-host this in-person event with Akamai Technologies, bringing together leaders from across industries and borders to explore how to design and scale high-performance AI systems.
Our RNGD chip was engineered from the ground up with AI-native architecture to redefine compute, delivering uncompromising performance with breakthrough power efficiency.
As a global company, we are excited to see contributions to @pytorch.org from 61 countries this year, not to mention all the vLLM, DeepSpeed, and Ray updates.
🤝 Connect with us at Booth S13 on October 22 and October 23 to discuss your AI infrastructure stack.
Join FuriosaAI, hosted·ai, and BlueSky Compute for a happy hour. Connect with peers across the AI infrastructure, cloud, and systems communities over drinks and light bites. Meet the team and exchange insights on next-generation AI compute.
📑 More details: furiosa.ai/blog/furiosa...
📋 Release notes: developer.furiosa.ai/latest/en/wh...
Thank you to everyone who visited our booth and joined our keynote and panel on Furiosa’s journey to building next-gen AI chips.
Drop by Booth 6307 today and tomorrow, as the conference continues, to meet the team and keep the conversation going.
👀 Don’t miss our co-founder and CEO, June Paik, onstage October 10 as he shares how Furiosa is building next-gen AI chips for a more sustainable future.
Engineered for efficient AI inference, the NXT RNGD Server hosts up to 8 RNGD accelerators, delivering:
⚡ 4 PFLOPS (FP8) compute
📦 384GB HBM + 12TB/s bandwidth
🔋 Just 3kW power draw (compared to 10.2kW for H100 SXM servers)
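A quick back-of-the-envelope pass over those numbers (simple division over the figures above; the per-card values are implied by the server totals, not separately quoted):

```python
# Rough arithmetic implied by the NXT RNGD Server specs listed above.
total_pflops_fp8 = 4.0      # PFLOPS (FP8), whole server
total_hbm_gb     = 384      # GB HBM across 8 accelerators
server_power_kw  = 3.0      # kW, RNGD server
h100_power_kw    = 10.2     # kW quoted above for an H100 SXM server

per_card_tflops = total_pflops_fp8 * 1000 / 8   # ~500 TFLOPS FP8 per RNGD
per_card_hbm_gb = total_hbm_gb / 8              # 48 GB HBM per RNGD
tflops_per_watt = total_pflops_fp8 * 1000 / (server_power_kw * 1000)  # ~1.33

print(per_card_tflops, per_card_hbm_gb, tflops_per_watt)
print(h100_power_kw / server_power_kw)          # ~3.4x lower server power draw
```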
Here’s a recording of the demo in action, showing that cutting-edge models can be deployed well within the existing power budgets of typical data centers:
We demonstrated a real-time chatbot efficiently running the model on just two RNGD cards, using MXFP4 precision: furiosa.ai/blog/furiosa...
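As a rough illustration of why MXFP4 makes a two-card deployment plausible: OCP MXFP4 stores 4-bit FP4 elements with one shared 8-bit scale per 32-element block, about 4.25 bits per weight. The parameter count below is an illustrative placeholder (not the specific model in the demo), and the 48 GB per-card figure is inferred from the server specs above (384 GB across 8 cards).

```python
# Rough memory math for MXFP4 weights (illustrative numbers only).
def mxfp4_weight_gb(num_params: float, block: int = 32) -> float:
    # OCP MXFP4: 4-bit FP4 elements plus one shared 8-bit scale per block.
    bits_per_weight = 4 + 8 / block          # = 4.25 bits per weight
    return num_params * bits_per_weight / 8 / 1e9

params = 120e9                                # placeholder: a ~120B-parameter model
print(mxfp4_weight_gb(params))                # ~63.8 GB of weights
print(2 * 48)                                 # 96 GB HBM across two RNGD cards
```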
That’s why we were excited to have Alex Liu onstage sharing how we are redefining performance and efficiency in AI infrastructure.
👀 Watch our SVP of Product and Business, Alex Liu, present on the Enterprise AI stage at 4:10 PM on September 9.
🤝 Connect with us in the app, stop by Booth 728 from September 9 to September 11, and book a meeting in advance: lp.furiosa.ai/ai-infra-sum...
Read the full spotlight in our newsletter: www.linkedin.com/pulse/furios...
👀 Read the full article here: dxttx52ei7rol.cloudfront.net/Micro_202503...
These six papers dig into ways to make advanced AI systems more efficient, more capable, and more flexible.