Greg 🐂 @ Neurips 2024
@gregschoeninger.bsky.social
CEO of Oxen.ai, host of Arxiv Dives, training LLMs since they were smol.

Letting Oxen plow and maintain the fields so we don’t have to.
Playground by Richard Powers has been an entertaining and mysterious read so far. No idea where it's going, but I like where it's been.

And of course next up in the queue is @chiphuyen.bsky.social’s new book on AI engineering 🤓
December 25, 2024 at 8:55 PM
Reposted by Greg 🐂 @ Neurips 2024
Great way to finish off the year @gregschoeninger.bsky.social @sthoward.bsky.social @oxen-ai.bsky.social Subscribe to their channel at youtube.com/@oxen-ai to be notified of the recording soon
December 20, 2024 at 7:52 PM
Coding LLMs should be open and able to run locally
December 19, 2024 at 4:46 AM
"Robot, remove the stain"

The robot then grabs some scissors ✂️ and cuts it out of the t-shirt 👚.

This is a great example, from Danica Kragic at @neuripsconf.bsky.social, of why we need more data for robotics
December 13, 2024 at 8:18 PM
Hit me up if you’re going to @neuripsconf.bsky.social and want to nerd out on synthetic data 🤓
December 10, 2024 at 4:40 PM
Still cooking on a synthetic dataset using Qwen VL Models
December 10, 2024 at 4:38 PM
Ask and you shall receive, @scienceartmagic.bsky.social! Qwen Vision Language Models now live on Oxen.ai

Thanks FireworksAI for doing the heavy lifting 🎆
December 6, 2024 at 10:14 PM
Reposted by Greg 🐂 @ Neurips 2024
Join me and my friends @oxen-ai.bsky.social @gregschoeninger.bsky.social @sthoward.bsky.social tomorrow (technically today, but I haven't slept yet) at 10am Pacific/1pm Eastern for an arXiv Dive on LLaVA-CoT

lu.ma/arxivdive-32...
Arxiv Dives with Oxen.AI - LLaVA-CoT: Vision Language + Step-by-step Reasoning · Zoom · Luma
Hey Nerd, join the Herd!... for a little book/paper review. WHAT TO EXPECT Each week we pick a paper to cover in depth and have open Q/A. Often joined by paper…
lu.ma
December 6, 2024 at 6:36 AM
Tomorrow we'll be diving into how you can use synthetic data, chains of thought, and inference time scaling to add reasoning to Visual LLMs.

TLDR ~ LLaVA-CoT beats some closed-source models on many benchmarks!
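
If you want a feel for the format before the dive: LLaVA-CoT structures each answer into four tagged stages and picks between candidates stage by stage at inference time. A minimal sketch of parsing that staged output (the tag names follow the paper; the example text and helper are my own):

```python
# Sketch of the LLaVA-CoT staged output format, not the authors' code.
import re

STAGES = ["SUMMARY", "CAPTION", "REASONING", "CONCLUSION"]

def parse_stages(response: str) -> dict[str, str]:
    """Split a model response into its four tagged reasoning stages."""
    parsed = {}
    for stage in STAGES:
        match = re.search(rf"<{stage}>(.*?)</{stage}>", response, re.DOTALL)
        parsed[stage] = match.group(1).strip() if match else ""
    return parsed

example = (
    "<SUMMARY>Check the math on the receipt.</SUMMARY>"
    "<CAPTION>A receipt with three line items.</CAPTION>"
    "<REASONING>4.50 + 3.90 + 4.00 = 12.40, matching the printed total.</REASONING>"
    "<CONCLUSION>The total is correct: $12.40.</CONCLUSION>"
)
print(parse_stages(example)["CONCLUSION"])  # The total is correct: $12.40.
```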
December 6, 2024 at 6:31 AM
Thanksgiving reminder, enjoy what you have today
November 28, 2024 at 5:23 PM
@oxen-ai.bsky.social now supports Groq for Visual LLMs, including Llama 3.2 Vision (90B and 11B) and LLaVA 7B

Can confirm, it's fast 🔥

First test processed 125 receipts in 24 seconds for $0.02
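
(Quick math: $0.02 / 125 ≈ $0.00016 per receipt, at a bit over 5 receipts per second.)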
November 27, 2024 at 6:53 PM
Posting on Reddit is awesome because you get random experts telling you about models and techniques you’ve never heard of before
November 27, 2024 at 4:07 AM
Oxen.ai can now see! 🖼️ 👀

Simply select a dataset with images, type in your prompt, and let GPT-4o or GPT-4o-mini do the work for you.

Let us know if there are other vision models you'd like to see us integrate.
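
For the curious, a single call like this boils down to roughly the following with the OpenAI Python SDK. This is my own sketch, not Oxen.ai's internals; the image URL and prompt are placeholders, and the platform handles iterating over the whole dataset for you:

```python
# Rough sketch of one vision-model call; not Oxen.ai's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is shown in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/images/0001.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```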
November 26, 2024 at 11:57 PM
Talked Upcycling MoE with Ethan He from NVIDIA and learned a lot. It's a pretty simple approach: for roughly 1/8 of the compute, you get the performance of a dense model 2x the size (according to scaling laws). Highly recommend listening to the whole presentation.

youtu.be/Cepja572USI
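
The core trick, as I understood it: instead of training the MoE from scratch, you initialize every expert as a copy of the already-trained dense FFN and bolt on a fresh router. A minimal sketch of that initialization (my own illustration, not Ethan's code; the sizes, expert count, and top-k are made up):

```python
# Hedged sketch of MoE "upcycling": each expert starts as a copy of a
# trained dense FFN; only the router is trained from scratch.
import copy
import torch
import torch.nn as nn

class UpcycledMoE(nn.Module):
    def __init__(self, dense_ffn: nn.Module, d_model: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Every expert begins with the dense FFN's exact weights.
        self.experts = nn.ModuleList(
            [copy.deepcopy(dense_ffn) for _ in range(num_experts)]
        )
        self.router = nn.Linear(d_model, num_experts)  # fresh, untrained
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Send each token to its top-k experts.
        scores = self.router(x).softmax(dim=-1)
        weights, idx = scores.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

dense = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
moe = UpcycledMoE(dense, d_model=512)
print(moe(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```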
November 20, 2024 at 3:38 AM
Mooooo'ving on over to Bluesky feels good 🤠
Welcome @oxen-ai.bsky.social ! Moooooooooo

If you're into #ai #ml, they have great Arxiv Dives and AI Water Cooler discussions alternating every other Friday.

If you want to really dig into/collaborate on your datasets, they've got you covered too! oxen.ai
Home | Oxen.ai
Manage your machine learning datasets with Oxen AI.
oxen.ai
November 20, 2024 at 3:24 AM
New Oxen.ai demo just dropped on the YouTubes, come collab on some data with us 🤝

youtu.be/zukfa2jTnHw?...
Oxen.ai Demo - 11/18/2024
YouTube video by Oxen
youtu.be
November 20, 2024 at 3:08 AM