This is a 30B parameter MoE with 3B active parameters and is the successor to their previous 7B omni model. [1]
You can expect this model to have similar performance to the non-omni version. [2]
There aren't many open-weights omni models so I consider this a big deal. I would use this model to replace the keyboard and monitor in an application while doing the heavy lifting with other tech behind the scenes. There is also a reasoning version, which might be a bit amusing in an interactive voice chat if it pronounces the thinking tokens while working through to a final answer.
1. https://huggingface.co/Qwen/Qwen2.5-Omni-7B
2. https://artificialanalysis.ai/models/qwen3-30b-a3b-instruct
- 650M Audio Encoder
- 540M Vision Encoder
- 30B-A3B LLM
- 3B-A0.3B Audio LLM
- 80M Transformer/200M ConvNet audio token to waveform
This is a closed-weights update to their Qwen3-Omni model. They had a previous open-weights release, Qwen/Qwen3-Omni-30B-A3B-Instruct, and a closed version, Qwen3-Omni-Flash.
You basically can't use this model right now since none of the open-source inference frameworks have it fully implemented. It works on transformers, but it's extremely slow.
No... that website is not helpful. If you take it at face value, it is claiming that the previous Qwen3-Omni-Flash wasn't open either, but that seems wrong? It is very common for these blog posts to get published before the model weights are uploaded.
Based on things I had read over the past several months, Qwen3-Flash seemed to just be a weird marketing term for the Qwen3-Omni-30B-A3B series, not a different model. If they are not the same, then that is interesting/confusing.
red2awn 6 hours ago [-]
It is an in-house closed weight model for their own chat platform, mentioned in Section 5 of the original paper: https://arxiv.org/pdf/2509.17765
I've seen it in their online materials too but can't seem to find it now.
gardnr 9 hours ago [-]
I can't find the weights for this new version anywhere. I checked modelscope and huggingface. It looks like they may have extended the context window to 200K+ tokens but I can't find the actual weights.
Their benchmark table shows it beating Qwen3-235B-A22B.
Does "Flash" in the name of a Qwen model indicate a model-as-a-service and not open weights?
Was it being closed weight obvious to you from the article? Trying to understand why I was confused. Had not seen the "Flash" designation before
Also 30B models can beat a semi-recent 235B with just some additional training?
red2awn 6 hours ago [-]
They had a Flash variant released alongside the original open weight release. It is also mentioned in Section 5 of the paper: https://arxiv.org/pdf/2509.17765
For the evals it's probably just trained on a lot of benchmark-adjacent datasets compared to the 235B model. A similar thing happened with another model today: https://x.com/NousResearch/status/1998536543565127968 (a 30B model trained specifically to do well in maths gets near-SOTA scores)
tensegrist 8 hours ago [-]
> There is also a reasoning version, which might be a bit amusing in an interactive voice chat if it pronounces the thinking tokens while working through to a final answer.
last i checked (months ago) claude used to do this
andy_xor_andrew 8 hours ago [-]
> This is a 30B parameter MoE with 3B active parameters
Where are you finding that info? Not saying you're wrong; just saying that I didn't see that specified anywhere in the linked page, or on their HF.
plipt 7 hours ago [-]
The link[1] at the top of their article to HuggingFace goes to some models named Qwen3-Omni-30B-A3B that were last updated in September. None of them have "Flash" in the name.
The benchmark table shows this Flash model beating their Qwen3-235B-A22B. I don't see how that is possible if it is a 30B-A3B model.
I don't see a mention of a parameter count anywhere in the article. Do you? This may not be an open weights model.
This article feels a bit deceptive.
1: https://huggingface.co/collections/Qwen/qwen3-omni
Does Qwen3-Omni support real-time conversation like GPT-4o? Looking at their documentation it doesn't seem like it does.
Are there any open weight models that do? Not talking about speech to text -> LLM -> text to speech btw I mean a real voice <-> language model.
edit:
It does support real-time conversation! Has anybody here gotten that to work on local hardware? I'm particularly curious if anybody has run it with a non-nvidia setup.
potatoman22 4 hours ago [-]
From what I can tell, their official chat site doesn't have a native audio -> audio model yet. I like to test this through heteronyms (e.g. "record" the noun vs. "record" the verb) and by asking it to change its pitch or produce sounds.
sosodev 3 hours ago [-]
Huh, you're right. I tried your test and it clearly can't tell the two pronunciations apart. That seems to imply they're using some sort of TTS mechanism. Which is really weird because Qwen3-Omni claims to support direct audio input into the model. Maybe it's a cost saving measure?
red2awn 7 hours ago [-]
None of the inference frameworks (vLLM/SGLang) support the full model, let alone on non-Nvidia hardware.
AndreSlavescu 6 hours ago [-]
We actually deployed working speech to speech inference that builds on top of vLLM as the backbone. The main thing was to support the "Talker" module, which is currently not supported on the qwen3-omni branch for vLLM.
Check it out here: https://models.hathora.dev/model/qwen3-omni
Nice work. Are you working on streaming input/output?
AndreSlavescu 5 hours ago [-]
Yeah, that's something we currently support. Feel free to try the platform out! No cost to you for now, you just need a valid email to sign up on the platform.
sosodev 5 hours ago [-]
Is your work open source?
whimsicalism 19 minutes ago [-]
Makes sense, I think streaming audio->audio inference is a relatively big lift.
sosodev 6 hours ago [-]
That's unfortunate but not too surprising. This type of model is very new to the local hosting space.
dsrtslnd23 10 hours ago [-]
it seems to be able to do native speech-speech
sosodev 10 hours ago [-]
It does for sure. I did some more digging and it does real-time too. That's fascinating.
ivape 6 hours ago [-]
That's exciting. I doubt there are any polished voice chat local apps yet that you can easily plug this into (I doubt the user experience is "there" yet). Even stuff like Silly Tavern is near unusable, lots of work to be done on the local front. Local voice models are what's going to enable that whole Minority Report workflow soon enough (especially if commands and intent are determined at the local level, and the meat of the prompt is handled by a larger remote model).
This is part of programming that I think is the new field. There will be tons of work for those that can build the new workflows which will need to be primarily natural language driven.
The creator posted a little demo of it working with Qwen3 Omni that is quite impressive: https://www.youtube.com/watch?v=5DBFVe3cLto
He didn't include any details regarding how the model was running though
terhechte 9 hours ago [-]
Is there a way to run these Omni models on a Macbook quantized via GGUF or MLX? I know I can run it in LMStudio or Llama.cpp but they don't have streaming microphone support or streaming webcam support.
Qwen usually provides example code in Python that requires CUDA and a non-quantized model. I wonder if there is by now a good open source project to support this use case?
tgtweak 7 hours ago [-]
You can probably follow the vLLM instructions for omni here, then use the included voice demo html to interface with it:
https://github.com/QwenLM/Qwen3-Omni#vllm-usage
https://github.com/QwenLM/Qwen3-Omni?tab=readme-ov-file#laun...
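Once a server is up per those instructions, a quick way to sanity-check the text path is vLLM's OpenAI-compatible endpoint. This is only a sketch: the port and model name below assume a default `vllm serve Qwen/Qwen3-Omni-30B-A3B-Instruct` launch, and the audio/voice side is left to the bundled demo page.

    # Minimal sketch: query a locally served Qwen3-Omni instance over vLLM's
    # OpenAI-compatible API. Assumes the server runs on the default port 8000
    # and registered the model name used below.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    resp = client.chat.completions.create(
        model="Qwen/Qwen3-Omni-30B-A3B-Instruct",  # must match the served model name
        messages=[{"role": "user", "content": "In one sentence, what can you do?"}],
    )
    print(resp.choices[0].message.content)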
Whisper and Qwen Omni models have completely different architectures as far as I know
banjoe 9 hours ago [-]
Wow, crushing 2.5 Flash on every benchmark is huge. Time to move all of my LLM workloads to a local GPU rig.
embedding-shape 9 hours ago [-]
Just remember to benchmark it yourself first with your private task collection, so you can actually measure them against each other. Pretty much any public benchmark is unreliable at the moment, and making model choices based on others' benchmarks is bound to leave you disappointed.
MaxikCZ 8 hours ago [-]
This. The last benchmarks of DSv3.2spe hinted at it beating basically everything, yet in my testing even Sonnet is miles ahead in both speed and accuracy.
red2awn 6 hours ago [-]
Why would you use an Omni model for a text-only workload... There is Qwen3-30B-A3B.
binsquare 10 hours ago [-]
Does anyone else find a hard-to-pin-down lifelessness in the speech of these voice models?
Especially in the fruit-pricing portion of the video for this model. It sounds completely normal, but I can immediately tell it is AI. Maybe it's the intonation or the overly stable rate of speech?
Lapel2742 10 hours ago [-]
IMHO it's not lifeless. It's just not overly emotional. I definitely prefer it that way. I do not want the AI to be excited. It feels so contrived.
On the video itself: Interesting, but "ideal" was pronounced wrong in German. For a promotional video, they should have checked that with native speakers. On the other hand, it's at least honest.
nunodonato 8 hours ago [-]
I hate with a passion the over-americanized "accent" of chatgpt voices. Give me a bland one any day of the week
vessenes 7 hours ago [-]
I'm not convinced it's end-to-end multimodal - if it isn't, there will be a separate speech synthesis stage, and this will be part of the result. You could test by having it sing or do some accents, or have it talk back to you in an accent you give it.
sosodev 10 hours ago [-]
I think it's because they've crammed vision, audio, multiple voices, prosody control, multiple languages, etc into just 30 billion parameters.
I think ChatGPT has the most lifelike speech with their voice models. They seem to have invested heavily in that area while other labs focused elsewhere.
esafak 10 hours ago [-]
> Sounds completely normal but I can immediately tell it is ai.
Maybe that's a good thing?
colechristensen 10 hours ago [-]
I'm perfectly ok with and would prefer an AI "accent".
sim04ful 9 hours ago [-]
The main issue I'm facing with realtime responses (speech output) is how to separate non-diegetic outputs (e.g. thinking, structured outputs) from outputs meant to be heard by the end user.
I'm curious how anyone has solved this
artur44 7 hours ago [-]
A simple way is to split the model’s output stream before TTS.
Reasoning/structured tokens go into one bucket, actual user-facing text into another. Only the second bucket is synthesized. Most "thinking out loud" issues come from feeding the whole stream directly into audio.
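A minimal sketch of that split, assuming the serving layer tags each streamed chunk with a channel; the synthesize_audio hook here is a placeholder for whatever TTS or audio-token stage sits at the end:

    from typing import Iterable, Tuple

    def synthesize_audio(chunk: str) -> None:
        # Placeholder for the real audio stage (TTS call or audio-token decoding).
        pass

    def route_stream(chunks: Iterable[Tuple[str, str]]) -> Tuple[str, str]:
        """Route (channel, text) chunks: only the "final" channel is spoken.

        Reasoning / structured-output chunks are kept for logs and tool handling
        but never reach the audio stage.
        """
        internal, spoken = [], []
        for channel, text in chunks:
            if channel == "final":
                spoken.append(text)
                synthesize_audio(text)  # stream user-facing text to audio as it arrives
            else:
                internal.append(text)   # thinking / JSON / tool calls stay silent
        return "".join(internal), "".join(spoken)

    # Example: only "Three dollars a pound." would be synthesized.
    route_stream([("reasoning", "user asked for a price..."),
                  ("final", "Three dollars a pound.")])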
pugio 6 hours ago [-]
There is no TTS here. It's a native audio output model which outputs audio tokens directly. (At least, that's how the other real-time models work. Maybe I've misunderstood the Qwen-Omni architecture.)
artur44 6 hours ago [-]
True, but even with native audio-token models you still need to split the model's output channels. Reasoning/internal tokens shouldn't go into the audio stream; only user-facing content should be emitted as audio. The principle is the same whether the last step is TTS or audio token generation.
forgingahead 1 hours ago [-]
I truly enjoy how the naming conventions seem to follow how I did homework assignments back in the day: finalpaper-1-dec2nd, finalpaper-2-dec4th, etc etc.
devinprater 8 hours ago [-]
Wow, just 32B? This could almost run on a good device with 64 GB RAM. Once it gets to Ollama I'll have to see just what I can get out of this.
plipt 7 hours ago [-]
I see that their HuggingFace link goes to some Qwen3-Omni-30B-A3B models that show a last-updated date of September.
The benchmark table in their article shows Qwen3-Omni-Flash-2025-12-01 (and the previous Flash) beating Qwen3-235B-A22B. How is that possible if this is only a 30B-A3B model? It's also confusing that the comparison column starts out with one model but changes as you go down the table.
I don't see any Flash variant listed on their Hugging Face. Am I just missing it, or do these names refer to a model only used for their API service, with no open weights to download?
apexalpha 7 hours ago [-]
I run these on a 48GB Mac because of the unified memory.
> How many resistors are used in fuzzhugger phantom octave guitar pedal?
Weird, as someone not having a database of the web, I wouldn't be able to calculate either result.
dvh 10 hours ago [-]
"I don't know" would be perfectly reasonable answer
MaxikCZ 8 hours ago [-]
I feel like there's a time in the near future when LLMs will be too cautious to answer any question they aren't sure about, and most of the human effort will go into pleading with the LLM to at least try to give an answer, which will almost always be correct anyway.
plufz 6 hours ago [-]
That would be great if you could have a setting like temperature, from 0.0 to 1.0 ("only answer if you are 100% sure" through "guess as much as you like").
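One crude way to approximate such a knob today is thresholding on the token logprobs most OpenAI-compatible servers can return. A sketch only: mean token probability is a rough proxy, not real factual confidence, and the endpoint, model name and 0.5 cutoff below are all assumptions.

    import math
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # any OpenAI-compatible server

    def answer_if_confident(question: str, cutoff: float = 0.5) -> str:
        # Draft an answer and ask the server for per-token logprobs.
        resp = client.chat.completions.create(
            model="Qwen/Qwen3-30B-A3B-Instruct",  # assumption: whatever model the server exposes
            messages=[{"role": "user", "content": question}],
            logprobs=True,
        )
        tokens = resp.choices[0].logprobs.content or []
        # Mean token probability as a (very rough) stand-in for confidence.
        mean_p = sum(math.exp(t.logprob) for t in tokens) / max(len(tokens), 1)
        return resp.choices[0].message.content if mean_p >= cutoff else "I don't know."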
littlestymaar 5 hours ago [-]
It's not going to happen as the user would just leave the platform.
It would be better for most API usage though; for a business, doing just a fraction of the job with 100% accuracy is often much preferable to claiming to do 100% when 20% of it is garbage.
kaoD 9 hours ago [-]
> as someone not having a database of the web, I wouldn't be able to calculate either result
And that's how I know you're not an LLM!
iFire 10 hours ago [-]
I tend to pick things where I think the answer is in the introduction material, like exams that test what was taught.
esafak 10 hours ago [-]
This is just trivia. I would not use it to test computers -- or humans.
littlestymaar 8 hours ago [-]
It's a good way to assess the model with respect to hallucinations though.
I don't think a model should know the answer, but it must be able to know that it doesn't know if you want to use it reliably.
esafak 8 hours ago [-]
No model is good at this yet. I'd expect the flagships to solve the first.
parineum 10 hours ago [-]
Everything is just trivia until you have a use for the answer.
OP provided a web link with the answer; aren't these models supposed to be trained on all of that data?
esafak 9 hours ago [-]
There is nothing useful you can do with this information. You might as well memorize the phone book.
The model has a certain capacity -- quite limited in this case -- so there is an opportunity cost in learning one thing over another. That's why it is important to train on quality data; things you can build on top of.
parineum 28 minutes ago [-]
What if you are trying to fix one of these things and need a list of replacement parts?
esafak 15 minutes ago [-]
Not a problem for this model.
DennisP 9 hours ago [-]
Just because it's in the training data doesn't mean the model can remember it. The parameters total 60 gigabytes; there's only so much trivia that can fit in there, so it has to do lossy compression.
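The 60 GB figure is just the parameter count times the weight precision, a back-of-the-envelope sketch assuming bf16 weights at 2 bytes each:

    total_params = 30e9        # ~30B total parameters (MoE, ~3B active per token)
    bytes_per_param = 2        # bf16/fp16 storage
    print(f"{total_params * bytes_per_param / 1e9:.0f} GB")  # -> 60 GB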
bongodongobob 23 minutes ago [-]
Lol I asked it how many rooms I have in my house and it got that wrong. Llms are useless amirite
cindyllm 20 minutes ago [-]
[dead]
brookst 10 hours ago [-]
Where did you try it? I don't see this model listed in the linked Qwen chat.
edit: Nevermind, in spite of them linking it at the top, they are the old models. Also, the HF demo is calling their API and not using HF for compute.
Not their fault frontier labs are letting their speech to speech offerings languish.
Interesting - when I asked the omni model at qwen.com what version it was, I got a testy "I don't have a version" and then was told my chat was blocked for inappropriate content. A second try asking for knowledge cutoff got me the more equivocal "2024, but I know stuff after that date, too".
No idea how to check if this is actually deployed on qwen.com right now.
zamadatix 7 hours ago [-]
> No idea how to check if this is actually deployed on qwen.com right now.
Assuming you mean qwen.ai, when you run a query it should take you to chat.qwen.ai with the list of models in the top left. None of the options appear to be the -Omni variant (at least when anonymously accessing it).
vessenes 7 hours ago [-]
Thanks - yes - I did. The blog post suggests clicking the 'voice' icon on the bottom right - that's what I did.
mh- 5 hours ago [-]
For what it's worth, that's not a reliable way to check what model you're interacting with.