vLLM error: google/gemma-3n-E4B-it is not a multimodal model None

#39
by tatarrecords - opened

Hi,
I tried to run the google/gemma-3n-E4B-it model in vLLM, but ran into an error:

{
    "object": "error",
    "message": "google/gemma-3n-E4B-it is not a multimodal model None",
    "type": "BadRequestError",
    "param": null,
    "code": 400
}

Is it possible to send images to this model through the vLLM framework? And if it is possible, how can I fix this problem?

Google org
edited Sep 25

Hi,

The Hugging Face and GitHub pages show that the Gemma 3n models had integration issues that require specific, recent dependency updates, such as timm. Kindly try with the latest versions, as support for new models is added frequently.
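For example, you can quickly confirm which versions you have installed before retrying; the upgrade command in the comment is the standard pip invocation, and which minimum versions are actually required is not stated in this thread:

# Quick version check for the dependencies mentioned above.
# If either is outdated, upgrade with:
#   pip install -U vllm timm
import vllm
import timm

print("vllm version:", vllm.__version__)
print("timm version:", timm.__version__)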

Once your vLLM installation is compatible, you must format your input to tell the model where the image is: include the special image token in the prompt and pass the image data separately, as sketched below.
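When you go through the OpenAI-compatible server, the chat template inserts that image token for you, and you attach the image as a separate content part. A minimal sketch, assuming the server was started with something like `vllm serve google/gemma-3n-E4B-it`, is reachable at localhost:8000, and a placeholder image URL:

# Minimal sketch of a multimodal chat completion against a vLLM
# OpenAI-compatible server. The base URL, API key, and image URL
# are placeholders, not values taken from this thread.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM server
    api_key="EMPTY",                      # vLLM ignores the key by default
)

response = client.chat.completions.create(
    model="google/gemma-3n-E4B-it",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)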

Please refer to the vLLM documentation for more details. If you have any concerns, let us know and we will assist you.
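The documentation also covers the offline route, where the prompt carries the image placeholder token and the image data is passed separately, as described above. A rough sketch; the exact placeholder string for Gemma 3n is an assumption here, so verify it against the model's tokenizer and chat template:

# Offline sketch: image token in the prompt, image data passed separately.
# The "<image_soft_token>" placeholder is an assumption for Gemma 3n;
# check the model's chat template for the real placeholder string.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="google/gemma-3n-E4B-it")
image = Image.open("example.jpg")  # hypothetical local image file

outputs = llm.generate(
    {
        "prompt": "<image_soft_token> Describe this image.",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)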

Thank you.

I am trying to run it on vLLM v0.11.0, but I only get an output the first time. If I send the exact same message again to the chat completions endpoint, I get an empty message back with no errors. I am using an A100. Can you tell me which parameters you use to deploy the model on vLLM?
