Fix: Add correct return behaviour when output_hidden_states=True for CLIP and SIGLIP vision models #44952
Conversation
[For maintainers] Suggested jobs to run (before merge): run-slow: clip, siglip
I think it is quite the same as #44431, no?
Ah nice, it is corrected in the mentioned PR. Works when using:

```python
outputs = model.get_image_features(**inputs, output_hidden_states=True)
len(outputs.hidden_states)
```

or

```python
outputs = model(**inputs, output_hidden_states=True)
len(outputs.vision_model_output.hidden_states)
```

So, the parameter output_hidden_states=True in AutoModel.from_pretrained() becomes deprecated, correct? Thank you 😊
Not really. The issue you are seeing seems to be more of a general behavior for multimodal models, since they have at least two configs embedded inside a bigger config. So you are not changing
What does this PR do?
Fixes the missing hidden states in the returned output dictionary when the parameter output_hidden_states=True is passed to models like CLIP or SigLIP. This is especially pertinent for the vision model config.
According to #42759, it is not clear what the behaviour should be for the text model; it seems that the SiglipModel class does not successfully cascade this config to the vision and text models.
Reproduction Code
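No snippet was included in this section; a minimal sketch of the reported behaviour might look like the following. It uses a tiny randomly initialised CLIPVisionModel so no checkpoint download is needed; the config values here are illustrative only, not taken from the original report.

```python
import torch
from transformers import CLIPVisionConfig, CLIPVisionModel

# Tiny illustrative config (values are arbitrary, chosen for speed).
config = CLIPVisionConfig(
    hidden_size=32,
    intermediate_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    image_size=32,
    patch_size=16,
)
model = CLIPVisionModel(config)
pixel_values = torch.randn(1, 3, 32, 32)

# Passing the flag at call time returns one hidden state per layer
# plus the embedding output:
out_call = model(pixel_values=pixel_values, output_hidden_states=True)
print(len(out_call.hidden_states))  # num_hidden_layers + 1

# Setting the flag on the config instead, which is what
# from_pretrained(..., output_hidden_states=True) effectively does,
# is the path where affected versions return hidden_states=None:
config.output_hidden_states = True
model = CLIPVisionModel(config)
out_config = model(pixel_values=pixel_values)
print(out_config.hidden_states is None)
```

On a fixed version both calls should yield a populated hidden_states tuple; the second print returning True is the reported bug.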
Fixes #42759 #43618
System Info
transformers 5.3.0
python 3.10.6
Code Agent Policy
The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.
PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.
This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read
CONTRIBUTING.md.

Before submitting
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@yonigozlan @zucchini-nlp