Dear Author,
Thank you for open-sourcing your code and sharing your valuable work.
I have been reading your paper with great interest and attempting to replicate your results. According to the paper, you used blip2-flan-t5-xxl to generate the reasoning rationales. However, in my replication attempts I have been unable to produce intermediate rationale files similar to those described.
I wanted to ask:
- Is blip2-flan-t5-xxl indeed the correct model to use, or might there have been a different model involved?
- Do specific prompt designs play a crucial role in obtaining the desired rationales?
If possible, could you kindly provide examples of the prompts you used? It would be incredibly helpful for reproducing the results more accurately.
Thank you for your time and for your contributions to the community. I look forward to your response.