Description
I ran into two issues while trying to reproduce the tea-pouring example used in the paper. I passed an image from PhyGenBench into the stage1 config file:
EXP_NAME="initial_setup"
PROMPT="The video shows a close-up of a clear glass being filled with tea. The tea is being poured from above, and we can see the stream of tea hitting the bottom of the glass and causing ripples and splashes. The background is a plain white surface, which contrasts with the transparency of the glass and the clarity of the tea. The glass is cylindrical in shape and appears to be of a standard size for a drinking glass.
initial_boxes: [{'id': 0, 'name': 'tea', 'box': [241.8, 335.3, 165, 88]}]"
DATA_ROOT="data"
FIRST_FRAME_PATH="PhyGenBench/frame0.png"
GROUNDING_MODEL_PATH="IDEA-Research/grounding-dino-tiny"
SAM2_CHECKPOINT="GroundedSAM2/checkpoints/sam2.1_hiera_large.pt"

python -m VLIPP.stage1.stage1 \
    --prompt "$PROMPT" \
    --data_root "$DATA_ROOT" \
    --first_frame_path "$FIRST_FRAME_PATH" \
    --exp_name "$EXP_NAME" \
    --grounding_model_path "$GROUNDING_MODEL_PATH" \
    --sam2_checkpoint "$SAM2_CHECKPOINT"
Where do I obtain the grounding model?
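My working assumption is that GROUNDING_MODEL_PATH is a Hugging Face Hub id rather than a local path, so the weights should download automatically on first use. A minimal sketch of what I expect the loading to look like, assuming stage1 uses the standard transformers Grounding DINO classes:

```python
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

# The first call downloads and caches the checkpoint from the Hugging Face Hub.
processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
grounding_model = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
).to("cuda")
```

Please correct me if the script expects a local checkpoint instead.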
The first error I got was when calling the image API for GPT-4o:

messages[-1]['content'][0]['image_url'] = {"url": f"data:image/png;base64,{first_frame_b64_str}"}

I had to change all "image" keys to "image_url" before the request was accepted. Is this expected?
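For reference, here is the full message shape that worked for me. This is a minimal sketch assuming the standard OpenAI Chat Completions base64 image-input format; PROMPT and first_frame_b64_str come from the script above:

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {
                # Chat Completions expects the key "image_url" (not "image")
                # for inline base64 data-URL images.
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{first_frame_b64_str}"},
            },
        ],
    }
]
```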
Issue #2: the call in stage1 passes a sam2_model keyword argument:

data_dict = detect_and_segment(data_dict, output_root=output_root, sam2_model=sam2_model, sam2_predictor=sam2_predictor, processor=processor, grounding_model=grounding_model)

but detect_and_segment does not accept it; its signature is:

def detect_and_segment(info_json, output_root, sam2_predictor=None, processor=None, grounding_model=None, DEVICE="cuda"):

so the call raises a TypeError for the unexpected keyword argument.
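As a temporary workaround I dropped the extra keyword so the call matches the signature. A minimal sketch, assuming the SAM2 model itself is not needed inside the function because sam2_predictor already wraps it:

```python
# Hypothetical patched call in stage1: sam2_model removed; the first
# positional argument binds to the info_json parameter.
data_dict = detect_and_segment(
    data_dict,
    output_root=output_root,
    sam2_predictor=sam2_predictor,
    processor=processor,
    grounding_model=grounding_model,
)
```

Is this the intended call, or should the function signature accept sam2_model as well?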