The technical report can be found here
The following are additional setup steps needed to get this working. You should still follow all the directions in the official "Introduction" further below.
- Install additional requirements
pip install -r requirements_2.txt
- Download pretrained model
./setup_download_model.sh
- Install Java
sudo apt install default-jdk
- GIT model setup
pip install -r requirements.txt
python setup.py build develop
Calling ./setup_download_data.sh will do this for you and set up the following directory structure:
video_summarisation_git/data/
|
|-- category.txt ...................... # video category name to id mapping file
|
|-- train_val/ ........................ # dir for training & validation sets
| |-- train_val_videodatainfo.json .. # annotation file
| |-- pyscenedetect_frames/ ......... # dir for pyscenedetect sampled frames
| |-- random_frames/ ................ # auto-generated: dir for randomly sampled frames
| |-- transnet_frames/ .............. # auto-generated: dir for transnet sampled frames
| `-- videos/ ....................... # parent dir for videos (each video should have its own folder inside this dir)
|
`-- test/ ............................. # dir for test set (structure same as train_val)
|-- test_videodatainfo.json ....... # annotation file
`-- [...]
Download and unzip them into data/train_val or data/test as appropriate.
- Training (2023-03-20)
- Test (2023-03-20)
- out of order frames (out of date)
./setup_sample_frames.sh train # sample frames for training data
./setup_sample_frames.sh test # same for test
Open ./setup_sample_frames.sh to get an idea of the commands run for each sampling method.
Alternatively, you can look at the actual samplers in sampling_scripts/.
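For orientation, here is a minimal sketch of what a random frame sampler could look like. The function name, example paths, and frame count are hypothetical; the actual scripts in sampling_scripts/ may work differently.
# Hypothetical random-frame sampler sketch; illustrative only.
import os
import random
import cv2

def sample_random_frames(video_path, out_dir, num_frames=6, seed=0):
    # Pick `num_frames` random frame indices and write them out as JPEGs.
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    random.seed(seed)
    for i, idx in enumerate(sorted(random.sample(range(total), min(num_frames, total)))):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(out_dir, f"frame_{i:02d}.jpg"), frame)
    cap.release()

# Hypothetical example paths following the directory layout above.
sample_random_frames("data/train_val/videos/video0/video0.mp4",
                     "data/train_val/random_frames/video0")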
- pyscenedetect
- random frames
- transnet
Do this for ONE selected sampling method using the following.
# for random frames
python command_builder/training_command.py -d data/train_val/random_frames/ -c data/train_val/train_val_videodatainfo.json
# or for transnet frames
python command_builder/training_command.py -d data/train_val/transnet_frames/ -c data/train_val/train_val_videodatainfo.json
# or for pyscenedetect frames
python command_builder/training_command.py -d data/train_val/pyscenedetect_frames/ -c data/train_val/train_val_videodatainfo.json
Alternatively, you can call ./runner.sh, which should have everything you need and will reflect the last data you ran the training command builder on.
python -m generativeimage2text.finetune -p '{
"type": "train",
"model_name": "GIT_BASE",
"model_path": "model.pt",
"batch_size": 3,
"epochs": 20,
"train_csv": "data/train_val/{FRAME DIRECTORY HERE}/processed_data_train.csv", # Be sure to swap out {FRAME DIRECTORY HERE} for the directory where your frames are
"validation_csv": "data/train_val/{FRAME DIRECTORY HERE}/processed_data_validate.csv",
"validation_annotations_json": "data/train_val/train_val_videodatainfo.json" #path to annotations file
}'
To run inference on the test set:
python -m generativeimage2text.vc_inference -p "{'type': 'multi_video_inference', 'videos_csv': '', 'annotations_json_path': '', 'model_path':'./msrvtt_model_epoch1.pt', 'model_name':'GIT_BASE', 'predictions_file':None}"
To run inference with multiple models:
python -m generativeimage2text.vc_inference -p "{'type': 'multi_video_inference_dir', 'videos_csv': '', 'annotations_json_path': '', 'model_dir':'./model_transnet', 'model_name':'GIT_BASE'}"
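If you want to sanity-check generated captions offline, a minimal sketch using the pycocoevalcap CIDEr scorer could look like the following. This is a hypothetical standalone check with made-up ids and captions, not the repo's own evaluation path, and pycocoevalcap is an assumed dependency here.
# Hypothetical offline CIDEr check; ids and captions are made up.
from pycocoevalcap.cider.cider import Cider

references = {"video1": ["a man is cooking in a kitchen"],
              "video2": ["a dog runs across a field"]}
predictions = {"video1": ["a man cooks food in a kitchen"],
               "video2": ["a dog is running on the grass"]}

score, per_video = Cider().compute_score(references, predictions)
print(f"CIDEr: {score:.3f}")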
- OneDrive link
- GCP bucket
- Viewable HTML Results
- models (see section #4)
- Where can I find an A100 to finetune on?
  - Try the Netherlands region or Salt Lake City, US.
- Errors about cv2, pandas, numpy, etc.
  - Make sure you've installed the second requirements file as described above.
- Errors about model.py when running the fine-tuning script
  - Make sure you've downloaded VATEX as described above, into the root dir of the project.
This repo presents some example code to reproduce some results in GIT: A Generative Image-to-text Transformer for Vision and Language.
- Install azfuse. The tool is used to automatically download the data. The configuration of AzFuse is already included in this repo.
- Download the source code by
git clone https://github.com/microsoft/GenerativeImage2Text.git
cd GenerativeImage2Text
- Install the package
pip install -r requirements.txt
python setup.py build develop
- Inference on a single image or multiple frames:
# single image, captioning
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
      'image_path': 'aux_data/images/1.jpg', \
      'model_name': 'GIT_BASE', \
      'prefix': '', \
}"
# single image, question answering
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
      'image_path': 'aux_data/images/1.jpg', \
      'model_name': 'GIT_BASE_VQAv2', \
      'prefix': 'what is it?', \
}"
# multiple images, captioning
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
      'image_path': ['aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg'], \
      'model_name': 'GIT_BASE_VATEX', \
      'prefix': '', \
}"
# multiple images, question answering
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_image', \
      'image_path': ['aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg', 'aux_data/images/1.jpg'], \
      'model_name': 'GIT_BASE_MSRVTT_QA', \
      'prefix': 'what is it?', \
}"
- If `prefix` is empty, it is effectively the captioning task.
- If `prefix` is a question, it is effectively the visual question answering task.
- Use a list for `image_path` if it is for video. The example here is 6 identical images, only for a demo purpose. It should be different image frames from a video.
- `model_name` here can be the following. Performance details can be found in the reference paper.

| model_name | Information | Performance |
|---|---|---|
| GIT_BASE | pretrained on 4M images | |
| GIT_BASE_COCO | fine-tuned on COCO | CIDEr: 131.4 |
| GIT_BASE_TEXTCAPS | fine-tuned on TextCaps for captioning | val/CIDEr: 64.9 |
| GIT_BASE_VQAv2 | fine-tuned on VQAv2 | test-dev: 72.72 |
| GIT_BASE_TEXTVQA | fine-tuned on TextVQA | val/acc: 18.81 |
| GIT_BASE_VATEX | fine-tuned on VATEX for captioning | public/test/CIDEr: 60.0 |
| GIT_BASE_MSRVTT_QA | fine-tuned on MSRVTT for question answering | acc: 41.0 |
| GIT_LARGE | pretrained on 14M images | |
| GIT_LARGE_COCO | fine-tuned on COCO | CIDEr: 138.5 |
| GIT_LARGE_TEXTCAPS | fine-tuned on TextCaps for captioning | val/CIDEr: 106.3 |
| GIT_LARGE_VQAv2 | fine-tuned on VQAv2 | test-dev: 75.51 |
| GIT_LARGE_TEXTVQA | fine-tuned on TextVQA | val/acc: 37.47 |
| GIT_LARGE_VATEX | fine-tuned on VATEX for captioning | public/test/CIDEr: 72.5 |
| GIT_LARGE_MSRVTT_QA | fine-tuned on MSRVTT for question answering | acc: 42.7 |
- In the dataset of cc12m, the caption may contain some special tags to hide person names, and the model might also predict such special tokens. To eliminate this issue, we removed these captions (around 25% in cc12m) and re-trained the large-sized model. The base-sized model is not affected, as cc12m is not part of the training data.

| model_name | Information | Performance |
|---|---|---|
| GIT_LARGE_R | pretrained on 14M images with special tag removed | |
| GIT_LARGE_R_COCO | fine-tuned on COCO | CIDEr: 137.6 |
| GIT_LARGE_R_TEXTCAPS | fine-tuned on TextCaps for captioning | val/CIDEr: 105.3 |
- Inference on a TSV file, which is a collection of multiple images.
- Data format (for information only)
- image TSV: Each row has two columns. The first is the image key; the second is a base64-encoded jpg or png bit string.
- caption or question tsv: Each row has two columns. The first is the image key; the second is a list of dictionaries in the json format. For caption TSV, the dictionary should contain at least the field of `caption`. For the question answering TSV, it should contain at least `question_id` and `question`.
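As a sketch of the row layout described above, the following writes one image row and one caption row. The output file names are hypothetical and only illustrate the format; they are not how the repo builds its TSVs.
# Write one image row and one caption row in the two-column TSV layout described above.
import base64
import json

key, image_path = "image_0001", "aux_data/images/1.jpg"
captions = [{"caption": "a boat on the water"}]

with open("my.img.tsv", "w") as img_tsv, open("my.caption.tsv", "w") as cap_tsv:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    img_tsv.write(f"{key}\t{b64}\n")                   # column 1: image key, column 2: base64 image
    cap_tsv.write(f"{key}\t{json.dumps(captions)}\n")  # column 2: JSON list of dicts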
- Inference on COCO Karpathy test.
  - Inference:
# base
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_tsv', \
      'image_tsv': 'data/coco_caption/test.img.tsv', \
      'model_name': 'GIT_BASE_COCO', \
      'question_tsv': null, \
      'out_tsv': 'inference/GIT_BASE_COCO/coco.tsv', \
}"
# GIT_LARGE_COCO. If there are 8 GPUs, it can be parallelized by mpirun -n 8
AZFUSE_TSV_USE_FUSE=1 mpirun -n 8 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_tsv', \
      'image_tsv': 'data/coco_caption/test.img.tsv', \
      'model_name': 'GIT_LARGE_COCO', \
      'question_tsv': null, \
      'out_tsv': 'inference/GIT_LARGE_COCO/coco.tsv', \
}"
  - Calculate the evaluation metric:
# base
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'evaluate_on_coco_caption', \
      'res_file': 'inference/GIT_BASE_COCO/coco.tsv', \
      'label_file': 'data/coco_caption/test.caption.tsv', \
}"
The CIDEr score should be 131.35 for `GIT_BASE_COCO` and 138.45 for `GIT_LARGE_COCO`. If you get a lower score (e.g. 126 for the base model), the reason could be a misalignment of the environment, e.g. the pytorch version.
  - (optional) To exactly reproduce the number, please run the following:
nvidia-docker run --ipc=host amsword/setup:py38pt19u20cu11 \
    bash -c "mkdir -p /tmp/code \
    && cd /tmp/code \
    && pip install git+https://github.com/microsoft/azfuse.git \
    && git clone https://github.com/amsword/generativeimage2text.git \
    && cd generativeimage2text \
    && pip install -r requirements.txt \
    && python setup.py build develop \
    && AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_tsv', \
          'image_tsv': 'data/coco_caption/test.img.tsv', \
          'model_name': 'GIT_BASE_COCO', \
          'question_tsv': null, \
          'out_tsv': 'inference/GIT_BASE_COCO/coco.tsv', \
          }" \
    && AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'evaluate_on_coco_caption', \
          'res_file': 'inference/GIT_BASE_COCO/coco.tsv', \
          'label_file': 'data/coco_caption/test.caption.tsv', \
          'outfile': 'inference/GIT_BASE_COCO/coco.score.json', \
          }" \
    && cat inference/GIT_BASE_COCO/coco.score.json \
    "
- Inference on vqa test
  - Inference:
# base model
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_tsv', \
      'image_tsv': 'data/TaxVQAv2/test.tsv', \
      'model_name': 'GIT_BASE_VQAv2', \
      'question_tsv': 'data/TaxVQAv2/test.caption.tsv', \
      'out_tsv': 'inference/GIT_BASE_VQAv2/snapshot/vqav2.tsv', \
}"
# GIT_LARGE_VQAv2 with 8 GPUs.
AZFUSE_TSV_USE_FUSE=1 mpirun -n 8 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_tsv', \
      'image_tsv': 'data/TaxVQAv2/test.tsv', \
      'model_name': 'GIT_LARGE_VQAv2', \
      'question_tsv': 'data/TaxVQAv2/test.caption.tsv', \
      'out_tsv': 'inference/GIT_LARGE_VQAv2/snapshot/vqav2.tsv', \
}"
  - Convert the output tsv to the json format for submission to evalai:
# base model
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'convert_tsv_to_vqa_json', \
      'predict_file': 'inference/GIT_BASE_VQAv2/snapshot/vqav2.tsv', \
      'out_json': 'inference/GIT_BASE_VQAv2/snapshot/vqav2.json', \
}"
# large model
AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'convert_tsv_to_vqa_json', \
      'predict_file': 'inference/GIT_LARGE_VQAv2/snapshot/vqav2.tsv', \
      'out_json': 'inference/GIT_LARGE_VQAv2/snapshot/vqav2.json', \
}"
Submit the file `inference/GIT_BASE_VQAv2/snapshot/vqav2.json` to evalai and you should get `72.72` on `test-dev`. If it is `GIT_LARGE_VQAv2`, the accuracy is `75.51`.
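Before uploading, you can sanity-check the converted file; this assumes the converted JSON follows the standard VQAv2 submission format, i.e. a list of question_id/answer entries.
# Quick sanity check (assumes a JSON list of {"question_id": ..., "answer": ...} entries).
import json

with open("inference/GIT_BASE_VQAv2/snapshot/vqav2.json") as f:
    preds = json.load(f)

assert isinstance(preds, list)
assert all({"question_id", "answer"} <= set(p) for p in preds)
print(len(preds), "predictions, e.g.", preds[0])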
  - (optional) To exactly reproduce the number, you can use the following:
# base model
nvidia-docker run --ipc=host amsword/setup:py38pt19u20cu11 \
    bash -c "mkdir /tmp/code \
    && cd /tmp/code \
    && pip install git+https://github.com/microsoft/azfuse.git \
    && git clone https://github.com/amsword/generativeimage2text.git \
    && cd generativeimage2text \
    && pip install -r requirements.txt \
    && python setup.py build develop \
    && AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'test_git_inference_single_tsv', \
          'image_tsv': 'data/TaxVQAv2/test.tsv', \
          'model_name': 'GIT_BASE_VQAv2', \
          'question_tsv': 'data/TaxVQAv2/test.caption.tsv', \
          'out_tsv': 'inference/GIT_BASE_VQAv2/snapshot/vqav2.tsv', \
          }" \
    && AZFUSE_TSV_USE_FUSE=1 python -m generativeimage2text.inference -p "{'type': 'convert_tsv_to_vqa_json', \
          'predict_file': 'inference/GIT_BASE_VQAv2/snapshot/vqav2.tsv', \
          'out_json': 'inference/GIT_BASE_VQAv2/snapshot/vqav2.json', \
          }" \
    "
Note: please modify the docker command so that the output file can be saved permanently to the host machine. It is also recommended to run it inside the docker container:
nvidia-docker run --ipc=host amsword/setup:py38pt19u20cu11 sleep infinity
docker ps  # get the docker container ID
docker exec -it container_id /bin/bash  # attach inside the docker container
# run all other inference commands from there
The repo shows the key code path of constructing the network input with transformations and forward/backward. The code can be plugged into any trainer easily. Here is the example for the base model.
- Pretraining/captioning
python -m generativeimage2text.train -p "{'type': 'forward_backward_example', \
      'image_files': ['aux_data/images/1.jpg', 'aux_data/images/2.jpg'], \
      'captions': ['a couple of boats in a large body of water.', 'a view of a mountain with a tree'], \
}"
- VQA
python -m generativeimage2text.train -p "{'type': 'forward_backward_example', \
      'image_files': ['aux_data/images/1.jpg', 'aux_data/images/2.jpg'], \
      'prefixs': ['what is this?', 'how many trees?'], \
      'captions': ['several boats in a large body of water', '1'], \
}"
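Since the forward/backward example above is meant to be plugged into a trainer, here is a generic sketch of where it would sit in a plain PyTorch loop. The model, batches, and loss interface are hypothetical placeholders, not this repo's API.
# Generic PyTorch training loop sketch; `model(batch)` is assumed to return the loss.
import torch

def train_loop(model, batches, epochs=1, lr=1e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in batches:
            loss = model(batch)      # forward pass, as in the forward_backward_example
            optimizer.zero_grad()
            loss.backward()          # backward pass
            optimizer.step()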
- Save the file `LOC_synset_mapping.txt` from Kaggle under `aux_data/imagenet/`.
- Convert the WordNet IDs to readable names as follows:
python -m generativeimage2text.data_prepare -p "{'type': 'generate_imagenet_unique_names'}"
The input file is hard-coded as `./aux_data/imagenet/LOC_synset_mapping.txt` and the output file is `./aux_data/imagenet/imagenet_unique_readable_names.txt`.
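For reference, each line of LOC_synset_mapping.txt pairs a WordNet ID with comma-separated readable names (e.g. "n01440764 tench, Tinca tinca"). A minimal sketch of reading it, illustrative only and not the repo's generate_imagenet_unique_names logic, could be:
# Read "n01440764 tench, Tinca tinca"-style lines and keep the first readable name per ID.
mapping = {}
with open("aux_data/imagenet/LOC_synset_mapping.txt") as f:
    for line in f:
        wnid, names = line.rstrip("\n").split(" ", 1)
        mapping[wnid] = names.split(",")[0].strip()

print(len(mapping), "classes, e.g.", next(iter(mapping.items())))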
Please consider citing the following reference if it helps.
@article{wang2022git,
title={GIT: A Generative Image-to-text Transformer for Vision and Language},
author={Wang, Jianfeng and Yang, Zhengyuan and Hu, Xiaowei and Li, Linjie and Lin, Kevin and Gan, Zhe and Liu, Zicheng and Liu, Ce and Wang, Lijuan},
journal={arXiv preprint arXiv:2205.14100},
year={2022}
}
Part of the code is based on transformers, clip, maskrcnn-benchmark, oscar, virtex.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.