This is a gentle reminder that pre-trained models may be used, but you must register them before 15/10/2021.
To register, please list in this thread the pre-trained models used by your team, with details such as the model version/link.
Please also remember to submit the user agreement form by 20/10/2021.
For more information about the important dates and instructions, please refer to the challenge's website: https://vlsp.org.vn/vlsp2021/eval/vieCap4H
If you have any inquiries, feel free to contact us privately at [email protected] or publicly at https://groups.google.com/g/viecap4h-organizers.
VLSP 2021 - vieCap4H Challenge Organizers
My team is using a model pre-trained on ImageNet.
Posted by: coder_phuho @ Oct. 10, 2021, 10:21 a.m.
This is the list of pre-trained models we used for this competition:
- Faster R-CNN with a ResNet-101 backbone, pre-trained on the Visual Genome dataset
- ResNet-152 pre-trained on the COCO dataset
- ResNet-101 pre-trained on the COCO dataset
Below we list our pre-trained models:
1. ResNeXt-101 and ResNeXt-152 for extracting image features.
2. vinai/phobert-base, vinai/phobert-large, vinai/bartpho-syllable, vinai/bartpho-word for extracting sequence features.
3. VnCoreNLP and Trankit for text segmentation.
In this competition we used:
- Faster R-CNN pre-trained on Visual Genome
- bert-base-multilingual, vinai/phobert-base
- Style Augmented Translation model
Here is the list of pre-trained models my team used:
- Faster R-CNN pre-trained on the Visual Genome dataset
- ResNet-101 pre-trained on the COCO and ImageNet datasets
- vinai/phobert-base, VnCoreNLP
I would like to list the pre-trained models we used in more detail than before:
Swin Transformer, ViT, and EfficientNet pre-trained on ImageNet.
word2vec_vi_words_300dims pretrained model
cc.vi.300.vec.gz pre-trained word vectors
The pretrained models we used are: