Our website
https://vlsp.org.vn/cocosda2022/a-msv
Important dates
July 25th, 2022: Challenge announcement on the COCOSDA Conference webpage and other publicity channels. Registration opens.
September 7th, 2022: Common training dataset release
October 7th, 2022: Public test set release (maximum of 20 submissions per day)
October 15th, 2022: Private test set release (maximum of 3 submissions)
October 16th, 2022: Announcement of the top 3 teams, which will present at the conference
November 15th, 2022: Technical report submission
November 26th, 2022: Announcement of the final ranking and winners
Description
Speaker verification (SV) is the task of verifying whether an input utterance matches a claimed identity. Despite the rapid development of speaker verification on VoxCeleb and CN-Celeb, which contain only English and Chinese speech samples respectively, there is still very little research on methods for other languages, especially in low-resource scenarios. To advance speaker verification in Asian languages, the COCOSDA Multi-lingual Speaker Verification (MSV) Challenge 2022 has been designed to enable understanding and comparison of SV techniques on a common dataset, AMSV, which mainly covers Asian languages.
In the training dataset, speaker identities were mostly assigned based on the recording devices used in a community project. Although the data was pre-processed through a cleaning pipeline, some samples may still carry inaccurate labels, which raises another issue to be addressed in this challenge: weak supervision with weakly labeled, large-scale extra data.
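As background (not part of the challenge specification), a typical SV system maps each utterance to a fixed-dimensional speaker embedding and accepts a claimed identity when the cosine similarity between the enrollment and test embeddings exceeds a tuned threshold. The sketch below illustrates this decision rule; the embedding inputs, threshold value, and function name are all illustrative assumptions.

```python
import numpy as np

def verify(enroll_emb: np.ndarray, test_emb: np.ndarray, threshold: float = 0.5):
    """Accept the claimed identity if the cosine similarity between the
    enrollment and test speaker embeddings exceeds the threshold."""
    score = float(np.dot(enroll_emb, test_emb) /
                  (np.linalg.norm(enroll_emb) * np.linalg.norm(test_emb)))
    return score >= threshold, score
```

In practice the threshold is tuned on a development set; for this challenge only the raw cosine scores are submitted, and the threshold is swept when computing the EER.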
The COCOSDA MSV Challenge 2022 will feature three evaluation sub-tasks. Teams can participate in one, two, or all of the sub-tasks:
Task-01 (Seen languages): Participants are asked to verify whether utterance pairs come from the same speaker, for six languages seen in the training set: Vietnamese, French, Chinese, Hindi, Thai, and Japanese.
Task-02 (Unseen languages): Participants are asked to verify whether utterance pairs come from the same speaker, for three unseen languages: Mongolian, Arabic, and Indonesian.
Task-03 (Cross-lingual): The enrollment and test utterances come from the same speaker but in different languages: Uyghur-Chinese.
Basic Regulations:
Any use of external speaker data or pre-trained models is prohibited.
Participants may use non-speech data (noise samples, impulse responses, etc.) for augmentation, but must specify such data and share it with the other teams (a sketch is provided at the end of this section).
Each task has public and private test sets. Final standings for all tasks will be decided based on private test results.
Metric for evaluation: Equal Error Rate (EER) will be used as the metric for performance evaluation for the defined test scenarios.
Participating teams need to share their final SV systems, along with a write-up in o-cocosda format (https://vlsp.org.vn/cocosda2022/paper-submission), which should give a brief description of:
The datasets used, with appropriate citations.
The methods used to build the system.
A GitHub link with proper code structure and details.
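For reference, below is a minimal sketch of the permitted non-speech augmentation: additive noise mixed at a target signal-to-noise ratio, and reverberation via convolution with a room impulse response. It assumes waveforms as 1-D float NumPy arrays at a common sampling rate; the function names and parameters are illustrative, not prescribed by the challenge.

```python
import numpy as np
from scipy.signal import fftconvolve

def add_noise(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise sample into speech at the target SNR (in dB)."""
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[:len(speech)]        # loop noise to cover the utterance
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

def add_reverb(speech: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Simulate reverberation by convolving speech with a room impulse response."""
    rir = rir / (np.max(np.abs(rir)) + 1e-12)         # normalize the RIR peak
    return fftconvolve(speech, rir, mode="full")[:len(speech)]
```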
Contact Us
Please feel free to contact us if you have any questions via [email protected].
Evaluation data
Each task has public and private test sets. Final standings for all tasks will be decided based on private test results.
Private test sets will be made available for the three tasks: AMSV-T01, AMSV-T02, and AMSV-T03:
AMSV-T01: Includes utterance pairs from the six seen languages in the training set: Vietnamese, French, Chinese, Hindi, Thai, and Japanese.
AMSV-T02: Includes utterance pairs from three unseen languages: Mongolian, Arabic, and Indonesian.
AMSV-T03: Includes bilingual Uyghur-Chinese data for enrollment and test.
Evaluation metric
The performance of the models will be evaluated by the Equal Error Rate (EER), the operating point at which the False Acceptance Rate (FAR) equals the False Rejection Rate (FRR).
For the test sets of AMSV-T01 and AMSV-T02, which contain multiple languages, the final result will be the average of the EERs calculated independently for each language.
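As an illustration, the sketch below computes the EER from verification labels and scores with scikit-learn, and averages per-language EERs as described above. The (language, label, score) trial representation is an assumption made for illustration; the official scoring code may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """EER: the operating point where FAR (false acceptance rate)
    equals FRR (false rejection rate). Returns a fraction in [0, 1]."""
    far, tar, _ = roc_curve(labels, scores)    # FAR = FPR, TAR = TPR
    frr = 1.0 - tar
    idx = np.nanargmin(np.abs(frr - far))      # threshold where FAR ~= FRR
    return float((far[idx] + frr[idx]) / 2.0)  # average out the small gap

def average_eer(trials):
    """trials: iterable of (language, label, score); returns the mean of
    per-language EERs, as used for AMSV-T01 and AMSV-T02."""
    by_lang = {}
    for lang, label, score in trials:
        by_lang.setdefault(lang, ([], []))
        by_lang[lang][0].append(label)
        by_lang[lang][1].append(score)
    return float(np.mean([compute_eer(l, s) for l, s in by_lang.values()]))
```

Multiply by 100 to report the EER as a percentage.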
Submission Guidelines
Multiple submissions are allowed, up to the limit for each phase; the evaluation result is based on the submission with the lowest EER.
The submission file comprises a header, a set of testing pairs, and the cosine similarity output by the system for each pair. The pairs in the submission file must follow the same order as the provided pair list. Each line must contain three fields separated by tab characters in the following format:
enrollment_wav<TAB>test_wav<TAB>score<NEWLINE>
where
enrollment_wav - The enrollment utterance
test_wav - The test utterance
score - The cosine similarity
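A minimal sketch for producing a submission file in the format above. Only the three tab-separated fields per line are specified; the header's column names, the file name, and the helper's signature are assumptions.

```python
def write_submission(pairs, scores, path="submission.tsv"):
    """pairs: [(enrollment_wav, test_wav), ...] in the order of the released
    pair list; scores: one cosine similarity per pair, in the same order."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("enrollment_wav\ttest_wav\tscore\n")  # header (assumed column names)
        for (enroll, test), score in zip(pairs, scores):
            f.write(f"{enroll}\t{test}\t{score:.6f}\n")
```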
General rules
Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.
By submitting results to this competition, you consent to the public release of your scores at the Competition workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.
By joining the competition, you affirm and acknowledge that you agree to comply with applicable laws and regulations, that you will not infringe upon any copyrights, intellectual property, or patents of another party in the software you develop in the course of the competition, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.
Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.
You grant the Competition Organizer the right to use your winning submissions, and the source code and data created for and used to generate the submissions, for any purpose whatsoever and without further approval.
Eligibility
Each participant must create an AIHub account to submit their solution for the competition. Only one account per user is allowed.
The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.
The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.
Team
Participants are allowed to form teams.
You may not participate in more than one team. Each team member must be a single individual operating a separate AIHub account.
Submission
Maximum number of submissions in each phase: 20 per day on the public test set and 3 in total on the private test set (see Important dates).
Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.
Data
By downloading or by accessing the data provided by the Competition Organizer in any manner you agree to the following terms:
You will not distribute the data except for non-commercial, academic research purposes.
You will not distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the data provided by the Competition Organizer to any third party for any purpose.
The data must not be used for providing surveillance, analyses or research that isolates a group of individuals or any single individual for any unlawful or discriminatory purpose.
You accept full responsibility for your use of the data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the data.
Public test phases start: Oct. 7, 2022, midnight
Private test phases start: Oct. 16, 2022, 9 a.m.
Competition ends: Oct. 17, 2022, 2 p.m.
# | Username | Score
---|---|---
1 | underfitt | 2.641
2 | meoconxinhxan | 2.724
3 | SpeechWorld | 2.938