Automatic Speech Recognition Challenge

Organized by vuhl

Automatic Speech Recognition Challenge - Overview

Overview

Welcome to the Automatic Speech Recognition (ASR) Challenge! This competition focuses on developing and evaluating speech recognition systems that can accurately transcribe Vietnamese speech into text.

Challenge Description

Automatic Speech Recognition is a technology that converts spoken language into written text. It has numerous applications, including voice assistants, transcription services, and accessibility tools.

In this challenge, participants will develop systems that:

  • Process Vietnamese speech audio files
  • Generate accurate text transcriptions
  • Handle various acoustic conditions and speaking styles

Challenge Phases

  1. Public Test Phase: Participants submit their transcriptions on the public test set to get feedback.
  2. Private Test Phase: Final evaluation on the private test set to determine the winners.

Evaluation

Systems will be evaluated using the Word Error Rate (WER) metric, which measures the minimum number of word edits (insertions, deletions, and substitutions) required to transform the system's output into the reference transcript, divided by the number of words in the reference. Lower WER values indicate better performance.

Contact

For questions or support, please use the competition forum or contact the organizers at [email protected], with a CC to [email protected] and [email protected].

Automatic Speech Recognition Challenge - Evaluation

Evaluation Criteria

Metric: Word Error Rate (WER)

The primary evaluation metric for this challenge is the Word Error Rate (WER), expressed as a percentage (%). WER measures how accurately your ASR system transcribes speech into text.

Understanding WER

Word Error Rate is calculated as:

WER = (S + D + I) / N × 100%

Where:

  • S: Number of substitutions (incorrect words)
  • D: Number of deletions (missing words)
  • I: Number of insertions (extra words)
  • N: Total number of words in the reference transcript

Lower WER values indicate better performance. A perfect system would have a WER of 0%, meaning the transcription exactly matches the reference.

We use the implementation from jiwer to calculate WER: https://github.com/jitsi/jiwer
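
For orientation, here is a minimal sketch of computing WER with jiwer; the exact options used in the organizers' scoring script are not specified on this page.

    # pip install jiwer
    import jiwer

    reference = "the quick brown fox"
    hypothesis = "the quick brown fox jumps"

    # jiwer.wer returns the error rate as a fraction; multiply by 100
    # to express it as a percentage. Here: 1 insertion / 4 reference
    # words = 25.0%.
    print(jiwer.wer(reference, hypothesis) * 100)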

Submission Format

Your submission should be a text file named transcripts.txt containing one transcription per audio file. Each line must contain exactly one field, the transcript, and the lines must appear in the same order as the provided test list.

For example, if the first line of the test list is "audio1.wav", then the first line in your submitted prediction file should be the prediction for that audio.
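
As an illustration, the snippet below writes a correctly ordered transcripts.txt. The file name test_list.txt and the transcribe() function are hypothetical stand-ins for the provided test list and your own ASR system.

    # NOTE: test_list.txt and transcribe() are placeholders, not part of
    # the official challenge materials.
    def transcribe(audio_path: str) -> str:
        return "xin chào"  # replace with your ASR model's output

    with open("test_list.txt", encoding="utf-8") as f:
        audio_files = [line.strip() for line in f if line.strip()]

    with open("transcripts.txt", "w", encoding="utf-8") as out:
        for audio_path in audio_files:
            # One transcript per line, in the same order as the test list.
            out.write(transcribe(audio_path) + "\n")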

Text Normalization

Before calculating WER, the following text normalization steps are applied to both the reference and hypothesis transcriptions (a minimal sketch follows the list):

  • Converting to lowercase
  • Removing extra spaces
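
A sketch of this normalization in Python is shown below; the organizers' exact implementation may differ in details such as punctuation handling, which is not described above.

    def normalize(text: str) -> str:
        # Lowercase, then collapse runs of whitespace into single spaces.
        return " ".join(text.lower().split())

    print(normalize("Xin  chào   Việt Nam "))  # -> "xin chào việt nam"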

Evaluation Process

The evaluation process works as follows (a sketch of the scoring loop appears after the list):

  1. Your submitted transcriptions are compared with the ground truth transcriptions.

  2. For each audio file, the WER is calculated between your transcription and the reference.

  3. The final score is the average WER across all audio files.

  4. The WER percentage is reported on the leaderboard (lower is better).
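
Putting these pieces together, here is a sketch of the scoring loop under the assumptions above; references.txt is a hypothetical line-aligned file of ground-truth transcripts, and jiwer supplies the per-file WER.

    import jiwer

    def normalize(text: str) -> str:
        return " ".join(text.lower().split())

    # Both files are assumed line-aligned with the test list: line i of
    # each file corresponds to the i-th audio file.
    with open("references.txt", encoding="utf-8") as f:
        references = [normalize(line) for line in f]
    with open("transcripts.txt", encoding="utf-8") as f:
        hypotheses = [normalize(line) for line in f]

    # Per-file WER, then the unweighted average across all audio files.
    per_file = [jiwer.wer(ref, hyp) for ref, hyp in zip(references, hypotheses)]
    print(f"Average WER: {sum(per_file) / len(per_file) * 100:.2f}%")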

Example

For example, if the reference transcription is:

xin chào việt nam

And your system's transcription is:

xin chào việt nam hôm nay

The WER calculation would be:

  • Substitutions (S): 0
  • Deletions (D): 0
  • Insertions (I): 2 (the words "hôm" and "nay")
  • Reference length (N): 4
  • WER = (0 + 0 + 2) / 4 × 100% = 50%
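
This example can be reproduced with jiwer, which returns the error rate as a fraction:

    import jiwer

    # 2 insertions over a 4-word reference -> 0.5, i.e. 50%
    print(jiwer.wer("xin chào việt nam", "xin chào việt nam hôm nay"))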

Ranking

Participants will be ranked based on their WER score, with lower values being better. In case of ties, earlier submissions will be ranked higher.

Automatic Speech Recognition Challenge - Terms and Conditions

Terms and Conditions

Participation Rules

  1. Participation in this challenge is open to individuals and teams worldwide.
  2. Teams can consist of up to 5 members.
  3. Each participant may be a member of only one team.
  4. Participants must register for the challenge before making submissions.
  5. The organizers reserve the right to disqualify any participant who violates these terms or engages in unethical behavior.

Submission Guidelines

  1. All submissions must be made through the challenge platform.
  2. Participants are limited to the maximum number of submissions specified for each phase.
  3. Submissions must follow the required format as described in the evaluation guidelines.
  4. Participants must not attempt to reverse-engineer the test set or ground truth data.
  5. Manual transcription of test audio is strictly prohibited.

Data Usage

  1. The dataset provided for this challenge may only be used for participating in this competition.
  2. Participants are allowed to use additional external data for training their models, but this must be clearly documented.
  3. Redistribution of the challenge dataset is strictly prohibited.
  4. After the competition, participants may use the dataset for research purposes and must cite the dataset appropriately.

Intellectual Property

  1. Participants retain ownership of their submissions and the intellectual property rights to their methods.
  2. By submitting to the challenge, participants grant the organizers a non-exclusive, worldwide, royalty-free license to use their submissions for evaluating and presenting the challenge results.
  3. Participants agree that the organizers may publish their team name, member names, and performance results.

Publication and Recognition

  1. The organizers plan to publish a summary of the challenge results, including the methods used by top-performing teams.
  2. Top-performing teams may be invited to present their methods at a related workshop or conference.
  3. Participants are encouraged to publish their methods, citing the challenge appropriately.

Privacy

  1. Personal information provided during registration will be used solely for the purposes of the challenge and will not be shared with third parties.
  2. Participants' names and affiliations may be published on the challenge leaderboard and in challenge-related publications.

Disclaimer

The challenge organizers reserve the right to modify these terms and conditions at any time. Participants will be notified of any changes. The decisions of the challenge organizers regarding any aspect of the competition are final.

Contact

For questions or clarifications regarding these terms, please contact the challenge organizers at [email protected], with a CC to [email protected] and [email protected].

Public Test

Start: April 1, 2025, midnight UTC

Private Test

Start: Sept. 1, 2025, midnight UTC

Competition Ends

Dec. 31, 2025, midnight UTC
