https://vlsp.org.vn/vlsp2023/eval/comon
Note: All deadlines are 11:59 PM UTC-00:00 (~6:59 AM the following day in Indochina Time (ICT), UTC+07:00).
The rapid growth of online shopping and e-commerce platforms has led to an explosion of product reviews. These reviews often contain valuable information about users’ opinions on various aspects of the products, including comparisons between different devices. Understanding comparative opinions from product reviews is crucial for manufacturers and consumers alike. Manufacturers can gain insights into the strengths and weaknesses of their products compared to competitors, while consumers can make more informed purchasing decisions based on these comparative insights. To facilitate this process, we propose the “ComOM - Comparative Opinion Mining from Vietnamese Product Reviews” shared task.
The goal of this shared task is to develop natural language processing models that can extract comparative opinions from product reviews. Each review contains comparative sentences expressing opinions on different aspects, comparing them in various ways. Participants are required to develop models that extract the following information, referred to as a "quintuple," from each comparative sentence: the subject, the object, the aspect, the predicate, and the comparison type label.
To contact us, email: [email protected] or [email protected].
To assess Comparative Element Extraction (CEE), we employ various evaluation metrics, including Precision, Recall, and F1 score for each element (subject, object, aspect, predicate, and comparison type label). Additionally, we calculate Micro- and Macro-averages of these scores.
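As an illustration of the CEE scores, here is a minimal sketch of per-element Precision/Recall/F1 with Micro and Macro averages, assuming exact matching and assuming gold and predicted elements are represented as sets of (sentence id, value) pairs; the helper names (`prf1`, `cee_scores`) are hypothetical and the official scorer may differ.

```python
# The five comparative elements scored in CEE.
ELEMENTS = ["subject", "object", "aspect", "predicate", "label"]

def prf1(tp, fp, fn):
    """Precision, recall, and F1 from raw counts (0.0 when undefined)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def cee_scores(gold, pred):
    """gold and pred map each element name to the set of
    (sentence_id, value) items annotated / predicted for that element."""
    counts = {
        e: {
            "tp": len(gold[e] & pred[e]),   # predictions matching gold exactly
            "fp": len(pred[e] - gold[e]),   # spurious predictions
            "fn": len(gold[e] - pred[e]),   # missed gold items
        }
        for e in ELEMENTS
    }
    per_element = {e: prf1(**counts[e]) for e in ELEMENTS}

    # Micro-average: pool raw counts over all five elements, then score once.
    micro = prf1(*(sum(counts[e][k] for e in ELEMENTS) for k in ("tp", "fp", "fn")))

    # Macro-average: unweighted mean of the per-element P, R, and F1.
    macro = tuple(sum(s[i] for s in per_element.values()) / len(ELEMENTS)
                  for i in range(3))
    return per_element, micro, macro
```

In this sketch, the Micro average pools the raw counts across the five elements before scoring, while the Macro average takes an unweighted mean of the per-element scores.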
In Tuple Evaluation (TE), the quintuple is assessed as a whole: we measure Precision, Recall, and F1 score over complete quintuples.
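A corresponding minimal sketch for the tuple level under exact matching, again assuming quintuples are represented as tuples that include a sentence identifier (the `tuple_prf1` helper is illustrative only; the official scorer may differ):

```python
def tuple_prf1(gold_tuples, pred_tuples):
    """Exact-match tuple evaluation: a predicted quintuple is a true
    positive only if all of its fields match a gold quintuple for the
    same sentence (tuples are assumed to start with a sentence id)."""
    gold, pred = set(gold_tuples), set(pred_tuples)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```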
Each metric name is composed of four parts:
{Matching Strategy}-{Level of Evaluation}-{Indication}-{Metric}
Matching Strategy: there are three matching strategies (E, P, B). All three are used for Comparative Element Extraction (CEE), while only Exact Match and Binary Match are applied to Tuple Evaluation (TE).
Level of Evaluation: there are three levels of evaluation (CEE, T4, T5).
Indication: indicates which element or comparison type is being evaluated. For CEE, there are six indications: subject (S), object (O), aspect (A), predicate (P), and the Micro and Macro averages. For T4, no indication is used; there is only one T4 score. For T5, there are eight comparison types (EQL, DIF, COM, COM+, COM-, SUP, SUP+, SUP-) and two averages (Micro, Macro).
Metric: three metrics are used: Precision, Recall, and F1 score (P, R, F1).
The valid combinations of these four parts are enumerated in the sketch below.
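To make the naming scheme concrete, the following sketch enumerates every metric name implied by the description above (the exact spelling used by the official scorer may differ):

```python
from itertools import product

STRATEGIES = ["E", "P", "B"]          # the three matching strategies
METRICS = ["P", "R", "F1"]            # Precision, Recall, F1
INDICATIONS = {
    "CEE": ["S", "O", "A", "P", "MICRO", "MACRO"],   # six indications
    "T4": [None],                                    # no indication for T4
    "T5": ["EQL", "DIF", "COM", "COM+", "COM-",
           "SUP", "SUP+", "SUP-", "MICRO", "MACRO"], # 8 types + 2 averages
}
# Only Exact Match and Binary Match are applied at the tuple levels.
LEVEL_STRATEGIES = {"CEE": STRATEGIES, "T4": ["E", "B"], "T5": ["E", "B"]}

names = []
for level, indications in INDICATIONS.items():
    for strategy, indication, metric in product(
            LEVEL_STRATEGIES[level], indications, METRICS):
        parts = [strategy, level] + ([indication] if indication else []) + [metric]
        names.append("-".join(parts))

print(len(names))                    # 54 (CEE) + 6 (T4) + 60 (T5) = 120
print("E-T5-MACRO-F1" in names)      # True: the final ranking metric
```

Running it prints 120 and True, consistent with the totals and the ranking metric stated below.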
In total, 120 metrics are evaluated (3 strategies × 6 indications × 3 scores = 54 for CEE, 2 × 1 × 3 = 6 for T4, and 2 × 10 × 3 = 60 for T5), of which only 15 appear on the leaderboard. For a comprehensive view of all metrics, download the scoring details from your submission via "Download output from scoring step".
The final ranking score is determined by the metric E-T5-MACRO-F1: the Macro-averaged F1 score for the full quintuple (Tuple of Five) under Exact Match.
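For completeness, a minimal sketch of that ranking metric, assuming each quintuple stores its comparison type label in the last field and that the macro average is taken over all eight comparison types (the official scorer may treat types with no gold or predicted quintuples differently):

```python
COMPARISON_TYPES = ["EQL", "DIF", "COM", "COM+", "COM-", "SUP", "SUP+", "SUP-"]

def exact_t5_macro_f1(gold_tuples, pred_tuples):
    """E-T5-MACRO-F1: macro-averaged F1 over comparison types, where a
    quintuple counts as correct only if all of its fields match exactly."""
    f1_scores = []
    for ctype in COMPARISON_TYPES:
        gold = {t for t in gold_tuples if t[-1] == ctype}
        pred = {t for t in pred_tuples if t[-1] == ctype}
        tp = len(gold & pred)
        p = tp / len(pred) if pred else 0.0
        r = tp / len(gold) if gold else 0.0
        f1_scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1_scores) / len(COMPARISON_TYPES)
```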
Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.
By submitting results to this competition, you consent to the public release of your scores at the Competition workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the final decision on metric choice and score values rests with the task organizers.
By joining the competition, you affirm and acknowledge that you will comply with applicable laws and regulations, that the software you develop in the course of the competition will not infringe any copyright, intellectual property, or patent of another party, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.
Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.
Participants grant the Competition Organizer the right to use their winning submissions, together with the source code and data created for and used to generate those submissions, for any purpose whatsoever and without further approval.
Each participant must create an AIHub account to submit their solution for the competition. Only one account per user is allowed.
The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.
The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.
Participants are allowed to form teams.
You may not participate in more than one team. Each team member must be a single individual operating a separate AIHub account.
Phases:
Start: Oct. 15, 2023, midnight
Start: Nov. 1, 2023, midnight
Start: Nov. 4, 2023, midnight
Deadline: Nov. 3, 2023, 11:59 p.m.
Leaderboard:

| # | Username | Score |
|---|---|---|
| 1 | pthutrang513 | 0.3384 |
| 2 | ThuyNT03 | 0.2578 |
| 3 | thindang | 0.2373 |