VoicePrivacy 2022 mailing list

Subscribe to the VoicePrivacy 2022 mailing list by sending an email to:

sympa@lists.voiceprivacychallenge.org

with “subscribe 2022” as the subject line. Successful subscriptions are confirmed automatically by return email.

To post messages to the mailing list itself, emails should be addressed to:

2022@lists.voiceprivacychallenge.org


Schedule

The following is a tentative schedule for VoicePrivacy 2022 and is subject to change. All specific times are Anywhere on Earth (AoE).

  • Release of evaluation plan: March 2022
  • Submission of challenge papers to the joint SPSC Symposium and VoicePrivacy Challenge workshop: 25th June 2022 (extended from 15th June)
  • Author notification for challenge papers: 25th July 2022 (extended)
  • Early bird registration for Interspeech 2022*: 10th July 2022
  • Deadline for participants to submit system descriptions: 31st July 2022
  • Deadline for participants to submit objective evaluation results and anonymized data for primary systems: 1st August 2022
  • Deadline for participants to submit objective evaluation results and anonymized data for secondary systems and training data for primary systems: 5th August 2022
  • Final paper upload: 5th September 2022
  • Joint SPSC Symposium and VoicePrivacy Challenge workshop: 23rd–24th September 2022

Registration

 

Registration for the VoicePrivacy Challenge

Participants are requested to register for the evaluation. Registration should be performed only once for each participating entity, by sending an email to:

organisers@lists.voiceprivacychallenge.org

with “VoicePrivacy 2022 registration” as the subject line.

The mail body should include:

(i) the name of the team; (ii) the name of the contact person; (iii) their affiliation; (iv) their country; (v) their status (academic/nonacademic).

You will receive a confirmation email within ~24 hours of successful registration.
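For illustration, a minimal sketch of such a registration email built with Python's standard email library; every field value below is a hypothetical placeholder:

    from email.message import EmailMessage

    # Hypothetical registration email; all field values are placeholders.
    msg = EmailMessage()
    msg["To"] = "organisers@lists.voiceprivacychallenge.org"
    msg["Subject"] = "VoicePrivacy 2022 registration"
    msg.set_content(
        "Team name: AnonTeam\n"              # (i)
        "Contact person: Jane Doe\n"         # (ii)
        "Affiliation: Example University\n"  # (iii)
        "Country: France\n"                  # (iv)
        "Status: academic\n"                 # (v)
    )
    print(msg)  # send with your usual mail client or an SMTP server of your choice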

 

Registration for the workshop

Registration for the workshop can be performed using the Interspeech registration system: https://www.interspeech2022.org/registration/.

The event is open to everyone, regardless of their contribution to the VoicePrivacy challenge or the SPSC symposium. In addition, all VoicePrivacy challenge participants who submit results and system descriptions by 31st July are encouraged to present their work during the event (even if they did not submit papers to the SPSC symposium).

Submission of results

 

Each participant is strongly encouraged to make multiple submissions corresponding to different EER thresholds (see Section 7 of the evaluation plan). For each threshold, participants may submit several systems and should designate exactly one of them as primary (i.e. primary.1, primary.2, primary.3, primary.4) and the others as contrastive (i.e. contrastive.1.1, contrastive.1.2, …, contrastive.4.1, …). Only primary systems will be used for subjective evaluation. For primary systems, participants should also submit the anonymized training data that they used to train the ASV and ASR evaluation models.
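As a purely illustrative sketch of this naming scheme (the number of contrastive systems per threshold below is a hypothetical choice, not a requirement), the expected directory labels could be enumerated as follows:

    def system_labels(n_thresholds=4, n_contrastive_per_threshold=2):
        # One primary system per EER threshold, plus optional contrastive systems.
        labels = []
        for k in range(1, n_thresholds + 1):
            labels.append(f"primary.{k}")
            labels.extend(f"contrastive.{k}.{j}"
                          for j in range(1, n_contrastive_per_threshold + 1))
        return labels

    print(system_labels())
    # ['primary.1', 'contrastive.1.1', 'contrastive.1.2', 'primary.2', ...]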

Submissions consist of two parts:

  • results, scores and anonymized speech data;
  • system descriptions.

1. Results, scores, and anonymized speech data

Deadline: 1st August 2022, 23:59 Anywhere on Earth (AoE)*

*Primary systems (scores and anonymized dev and test data) should be submitted by this date; the deadline to upload secondary systems and the anonymized training data for primary systems is 5th August 2022.

Submission: a gzipped TAR archive uploaded to the SFTP challenge server voiceprivacychallenge.univ-avignon.fr. Each registered team will receive an email containing a personal login and password to upload data. The name of the archive file should correspond to the team name declared at registration.
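As an informal sketch, the archive can be created with Python's tarfile module, assuming the team directory has already been laid out as described under "Archive structure" below (the team name here is a placeholder):

    import tarfile
    from pathlib import Path

    TEAM_NAME = "AnonTeam"      # placeholder: use the team name declared at registration
    team_dir = Path(TEAM_NAME)  # directory laid out as in "Archive structure" below

    # Create <team name>.tar.gz containing the whole submission directory.
    with tarfile.open(f"{TEAM_NAME}.tar.gz", "w:gz") as tar:
        tar.add(team_dir, arcname=TEAM_NAME)

    # Upload the archive with any SFTP client, using the personal login and
    # password received by email, e.g.:
    #   sftp <login>@voiceprivacychallenge.univ-avignon.fr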

Archive structure: the archive should include the directories primary.1, …, contrastive.1.1, contrastive.1.2, …, where each directory contains the full results of a run of the evaluation system, i.e. the two results directories with scores and metrics (exp/results-<date>-<time> and exp/results-<date>-<time>.orig) generated by the evaluation scripts.

Each directory should also contain the corresponding anonymized speech data (wav files, 16 kHz, with the same names as in the original corpus) generated for the dev and test datasets. Wav files should be submitted in 16-bit signed integer PCM format (a simple pre-upload check is sketched after the directory layout below). These data will be used by the challenge organizers for post-evaluation analyses and to perform subjective evaluation. Only primary systems will be considered in the subjective evaluation.

Primary system directories should also include the anonymized training data used to train the ASV and ASR evaluation models (train-clean-360_anon).

<TEAM NAME USED IN REGISTRATION>/
    primary.1/
        libri_dev/
        libri_test/
        vctk_dev/
        vctk_test/
        results-<date>-<time>/
        results-<date>-<time>.orig/
        train-clean-360_anon/
    contrastive.1.1/
        libri_dev/
        libri_test/
        vctk_dev/
        vctk_test/
        results-<date>-<time>/
        results-<date>-<time>.orig/
    contrastive.1.2/
        libri_dev/
        libri_test/
        vctk_dev/
        vctk_test/
        results-<date>-<time>/
        results-<date>-<time>.orig/
    ...
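Before uploading, a quick sanity check along the following lines may help catch missing dataset directories or wrongly formatted wav files. This is only an informal sketch (the team directory name is a placeholder); it does not inspect the results directories:

    import wave
    from pathlib import Path

    TEAM_DIR = Path("AnonTeam")  # placeholder: team directory prepared for upload
    DATASETS = ["libri_dev", "libri_test", "vctk_dev", "vctk_test"]

    def check_system_dir(system_dir: Path) -> None:
        # The anonymized dev and test datasets must be present for every system.
        for name in DATASETS:
            if not (system_dir / name).is_dir():
                print(f"missing dataset directory: {system_dir / name}")
        # Wav files must be 16 kHz, 16-bit signed integer PCM.
        for wav_path in system_dir.rglob("*.wav"):
            try:
                with wave.open(str(wav_path), "rb") as w:
                    if w.getframerate() != 16000 or w.getsampwidth() != 2:
                        print(f"unexpected format: {wav_path} "
                              f"({w.getframerate()} Hz, {8 * w.getsampwidth()}-bit)")
            except wave.Error as err:
                print(f"not a PCM wav file: {wav_path} ({err})")

    for system_dir in sorted(TEAM_DIR.iterdir()):
        if system_dir.is_dir():
            check_system_dir(system_dir)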

2. System descriptions

Deadline: 31st July 2022, 23:59 Anywhere on Earth (AoE)*

All teams that submit results should also submit system descriptions by email to organisers@lists.voiceprivacychallenge.org. System descriptions should be prepared using the Interspeech 2022 paper template (https://interspeech2022.org/files/IS2022_paper_kit.zip) and should be 2-6 pages in length. Descriptions should be provided for all submitted systems, both primary and contrastive, and should be clearly labelled and identifiable.

Participants are requested to report results in a format consistent with that in the challenge evaluation plan.