Sub-group XXXX

WG2 Task Force
“Crowdsourcing”
Tobias Hossfeld, Patrick le Callet
https://www3.informatik.uni-wuerzburg.de/qoewiki/qualinet:crowd:patrasmeeting2013
WG2 Mechanisms and Models
Agenda (60 min)
• Current Status and activities, Tobias Hossfeld, Patrick le Callet (10 min)
• Gamification and Incentives, Elena Simperl (30 min): talk was shifted to the Gaming TF on Wed, 25/09/2013
• Crowdsourcing: Best practices and challenges, Matthias Hirth, Tobias Hossfeld (15 min)
• Results from the Joint Qualinet Crowdsourcing Experiment (JOC), Judith Redi (20 min)
• JOC: Discussions and next steps, Judith Redi (15 min)
WG2 TF "Crowdsourcing"
https://www3.informatik.uni-wuerzburg.de/qoewiki/qualinet:crowd
Achievements: 2nd Qualinet Summer School
"Crowdsourcing and QoE Assessment"
http://www.qualinet.eu/summerschoolPatras2013
Lectures
• Alessandro Bozzon: Human Computation and Games with a Purpose
• Elena Simperl: Incentives-driven technology design
• Katrien de Moor: QoE - What's in a name?
• Kjell Brunnström: Statistical methods for video quality assessment
• Matthias Hirth: Crowdsourcing best practices
• Patrick le Callet: Test Design - Methodologies and Contents
• Pedro Casas: An Introduction to Machine Learning in QoE
Group Work
1. HDR Images and Privacy (Tutor: Athanassios Skodras)
2. Incentives, Gamification, Social Context (Tutor: Katrien de Moor)
3. Reliability and Quality Assurance (Tutor: Pedro Casas)
4. 3D Video and TV (Tutor: Patrick le Callet)
5. Mobile Crowdsourcing and Eye Tracker (Tutor: Matthias Hirth)
6. Video Quality (Tutor: Kjell Brunnström)
Achievements: Joint Qualinet Crowdsourcing
Experiment (JOC)
• Joint Qualinet Crowdsourcing Experiment (JOC)
– What is the impact of the crowdsourcing environment on QoE results? How do the results differ from lab tests, e.g., regarding demographics, since crowdsourcing users are distributed worldwide and may have different expectations, usage habits, etc.?
– Which statistical methods are suitable for analyzing the data? Can data from lab and crowdsourcing tests be merged? How reliable are the users, and how consistent are their ratings?
• Crowdsourcing-based multimedia subjective evaluations: a case
study on image recognizability and aesthetic appeal
– comparing lab and crowd studies
– designed, conducted, analyzed, published jointly within the TF
• Participants:
– J. Redi, T. Hossfeld, P. Korshunov, F. Mazza, I. Povoa, C. Keimel.
• Accepted for publication at ACM CrowdMM 2013
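One way to make the lab-versus-crowd comparison raised above concrete is to correlate the mean opinion scores (MOS) obtained in both settings for the same stimuli. The following is a minimal sketch of that idea; all function names and data are illustrative and not taken from the JOC study:

```python
import numpy as np

def mos(ratings):
    """Mean opinion score per stimulus from a (raters x stimuli) matrix."""
    return np.asarray(ratings, dtype=float).mean(axis=0)

def pearson(x, y):
    """Pearson correlation between two MOS vectors."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Illustrative data: 3 lab raters and 4 crowd raters scoring 5 stimuli (1-5 scale)
lab = [[5, 4, 3, 2, 1],
       [5, 4, 3, 2, 2],
       [4, 4, 3, 1, 1]]
crowd = [[5, 5, 3, 2, 1],
         [4, 4, 2, 2, 1],
         [5, 3, 3, 1, 2],
         [4, 4, 3, 2, 1]]

# A high correlation suggests the crowd test reproduces the lab ranking,
# even if absolute MOS values shift between environments.
r = pearson(mos(lab), mos(crowd))
```

A scatter plot of lab MOS against crowd MOS, or a rank correlation such as Spearman's rho, would complement this check before attempting to merge the two data sets.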
Achievements: Support of CrowdMM 2013
• 2nd ACM Workshop on Crowdsourcing for Multimedia, Oct 21 - 25, 2013, Barcelona, Spain, http://crowdmm.org/
• Qualinet as official supporting project
• Crowdsourcing for Multimedia Ideas Competition
– organized by Tobias Hossfeld
– strong and successful participation from Qualinet
• Keynote by Daniel Gatica-Perez. "When the Crowd Watches the
Crowd: Understanding Impressions in Online Conversational Video"
Achievements: Joint Publications
• J. Redi, T. Hossfeld, P. Korshunov, F. Mazza, I. Povoa, C. Keimel: Crowdsourcing-based multimedia subjective evaluations: a case study on image recognizability and aesthetic appeal. ACM CrowdMM 2013 (→ talk by Judith)
• T. Hoßfeld, C. Keimel, M. Hirth, B. Gardlo, J. Habigt, K. Diepold, P. Tran-Gia: Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing. IEEE Transactions on Multimedia, accepted Sep 2013 (→ talk by Matthias)
• M. Varela, T. Mäki, L. Skorin-Kapov, T. Hoßfeld: Increasing Payments in Crowdsourcing: Don't look a gift horse in the mouth. PQS 2013
• Qualinet Newsletter, Vol. 3, 2013 also focuses on crowdsourcing:
– Klaus Diepold: "Crowdsourcing - a New Paradigm for Subjective Testing? Chances and challenges brought by placing subjective quality tests in the cloud."
– Tobias Hossfeld: short report on the Dagstuhl Seminar "Crowdsourcing: From Theory to Practice and Long-Term Perspectives"
Goals of this Task Force
Reflecting
• to identify the scientific challenges and problems for QoE assessment via crowdsourcing, but also its strengths and benefits,
• to derive a methodology and setup for crowdsourcing in QoE assessment,
• incentive design and implementation for QoE tests,
• to challenge the crowdsourcing QoE assessment approach against usual "lab" methodologies; comparison of QoE tests,
• to develop mechanisms and statistical approaches for identifying reliable ratings from remote crowdsourcing users,
• advanced statistical methods for the analysis of crowdsourcing results,
• to define requirements on crowdsourcing platforms for improved QoE assessment.
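As one hypothetical illustration of such reliability mechanisms: a simple screen flags workers whose ratings correlate poorly with the leave-one-out mean score of the remaining workers. The threshold and data below are illustrative only; real crowdtesting setups combine several checks (e.g., content questions, consistency tests, timing analysis):

```python
import numpy as np

def unreliable_raters(ratings, threshold=0.5):
    """Return indices of raters whose Pearson correlation with the
    leave-one-out MOS falls below `threshold` (illustrative value)."""
    R = np.asarray(ratings, dtype=float)               # shape: (raters, stimuli)
    flagged = []
    for i in range(R.shape[0]):
        others = np.delete(R, i, axis=0).mean(axis=0)  # MOS without rater i
        r = np.corrcoef(R[i], others)[0, 1]
        # nan (zero-variance ratings, e.g. all-identical answers) is also flagged
        if not np.isfinite(r) or r < threshold:
            flagged.append(i)
    return flagged

# Four consistent raters and one who clicked through at random
ratings = [[5, 4, 3, 2, 1],
           [5, 4, 3, 1, 1],
           [4, 4, 3, 2, 2],
           [5, 5, 3, 2, 1],
           [1, 5, 1, 5, 3]]   # inconsistent rater
```

Flagged ratings would typically be excluded (or down-weighted) before computing the final MOS, and the rejection rate itself is a useful indicator of campaign quality.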
Planning for next period
• Publications of ongoing work
• Publish outcome of the summer school group work!
• Shape focus in TF
– incentive design,
– reproducible crowdsourcing research,
– various apps, …
→ Continue successful activities and collaboration within Qualinet