Evaluation Competition

The Evaluation Competition seeks to motivate the participation of undergraduate and graduate students, as well as of teachers and HCI practitioners who contribute to the students' training. The competition has an essentially practical nature: participants evaluate a computer system and thus apply their theoretical knowledge of HCI evaluation methods.

COMPETITION THEME

At IHC 2021, the Evaluation Competition explores the evaluation of interactive computational artefacts and their relationship with History and its global effects from past, present, and future perspectives. The competition encompasses technologies related to the historical legacy of the past, as well as technologies involved in “making and telling history” or “retelling history” in today’s striking episodes. It also encourages a more critical approach to the construction of history, opening the evaluation to applications and technologies that help in understanding History and its repercussions. The theme of the Evaluation Competition – IHC 2021 is in line with the call for the Evaluation Competition – IHC 2020, which was suspended, so works initiated under the previous call may be submitted.

Among the possible artefacts that could be the focus of the evaluation, from the point of view of technologies that help to “make, tell and retell history” nowadays, are social networks involved in situations and discussions around ‘fake news’ (in political, economic, social, cultural, and legal scenarios, among others) and collaborative technologies that support professional, educational, and daily activities in situations of isolation (e.g. during COVID-19). From the point of view of technologies that help in understanding the historical legacy of the past, evaluation teams could approach interactive technologies that gained great prominence during the social isolation imposed by the COVID-19 crisis and the closure of museums and cultural spaces to the public. Possibilities include applications, websites, and virtual or augmented reality environments of museums and cultural spaces in Brazil and around the world (e.g. Museu Afro Brasil and Museu do Amanhã in Brazil, the AR RATP Museum in France, the VR British Museum in the UK, and the Museu Nacional do Azulejo in Portugal), as well as educational games and serious games related to the theme.

From a methodological point of view, in this evaluation competition we challenge students to go beyond the evaluation of classical quality-in-use aspects of systems (e.g. usability, accessibility, and communicability) and to include aspects related to user experience (UX). According to ISO 9241-210, user experience addresses “a person’s perceptions and responses that result from the use or anticipated use of a product, system or service” and “includes all the users’ emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviours and accomplishments that occur before, during and after use”. In many interactive technologies related to the theme of this competition, the user experience plays an important role, including the users’ emotions, sensations, and other UX aspects, and not only aspects related to operating the system and performing tasks with it. Students are free to choose the methods they consider appropriate to analyze UX aspects.

We consider it important to disseminate UX evaluation methods that actually cover aspects of UX beyond classic aspects such as usability, accessibility, and communicability. Despite the wide dissemination of the term “UX” and its adoption in many companies, many evaluations carried out in commercial settings and labeled “UX evaluations” do not incorporate broader aspects such as perceptions, reactions, emotions, and beliefs. The competition therefore puts this challenge forward to demonstrate the effective use of such methods, as examples that can be replicated in commercial environments.

Within the scope of the competition, evaluation teams can choose to evaluate different aspects of UX with potential relevance. For example, teams could examine reactions related to beliefs and perceptions about interactive systems involving news, emotions that emerge from interacting with content of real or ‘fake’ historical basis, feelings arising from the use of technology in situations related to historical facts, or the perceptions and reactions that come from interacting with artefacts linked to the cultural legacy of the past.

For graduate students, we launch an “extra” challenge: to explore UX evaluation methods from the recent research literature in their evaluation venture. In academia, since the publication of the paper “User experience – a research agenda” by Hassenzahl and Tractinsky in 2006, with more than 3,000 citations as of March 2021, several research endeavours have been dedicated to investigating methods to evaluate aspects of user experience. Despite the advances, this field still presents more open research challenges than the use of techniques consolidated decades ago to evaluate other quality-in-use aspects, such as usability. Graduate students are invited to review the recent HCI literature in search of methods applicable to the chosen type of application. The literature contains specific proposals for evaluating user experience with general-purpose applications, games, and cultural heritage interactive applications. Teams have complete freedom to carry out their literature research and choose the best method(s) to use. We suggest below some examples of articles with related proposals, which the teams do not need to adopt in their evaluations (a minimal scoring sketch follows these references):

– IJSSELSTEIJN, W. A.; DE KORT, Y. A. W.; POELS, K. The Game Experience Questionnaire. Eindhoven: Technische Universiteit Eindhoven, p. 3-9, 2013.

– PETRIE, H.; OTHMAN, M. K.; POWER, C. Smartphone Guide Technology in Cultural Spaces: Measuring Visitor Experience with an iPhone Multimedia Guide in Shakespeare’s Church. International Journal of Human–Computer Interaction, v. 33, n. 12, p. 973-983, 2017.

– BRÜHLMANN, F.; VOLLENWYDER, B.; OPWIS, K.; MEKLER, E. D. Measuring the “Why” of Interaction: Development and Validation of the User Motivation Inventory (UMI). In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018. p. 1-13.
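For teams considering questionnaire-based instruments such as the Game Experience Questionnaire listed above, scoring typically amounts to averaging the Likert items that load on each component of the instrument. The Python sketch below illustrates this aggregation; the component names, the item-to-component mapping, and the responses are hypothetical placeholders rather than the official GEQ scoring key, which teams should take from the instrument’s manual.

```python
# Illustrative scoring of a questionnaire-based UX instrument.
# The component-to-item mapping is a hypothetical placeholder; consult
# the chosen instrument's manual (e.g., the GEQ) for the official key.
from statistics import mean

# Hypothetical mapping: component name -> 0-based indices of its items.
COMPONENTS = {
    "Immersion": [0, 3, 6],
    "Flow": [1, 4, 7],
    "Positive Affect": [2, 5, 8],
}

def score_responses(responses: list[int]) -> dict[str, float]:
    """Average the items that load on each component.

    `responses` holds one participant's answers, in questionnaire
    order, on a 0-4 Likert scale.
    """
    return {
        component: mean(responses[i] for i in items)
        for component, items in COMPONENTS.items()
    }

# One hypothetical participant: nine items answered on a 0-4 scale.
participant = [3, 2, 4, 3, 1, 4, 2, 2, 3]
for component, value in score_responses(participant).items():
    print(f"{component}: {value:.2f}")
```

Aggregating such per-participant scores (e.g. reporting the mean and dispersion per component across participants) gives a compact quantitative complement to the qualitative findings in the evaluation report.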

EVALUATION

Object and Focus: Assess the quality in use (usability, communicability, accessibility, etc.) and the user experience of an interactive computational artefact that approaches History and its global effects from past, present, and future perspectives. The evaluated artefacts can be for individual (single-user) or collaborative use and can be implemented in different interactive forms, such as software, hardware, environments, applications, games, or other computer systems. Proposals evaluating computational artefacts that enhance social collaboration in the field of History are also welcome.

Methods: Teams are free to choose the method(s) they will use to evaluate the computational artefact. Teams of graduate students are encouraged to use recent UX evaluation methods proposed in research papers in the HCI literature.

Product: After the evaluation, each team must generate a report containing:
– Name(s) of employed evaluation method(s);
– Justification of the choice of method(s);
– Description of the chosen platform(s): brand, model, operating system;
– Description of the evaluation process (part of the interface under consideration, evaluated features, procedures involved);
– Evaluation results;
– Detected problems: description, location and context of occurrence, including justification;
– Closing remarks: conclusion and other observations;
– The team’s analysis of how the employed method(s) were able (or not) to identify aspects related to the user experience of the application(s).

CODE OF ETHICS

We abide by the ACM Code of Ethics. We expect that all submissions, underlying research, and behavior during the review process and conference comply with the principles and responsibilities outlined by the code.

SUBMISSIONS

Submissions of reports should be anonymous and have up to 10 pages, following the ACM Master Article Template (SIGCHI). Authors should submit their paper in PDF format through the JEMS system.

TEAM REQUIREMENTS

Two kinds of teams are allowed to participate in the competition:

Undergraduate students: between 3 and 5 undergraduate students;
Graduate students: between 2 and 4 graduate students.

All undergraduate students’ teams must have one or two advisors associated with a higher education institution, research institution, or private company related to the HCI field. A graduate student can be one of the advisors, provided they help supervise the undergraduate students’ team together with a professor-advisor.

The graduate students’ teams must also have one or two advisors associated with a higher education institution or research institution. In this case, the advisors must be the supervisor and co-supervisor of the graduate students.

Mixed teams with undergraduate and graduate students are not allowed. Teams that do not follow these requirements will be automatically excluded from the selection process.

INSTRUCTIONS FOR PARTICIPATION

To participate in the competition, each team should follow three steps:
– Run the evaluation;
– Submit the evaluation report;
– If selected by the scientific committee, present the evaluation report during the Evaluation Competition session at IHC 2021, which will be held online.

EVALUATION OF SUBMITTED REPORTS

Each report will be evaluated by reviewers with experience in HCI or evaluation of interactive systems. 

The following criteria will be considered:
– Adequacy of the system/application to the Evaluation Competition theme;
– Readability, organization and presentation of the evaluation report;
– Clear definition of the evaluation scope and purpose;
– Adequacy of the chosen method(s) and evaluation procedure(s) for the evaluation goal;
– Quality of results considering the established scope and purpose;
– Consideration of ethical issues involved in the execution of the evaluation (in the case of using methods involving users or people outside the team);
– Quality of the critical analysis of the method’s ability to reveal (or not) potential problems in the use of the system.

SELECTION

Three (3) finalists in each category (undergraduate and graduate students) will be selected for a short oral presentation, followed by questions from a board of evaluators during the symposium, which will be held online on October 18-22, 2021. The registration and presence of at least one member of each finalist team are required.

PRESENTATION

Paper presentations during the event will be given via pre-recorded videos, hosted on the conference platform and later archived on the IHC 2021 YouTube channel. However, the discussions will be synchronous; hence, the participation of at least one of the authors in the session is mandatory.

The track coordination will email the authors with the details, including the maximum video length, the video submission format, and other points relevant to the track.

The Resources and Accessibility page lists resources that can be useful in preparing videos, including slide templates and recording templates for OBS Studio. When preparing the presentation video, follow the recommendations in “How can I make my pre-recorded presentation more accessible?” (available in Portuguese only). Note that, due to the virtual format of IHC 2021, attendees must be able to understand presentations using only vision or only audio. Therefore, all presentations must include subtitles. We have prepared a tutorial to help you with the subtitling process.

AWARDS

Each finalist team that presents its work at IHC 2021 will receive a Certificate of Recognition. The winning team will also receive a prize (to be defined by the organization).

SUPPORT FOR FINALISTS

The organization will try to provide free registration for one member of each team selected for oral presentation in this edition. However, authors are initially responsible for securing their registration and presence at the symposium with their own resources.

PUBLICATION

The reports of the finalist teams will be published as extended abstracts in the IHC 2021 proceedings.

SUMMARY

System to be evaluated: computational artefacts that approach History and its global effects from past, present, and future perspectives;

Criteria to be evaluated: quality in use and/or user experience;

Methods to be used: to be selected by the teams;

Team size:

 – 3 to 5 undergraduate students + 1 to 2 advisors (professors, researchers, graduate students, or practitioners) OR

 – 2 to 4 graduate students + 1 to 2 advisors (professors or researchers who must be the supervisor and co-supervisor of the graduate students).

CO-CHAIRS

Anna Beatriz dos Santos Marques (UFC/Russas) – beatriz.marques@ufc.br

Natasha Malveira Costa Valentim (UFPR) – natasha@inf.ufpr.br

IMPORTANT DEADLINES

Deadline for submission of reports: 27 June 2021 (extended from 13 June)
Notification of finalists: 09 August 2021
Deadline for final reports: 23 August 2021 (extended from 16 August)
Presentation: 18 October 2021 (to be confirmed)

ACCEPTED PAPERS

VLibras vs HandTalk: a comparative evaluation of tools for the Brazilian Sign Language (LIBRAS) using usability heuristics
Authors: Tatiana Tavares (Universidade Federal de Pelotas), Richard Aquino dos Santos (Universidade Federal de Pelotas), Pedro Augusto Marchand (Universidade Federal de Pelotas), Cássia Marigliano (Universidade Federal de Pelotas), Douglas Maliszewski (Universidade Federal de Pelotas)

Uma Análise da Interação Humano-Computador da Usabilidade do Aplicativo COBALTO
Authors: Alexandre Bender (Universidade Federal de Pelotas), Moniele Santos (Universidade Federal de Pelotas), Vinicius Borges (Universidade Federal de Pelotas)

Sobrevivendo no Sertão da Bahia do Século XIX: Uma Investigação de Usabilidade e Acessibilidade do Jogo Árida
Authors: Ludmilla Galvão (Universidade Federal do Paraná), Lucineide da Silva (Universidade Federal de Mato Grosso do Sul), João Cardoso (Universidade Federal do Paraná), Vicente Conceição Júnior (Universidade Federal do Paraná), Laura Sánchez García (Universidade Federal do Paraná)

Mecânicas de funcionamento de jogos educacionais e sua influência na Experiência do Usuário: uma análise comparativa
Authors: Elvis Leite da Silva (Instituto Federal de Educação, Ciência e Tecnologia de São Paulo), Fabio Pereira de Souza (Instituto Federal de Educação, Ciência e Tecnologia de São Paulo), Leonardo Henrique Vasconcelos (Instituto Federal de Educação, Ciência e Tecnologia de São Paulo), Mayara Giovana de Araujo (Instituto Federal de Educação, Ciência e Tecnologia de São Paulo), Eliana Moreira (Instituto Federal de São Paulo), Thiago Barcelos (Instituto Federal de São Paulo)

TECHNICAL SESSION

Accessibility

  1. Developing a Set of Design Patterns Specific for the Design of User Interfaces for Autistic Users
    Dayanne Gomes (UFMA), Nathasha Pinto (UFMA), Aurea Melo (UEA), Ivana Márcia Maia (IFMA), Anselmo Cardoso de Paiva (UFMA), Raimundo Barreto (UFAM), Davi Viana (UFMA), Luis Rivero (UFMA)
  2. Flying colors: Using color blindness simulations in the development of accessible mobile games
    Mateus Carneiro (UFC), Windson Viana (UFC), Rossana Andrade (UFC), Ticianne Darin (UFC)
  3. Image Descriptions’ Limitations for People with Visual Impairments: Where Are We and Where Are We Going?
    Alessandra Jandrey (PUC-RS), Duncan Ruiz (PUC-RS), Milene Silveira (PUC-RS)
  4. Making Design of Experiments (DOE) accessible for everyone: Prototype design and evaluation
    Fabiani de Souza (CPQD), Gabriela Vechini (UNICAMP), Graziella Bonadia (CPQD)
  5. The Windows 10’s Color Filter Feature as an Aid for Color Blind People in the Use of Websites
    Isa Maria de Paiva (UNIRIO), Sean Siqueira (UNIRIO), Simone Bacellar Leal Ferreira (UNIRIO)
  6. When just Ok, is not Ok – An Experimental Study through Sequential Chronological Cuts, with Prescriptive and Semantic Analyzes on the Dynamic Translation by VLibras Avatar
    André Silva (UNIRIO), Tatiane Militão de Sá (UFF), Ruan Diniz (PUC Campinas), Simone Bacellar Leal Ferreira (UNIRIO), Sean Siqueira (UNIRIO), Saulo Cabral Bourguignon (UFF)
  7. Evaluation of Assistive Technologies from the perspective of Usability, User Experience and Accessibility: a Systematic Mapping Study
    Tatiany Xavier de Godoi (UFPR), Guilherme Guerino (UEM), Natasha Valentim (UFPR)