The Hypertext panel is discussing the evaluation of hypertext. The panelists are Elaine Toms, Steve Szigeti, Mark Chignell, and Peter Brusilovsky, moderated by Joan Cherry. The first panelist, Elaine Toms, provocatively claims that the rest of the panelists do not know how to do evaluation and that evaluation is not needed. The second panelist, Steve Szigeti, argues that evaluation is necessary, that removing the user from the evaluation process is problematic, and that hypertext research has to consider the user. Qualitative research takes place in natural settings, focuses on the user's perceptions, and acknowledges the bias that all researchers bring to their work. Qualitative research is messy, and evaluating hypertext has to take this uncertainty into account, which favors a qualitative approach. The third panelist, Mark Chignell, says we have to be careful about the evaluation of hypertext. According to Mark, the proof is in the pudding: a system that is not well built will not be well used, and that itself is a form of evaluation. Qualitative tools are too blunt for the job. The last panelist is Peter Brusilovsky. Peter's earlier impression, when he was in Russia, was to ask why we need to evaluate at all; after attending a CHI conference, his position changed. We shouldn't ignore evaluation, but it depends. There are so many conferences and workshops that when you publish in archival media and journals, you are asked to show the value of your work. Evaluation should not be done for its own sake; you have to think and be creative about what exactly your argument is, and the appropriate evaluation will differ depending on it.
Now the panelists are debating each other. According to Mark, numbers are misused; Peter counters that if good numbers are used, the evaluation can be good. Abby Goodrum from the audience asks a question about evaluation in papers, suggesting that we should have conference venues for broader thinking, a "nibble and spit" for new and bright ideas. Mark proposes a grand jury of professionals to judge papers, instead of just grad students. Markus Strohmaier asks a question about rigor versus relevance in the peer review of papers. Alvin Chin asks about the possibility of creating a conference without proceedings, one that encourages people to submit and present their work but not publish it, so they are not pressured into doing evaluation and new, bright ideas are encouraged. According to Peter, we do not have to do evaluation with numbers. From Mark: we need evaluation frameworks for hypertext that are different from those of psychology or human-computer interaction.
The moderator, Joan Cherry, cites a CHI 2008 paper by Greenberg and Buxton: 70% of usability papers were quantitative and 20% were qualitative. Greenberg and Buxton argued that not all papers should require evaluation and that venues should be open to all types of evaluation; even reflections could serve as evaluation. Joan says that we should think about what kind of evaluation we want to do and what students want to do; students can push their teachers and supervisors to be more accommodating; reviewers need to be more open; there could be special conferences for wild ideas; and perhaps a track with some appropriate title would encourage people to step outside conventions.