The first panel discussion is about evaluation in hypertext research. The panelists are Elaine Toms, Steve Szigeti, Mark Chignell, and Peter Brusilovsky, moderated by Joan Cherry. The first panelist, Elaine Toms, argues that the rest of the panelists do not know how to do evaluation and that evaluation is not needed. The second panelist, Steve Szigeti, counters that evaluation is necessary: removing the user from the evaluation process is problematic, and hypertext research has to consider the user. Qualitative research brings in the natural setting, focuses on the user's perceptions, and recognizes the bias that all researchers bring into the research. Qualitative research is messy, and evaluating hypertext has to take that uncertainty into account, which argues for a qualitative approach. The third panelist, Mark Chignell, says we have to be careful about evaluating hypertext. According to Mark, the proof is in the pudding: without a rigorous procedure, what is built will not be well evaluated, and qualitative tools are too blunt for the job. The last panelist is Peter Brusilovsky. Peter used to ask why we need to evaluate at all, his impression back when he worked in Russia, but his position changed after attending the CHI conference. We shouldn't ignore evaluation, but it depends: with so many conferences and workshops, once you start publishing in archival media and journals, you have to show the value of your work. Evaluation should not be done for its own sake; you have to think and be creative about what exactly your argument is, because the appropriate evaluation depends on it.
The panelists then debated each other. According to Mark, numbers are misused; Peter countered that if good numbers are used, the evaluation can be good. Abby Goodrum from the audience asked about evaluation in papers, suggesting we should have conference venues for broader thinking, a place to "nibble and spit" on new and bright ideas. Mark proposed a grand jury of professionals to judge papers instead of just grad students. Markus Strohmaier asked about rigor vs. relevance in peer review. Alvin Chin asked about the possibility of a conference without proceedings, encouraging people to submit and present their work without publishing it, so people are not pressured into doing evaluation and new, bright ideas are encouraged. According to Peter, we do not always have to do the evaluation with numbers. According to Mark, we need evaluation frameworks for hypertext that are different from those of psychology or human-computer interaction.
The moderator, Joan Cherry, cited a CHI 2008 paper by Greenberg and Buxton: 70% of usability papers were quantitative and 20% were qualitative, and Greenberg and Buxton argued that not all papers should require evaluation and that we should be open to all types of evaluation; even reflections could serve as evaluation. Joan says we should think about what kind of evaluation we want to do and what students want to do: students can push their teachers and supervisors to be more accommodating, reviewers need to be more open, there could be special conferences for wild ideas, and perhaps a track with some appropriate title would encourage people to step outside conventions.
Wednesday, June 16, 2010
Panel discussion on Past Visions of Hypertext
The second panel discussion is on Past Visions of Hypertext with Mark Bernstein, Cathy Marshall, J. Nathan Matias, and Frank Tompa, moderated by Darren Lunn. Cathy Marshall from Microsoft Research, Silicon Valley, is the first panelist, and she mentions the article "Is Google Making Us Stupid?". The first two generations of pre-Web hypertext were catalyzed by the need for links, so there is this problem of link anxiety disorder. In 1989-90, a system called Aquanet was created: a collaborative hypertext tool for creating, storing, editing, and browsing graphical knowledge structures. Aquanet replaces links with complex relations; are we headed in the right direction? By Hypertext 1994, there was hypertext without explicit links: VIKI was a spatial hypertext system from 1994. We still want linkiness, but we want it to be less formal, with implied relationships. In 1997, Cathy gave a keynote on the Smith System for defensive driving and applied it to hypertext. So, looking back to look forward: links make us stupid, link anxiety disorder has brought us to the world of App Islands, and Vannevar Bush's motivation for links was to address the Balkanization of the scientific literature, so we need to rethink links, not get rid of them.
The second panelist is Frank Tompa from the University of Waterloo, asking whether we should still be influenced by Vannevar Bush. Wikis are almost what Bush had in mind. Today's hypertext links are uni-directional, although Bush envisioned them as bi-directional; the WWW could expose reverse links via search engines. Very little of an article's content was indexed in early hypertext, and search is now more powerful than what Bush envisioned at the time. Another idea worth considering is making trails into first-class objects like links. Memex objects were originally meant to be links to data, but the notion is now broader on the web. According to Frank, further challenges to explore are that links can be typed, whereas most hypertext systems leave them untyped, so we should record the semantics of link labels (cf. Aquanet); and we should pursue Bush's vision of the Memex as both a library and a current-awareness medium. Echoing Jon Kleinberg's keynote address at SIGMOD 2010, we need to understand how information flows from person to person through the web (including the influences of time and space) and how competing messages attract attention.
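To make the "trails as first-class objects" idea concrete, here is a little sketch I put together (mine, not Frank's) of typed links and trails as data structures; the class and field names are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Link:
    source: str          # URI of the anchor document
    target: str          # URI of the destination document
    link_type: str = ""  # semantic label, e.g. "supports" (cf. Aquanet)

@dataclass
class Trail:
    # A Bush-style trail promoted to a first-class object: it has its
    # own identity and author, plus an ordered sequence of links.
    name: str
    author: str
    links: list = field(default_factory=list)

    def extend(self, link: Link) -> None:
        self.links.append(link)

    def documents(self) -> list:
        # All documents visited along the trail, in order.
        if not self.links:
            return []
        return [self.links[0].source] + [l.target for l in self.links]
```

Because a Trail here has its own identity, it can itself be bookmarked, shared, or linked to, which is exactly what promotion to a first-class object buys you.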
The third panelist is J. Nathan Matias. We have a long way to go on augmentation: there is a growing mountain of research, and we are being bogged down. The document systems we have are still not as reliable or easy to use as note cards.
The fourth panelist is Mark Bernstein. Vannevar Bush's 1945 article wasn't new; it was written in 1939. Engelbart and Nelson both acknowledge the influence of Bush. Also, Bush's vision of the Memex wasn't new: something like it had been built ten years earlier at IBM by Emanuel Goldberg. Engelbart was inventing the singularity. Why are we not citing H.G. Wells instead of Bush? Bush was a safe and respectable ancestor, and was useful in ways Goldberg, H.G. Wells, Leinster, and others were not. An audience member asked why hypertext relies so much on studying its own past compared to other disciplines. As hypertext researchers, is relying on our past hampering us from moving forward? The panelists say no; it actually helps us reflect. Another question was about what in Bush's vision is still not realized in hypertext today. Frank Tompa responds that it is still not easy to find other ideas related to one's own. According to Cathy Marshall, we still want to preserve stumbling (in human terms) and not have search engines be perfect.
All in all, a great panel discussion and nice to reflect on what we have done, and what more needs to be done.
Day 3: Last session - eLearning and navigation
Today is the last day of the Hypertext conference, with the last session on eLearning and navigation. The first paper is Design and Evaluation of a Hypervideo Environment to Support Veterinary Surgery Learning by Claudio A.B. Tiellet, André Grahl Pereira, Eliseo Berni Reategui, José Valdeni Lima, and Teresa Chambel. The authors' goal was to provide interactive access to a high volume of nonlinearly structured information to support knowledge construction for veterinarians learning to perform surgery. Hypervideo is the integration of video into hypermedia, structured through links, which the doctors can use for searching, indexing, real-time annotation, and learning at different phases and in different situations. The hypervideo environment they created is called HVet. The surgical index is structured text synchronized with the video. The students had classes using HVet for theory and then practiced on live animals. Students reported believing they could learn to perform surgery through hypervideo e-learning alone.
The second paper is The Value of Adaptive Link Annotation in E-Learning: A Study of a Portal-Based Approach by I-Han Hsiao, Peter Brusilovsky, Michael Yudelson, and Alvaro Ortigosa. They created QuizGuide, topic-based adaptive navigation for quizzes. In the adaptive portal, icons and their colours tell the students whether it is a good time to start on a topic; the non-adaptive portal lacks these annotations. They did not use collaborative tools or tagging in this work, but they did find that adaptive annotations (like how many students have used this part of the course) helped weak students more than strong students.
The third paper is Agents, Bookmarks and Clicks: A Topical Model of Web Navigation by Mark Meiss, Bruno Goncalves, Jose Ramasco, Alessandro Flammini, and Filippo Menczer. Their premise is that PageRank is not a good enough model of web navigation, so they want to create a better one. They created the BookRank algorithm and the ABC model. The ABC model adds an energy budget to the navigating agent, and their results show that ABC recovers the entropy observed in empirical navigation data. The empirical data came from a study of 1000 students; the model's bookmark lists and initial energy are obtained from that data.
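From the description above, the model combines link-following agents, rank-ordered bookmark lists, and an energy budget. Here is a toy sketch of such a walker; the parameter names and the energy bookkeeping are my illustrative assumptions, not the paper's actual equations.

```python
import random
from collections import Counter

def abc_walk(graph, steps=10000, gain=1.0, cost=1.0, beta=1.0):
    # graph: dict mapping each page to a list of its out-link pages.
    visits = Counter()
    page = random.choice(list(graph))
    seen = {page}
    energy = gain
    for _ in range(steps):
        visits[page] += 1
        if energy > 0 and graph.get(page):
            # Click: follow a random out-link; novel pages replenish energy.
            page = random.choice(graph[page])
            energy += (gain if page not in seen else 0.0) - cost
            seen.add(page)
        else:
            # Session over: jump to a bookmark, preferring frequently
            # revisited pages (rank-based, Zipf-like with exponent beta).
            ranked = [p for p, _ in visits.most_common()]
            weights = [(i + 1) ** -beta for i in range(len(ranked))]
            page = random.choices(ranked, weights=weights)[0]
            energy = gain
    return visits
```

For example, `abc_walk({"a": ["b"], "b": ["a", "c"], "c": []})` returns visit counts whose distribution could then be compared against empirical click data, which is the spirit of the paper's evaluation.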
Tuesday, June 15, 2010
Third session at Day 2 Hypertext: Tagging
The first paper in this session is Of Categorizers and Describers: An Evaluation of Quantitative Measures for Tagging Motivation, presented by Christian Körner. This talk looks into the motivation behind tagging. A categorizer reuses his or her own tags often, so the tags contain few synonyms, whereas a describer's tags contain many synonyms. To approximate tagging motivation, they measure the tag/resource ratio (how many tags a user uses), the orphaned tag ratio, the overlap factor, and the tag/title intersection ratio (how likely a user is to choose words from the title as tags). They also look into the properties of these measures. In their quantitative evaluation, users who prefer folksonomy-based recommendation (describers) are best identified by a high tag/title intersection ratio, and users who prefer personomy-based recommendation (categorizers) are best identified by a low tag/resource ratio.
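To make the measures concrete, here is a rough sketch of how three of them might be computed (my reading of the talk; the paper's exact definitions may differ, and the overlap factor is omitted):

```python
from collections import Counter

def tagging_measures(posts, titles):
    # posts: list of (resource_id, set_of_tags); titles: resource_id -> title.
    tag_counts = Counter(t for _, tags in posts for t in tags)
    vocab = set(tag_counts)
    resources = {r for r, _ in posts}

    # Tag/resource ratio: distinct tags per bookmarked resource.
    tag_resource_ratio = len(vocab) / len(resources)

    # Orphaned tag ratio: share of tags used only once.
    orphaned_ratio = sum(1 for c in tag_counts.values() if c == 1) / len(vocab)

    # Tag/title intersection ratio: how often tags are drawn from titles.
    overlaps = total = 0
    for r, tags in posts:
        title_words = {w.lower() for w in titles.get(r, "").split()}
        overlaps += len({t.lower() for t in tags} & title_words)
        total += len(tags)
    tag_title_ratio = overlaps / total if total else 0.0

    return tag_resource_ratio, orphaned_ratio, tag_title_ratio
```

On this reading, a describer would score high on the orphaned and tag/title ratios, while a categorizer's small reused vocabulary would keep the tag/resource ratio low.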
The second paper is Of Kings, Traffic Signs and Flowers: Exploring Navigation of Tagged Documents by Jacek Gwizdka. Can we improve navigation of tagged documents, on CiteULike for example? One way is to provide history that supports continuity in tag-space navigation using tag trails (e.g. visualized with tag clouds). A user interface can be built with a heat map of history and of the relationships between the sets of documents that were visited. Jacek uses the concept of a kingdom to describe the hierarchical relationships between tags. In the experiment, subjects experienced switching, and users said they want a simpler user interface for continuity in tag navigation.
The third paper is The Impact of Resource Title on Tags in Collaborative Tagging Systems by Marek Lipczak and Evangelos Milios. The authors wanted to figure out whether title words are important for finding a profile tag, and they found from Delicious and CiteULike data that they are. They also looked into synonymous tags and were interested in the frequencies of one form versus another. There is a relationship between titles and tags, but it introduces redundancy and variability.
The fourth and last paper is Conversation Tagging in Twitter by Jeff Huang. In Jeff's talk, they reviewed trends for hundreds of newly coined popular tags from Twitter and Delicious and did a statistical time-series analysis of them. They claim this is the first large-scale dataset of Twitter hashtags, which is good for characterizing tag trends. They found that many tags in Twitter are conversational, while tags in Delicious are purely organizational.
Session on Networked Communities
The first paper in this session on Networked Communities is Modularity in Heterogeneous Networks by Tsuyoshi Murata. This matters because Newman-Girvan modularity is usually defined only for homogeneous networks, with extensions to bipartite graphs. The authors propose a tripartite modularity and an optimization method for their measure; for context, the standard measure it generalizes is sketched below. Their future work is to apply and validate the measure on a real-life dataset, such as Delicious.
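This is the standard Newman-Girvan modularity for a single-mode network (the formula is textbook material, not from this paper; the tripartite variant is the paper's own contribution):

```latex
Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)
```

where $A$ is the adjacency matrix, $k_i$ the degree of vertex $i$, $m$ the number of edges, and $\delta(c_i, c_j) = 1$ when vertices $i$ and $j$ share a community. A tripartite version presumably has to redefine the null-model term $k_i k_j / 2m$ for three vertex types, such as users, tags, and resources in a tagging system.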
The second paper is Link Prediction Applied to an Open Large-Scale Online Social Network by Dan Corlette and Frank Shipman. Can we build a model of a large online social network? They view the network as a graph and work from link topology; for them, an online social network is user-generated content plus lists of friends, and their model is used for prediction. They use LiveJournal as their dataset and train a naive Bayes classifier on it. Their main result is that predicting new friendships gets harder the longer users have been members of the network. They are currently building up the model; future work involves using user interests and centrality measures to improve link prediction.
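As a rough illustration of naive Bayes link prediction on graph topology, here is a minimal sketch; the features are common illustrative choices, not necessarily the ones the authors used.

```python
from sklearn.naive_bayes import GaussianNB

def pair_features(G, u, v):
    # Simple topological features for a candidate edge (u, v).
    # G maps each node to its neighbours (e.g. a networkx graph or dict).
    nu, nv = set(G[u]), set(G[v])
    common = len(nu & nv)
    union = len(nu | nv)
    return [common,                            # common neighbours
            common / union if union else 0.0,  # Jaccard coefficient
            len(nu) * len(nv)]                 # preferential attachment

def train_link_predictor(G, positive_pairs, negative_pairs):
    # positive_pairs: node pairs that became friends; negative_pairs:
    # pairs that did not. Returns a classifier scoring candidate links.
    X = [pair_features(G, u, v) for u, v in positive_pairs + negative_pairs]
    y = [1] * len(positive_pairs) + [0] * len(negative_pairs)
    return GaussianNB().fit(X, y)
```

The paper's finding that prediction gets harder over time would show up here as declining classifier accuracy when trained and tested on snapshots from later in users' memberships.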
The third paper is Community-Based Ranking of the Social Web by Said Kashoob, James Caverlee, and Krishna Kamath. Their research questions are whether user-based communities manifest themselves in social bookmarking systems and how to model them. Their hypothesis is community-based tagging, and the problem is how to uncover the underlying communities given only the observed tags and users. They create their own model, the CTAG model, which emphasizes the role of the user, and use Gibbs sampling for inference. For each community, they obtain a distribution over users and a distribution over tags, and they test the CTAG model using likelihood on experimental data.
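As a reader's sketch only (the talk did not give the equations, so this is my guess at the generative story, with my own symbols), each bookmarking post $p$ could be assigned a latent community, with the user and tags then drawn from community-specific distributions:

```latex
z_p \sim \operatorname{Mult}(\pi), \qquad
u_p \mid z_p = c \sim \operatorname{Mult}(\theta_c), \qquad
t_{p,n} \mid z_p = c \sim \operatorname{Mult}(\phi_c)
```

Gibbs sampling would then recover $\theta_c$ (the per-community distribution over users) and $\phi_c$ (the per-community distribution over tags), matching the outputs described above.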
The last paper is Social Networks and Interest Similarity: The Case of CiteULike by Danielle H. Lee and Peter Brusilovsky. They use unilateral relationships as the edges of the social network, like "following" on Twitter or "watching" on CiteULike; these are one-way relationships that do not require mutual agreement. They used information similarity to measure interest similarity and, applying it to CiteULike, found that users undershared items.
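A minimal sketch of one common item-overlap measure (Jaccard) that could serve as this kind of interest-similarity signal; the paper's actual measures may differ.

```python
def interest_similarity(items_a, items_b):
    # Jaccard overlap of two users' bookmarked item sets: a simple proxy
    # for the kind of interest similarity discussed in the talk.
    a, b = set(items_a), set(items_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Undersharing would depress any such overlap-based measure, since items a user reads but never bookmarks can never contribute to the intersection.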
Day 2 of Hypertext: Algorithms and Methods session
Today is Day 2 of Hypertext, and bright and early in the morning the first session is Algorithms and Methods. The first talk is Assisting Two-Way Mapping Generation in Hypermedia Workspace by Haowei Hsieh, Katherine Pauls, Amber Jansen, Gautam Nimmagadda, and Frank Shipman. The authors created a Mapping Designer that helps generate two-way mappings between a database and a spatial hypertext system, and it can also serve as a good instructional tool.
The second talk is Analysis of Graphs for Digital Preservation Suitability by Charles Cartledge and Michael Nelson. The problem: will the pictures we collect still exist in the future? Can web objects be constructed to act in an autonomous manner, forming a network of web objects that live on the web and can be expected to outlive the people and institutions that created them? They use graph analysis to figure out how the network can reconstruct itself when attacked, so that it is self-sustaining. Their solution is an Unsupervised Small-World (USW) graph.
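The core question of how well a graph shape tolerates damage can be probed with a simple node-removal experiment. Below is a toy sketch (mine, not the authors'); their USW construction and attack models are more elaborate.

```python
import random
import networkx as nx

def surviving_fraction(G, fraction=0.2, targeted=True):
    # Remove a fraction of nodes (highest-degree first if targeted,
    # mimicking an attack; random order otherwise) and report the size
    # of the largest surviving connected component relative to G.
    # Assumes an undirected graph.
    H = G.copy()
    if targeted:
        nodes = sorted(H, key=H.degree, reverse=True)
    else:
        nodes = list(H)
        random.shuffle(nodes)
    H.remove_nodes_from(nodes[:int(fraction * len(nodes))])
    components = list(nx.connected_components(H))
    if not components:
        return 0.0
    return len(max(components, key=len)) / G.number_of_nodes()

# Example: compare a small-world graph under random vs. targeted damage.
G = nx.watts_strogatz_graph(500, 6, 0.1)
print(surviving_fraction(G, targeted=False), surviving_fraction(G, targeted=True))
```

Running comparisons like this across graph shapes is one way to see why a small-world structure is an attractive candidate for self-sustaining preservation networks.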
The third and last talk in this session is iMapping – A Zooming User Interface Approach for Personal and Semantic Knowledge Management by Heiko Haller and Andreas Abecker. This paper won the Ted Nelson Best Newcomer award. Heiko is using the iMapping tool itself to give his presentation, which is really cool. The problem is knowledge management: gathering and processing knowledge from our brains and from computers. Visual representations help support cognition. Mind maps are one way of doing this but are overrated; spatial hypertext can be used but has no explicit links. With iMaps, you avoid the tangle you get in a graph, and you can express hierarchy by nesting. The iMapping tool looks really nice for zooming in and out, for note-taking, and for personal knowledge management, and it is actually really nice for giving presentations.
Posters and demos reception
There was a posters and demos reception last night at the Ted Rogers School of Management, Ryerson University. A great time was had by all, with great posters and plenty of intellectual and personal discussion. We walked down from the conference along Bay Street, and I personally used my Nokia Ovi Maps to find out where I was. It was a great day for walking, and I'm sure many of the attendees shared that feeling.
Toronto is a great multicultural city. There was finger food, appetizers, and a bar for attendees to talk and socialize. Afterwards, attendees did their own thing; some went for dinner to continue eating and discussing.