The Hypertext panel is discussing the evaluation of hypertext. The panelists are Elaine Toms, Steve Szigeti, Mark Chignell, and Peter Brusilovsky, moderated by Joan Cherry. The first panelist is Elaine Toms, who provocatively argues that the rest of the panelists do not know how to do evaluation and that evaluation is not needed. The second panelist is Steve Szigeti, who says that evaluation is necessary: removing the user from the evaluation process is problematic, and hypertext research has to consider the user. Qualitative research takes place in a natural setting, focuses on the user's perceptions, and recognizes the bias that all researchers bring into the research. Qualitative research is messy, and evaluating hypertext has to take this uncertainty into account, which the qualitative approach does. The third panelist is Mark Chignell, who says we have to be careful about evaluation of hypertext. According to Mark, the proof is in the pudding: without a rigorous procedure, an evaluation will not be well used. Qualitative tools are too blunt for the job. The last panelist is Peter Brusilovsky. Peter asks why we need to evaluate at all, which was his earlier impression back when he was in Russia; after attending a CHI conference, his position changed. We shouldn't ignore evaluation, but it depends. With so many conferences and workshops, once you start publishing in archival media and journals, you have to show the value of your work. Evaluation should not be done for its own sake; you have to think and be creative about what exactly your argument is, and depending on this, the evaluation will be different.
Now the panel is debating each other. According to Mark, numbers are misused; Peter counters that if good numbers are used, then the evaluation can be good. Abby Goodrum from the audience asked a question about evaluation in papers, saying that we should have conference venues for broader thinking, a "nibble and spit" for new and bright ideas. Mark proposed a grand jury of professionals to judge papers instead of just grad students. Markus Strohmaier asked about rigor vs. relevance in peer review of papers. Alvin Chin asked about the possibility of creating a conference without proceedings, which would encourage people to submit and present their work without publishing it, so people are not so pressured to do evaluation, and new and bright ideas are encouraged. According to Peter, we do not have to do the evaluation of the numbers. From Mark, we need frameworks for hypertext that are different from those of psychology or human-computer interaction.
From the moderator, Joan Cherry: there was a paper at CHI 2008 from Greenberg and Buxton noting that 70% of usability papers were quantitative and 20% qualitative. Greenberg and Buxton argued that not all papers should require evaluation and that we should be open to all types of evaluation; even reflections could serve as evaluation. Joan says that we should think about what kind of evaluation we want to do and what students want to do; students can push their teachers and supervisors to be more accommodating, reviewers need to be more open, there could be special conferences for wild ideas, and perhaps a track with some appropriate title that would encourage people to step outside conventions.
Wednesday, June 16, 2010
Panel discussion on Past Visions of Hypertext
The second panel discussion is on Past Visions of Hypertext with Mark Bernstein, Cathy Marshall, J. Nathan Matias and Frank Tompa, moderated by Darren Lunn. Cathy Marshall from Microsoft Research, Silicon Valley is the first panelist, and she mentions the article "Is Google Making Us Stupid?". The first two generations of pre-Web hypertext were catalyzed by the need for links, so there is this problem of link anxiety disorder. In 1989-90, a system called Aquanet was created, a collaborative hypertext tool for creating, storing, editing and browsing graphical knowledge structures. Aquanet replaces links with complex relations; are we headed in the right direction? By Hypertext 1994, there was hypertext without explicit links. VIKI was a spatial hypertext system in 1994: we still want linkiness but want it to be less formal, with implied relationships. In 1997, Cathy gave a keynote on the Smith System for defensive driving and applied it to hypertext. So, looking back to look forward: links make us stupid, link anxiety disorder has brought us to the world of App Islands, Vannevar Bush's motivation for links was to address the Balkanization of the scientific literature, and we need to rethink links, not get rid of them.
The second panelist is Frank Tompa from the University of Waterloo, talking about whether we should still be influenced by Vannevar Bush. Wikis are almost what Vannevar Bush had in mind. Today's hypertext links are uni-directional, although Bush had them as bi-directional; the WWW could expose reverse links via search engines. Very little of the content of an article was indexed in early hypertext; search is now more powerful than what Bush had envisioned at the time. Another thing that might be worth considering is making trails into first-class objects like links. Memex objects were originally meant to be links to data, but the notion is now broader on the web. According to Frank, further challenges to explore are that links can be typed, whereas most hypertext systems are untyped, so we should record the semantics of link labels (cf. Aquanet); and to pursue Bush's vision that the Memex is both a library and a current awareness medium. From Jon Kleinberg's keynote address at SIGMOD 2010: we need to understand how information flows from person to person through the web (including the influences of time and space) and how competing messages attract attention.
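Frank's point that the WWW could expose reverse links via search engines can be made concrete with a toy sketch: invert a forward-link map to recover who links to each page, the way a search engine's backlink index does. The page names here are invented for illustration.

```python
# Forward links as authored on the (uni-directional) web.
forward_links = {
    "memex.html": ["trails.html", "library.html"],
    "trails.html": ["library.html"],
    "library.html": [],
}

def build_reverse_index(links):
    """Invert a forward-link map: target page -> pages that link to it."""
    reverse = {page: [] for page in links}
    for source, targets in links.items():
        for target in targets:
            reverse.setdefault(target, []).append(source)
    return reverse

backlinks = build_reverse_index(forward_links)
```

With such an index, a reader of library.html could see both pages that point to it, recovering something like Bush's bi-directional links.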
The third panelist is J. Nathan Matias. We have a long way to go for augmentation. There is a growing mountain of research, and we are getting bogged down. The document systems we have are still not as reliable or easy to use as note cards.
The fourth panelist is Mark Bernstein. Vannevar Bush's article in 1945 wasn't new; it was written in 1939. Engelbart and Nelson both acknowledge the influence of Bush. Also, Bush's vision of the Memex wasn't new: something like it was built at IBM 10 years earlier by Emanuel Goldberg. Engelbart was inventing with singularity. Why are we not citing H.G. Wells instead of Bush? Bush was a safe and respectable ancestor, and was useful in ways that Goldberg, H.G. Wells, Leinster and others were not. A question from the audience: why does hypertext rely on studying its past history so much, compared to other disciplines that don't do as much? As hypertext researchers, is relying on our past hampering us from moving forward? The panelists say no; it is actually helping us reflect. Another question is about what in Bush's vision is still not realized in hypertext today. Frank Tompa responds that it is still not easy to find other ideas related to the ideas you have. According to Cathy Marshall, we still want to have the stumbling (in human terms) and not have search engines be perfect.
All in all, a great panel discussion and nice to reflect on what we have done, and what more needs to be done.
Day 3: Last session - eLearning and navigation
Today is the last day of the Hypertext conference, with the last session on eLearning and navigation. The first paper is Design and Evaluation of a Hypervideo Environment to Support Veterinary Surgery Learning by Claudio A. B. Tiellet, André Grahl Pereira, Eliseo Berni Reategui, José Valdeni Lima, and Teresa Chambel. In this work, the authors' goal was to provide interactive access to a high volume of nonlinearly structured information for the construction of knowledge, so that veterinary students can learn to perform surgery. Hypervideo is the integration of video into hypermedia, structured through links, which the students can use for searching, indexing, real-time annotation and learning at different phases and in different situations. The hypervideo environment that they created is called HVet. The surgical index is structured text synchronized with the video. The students had classes using HVet for theory and then practiced with live animals. Students believe they would be able to perform surgery through hypervideo e-learning alone.
The second paper is The Value of Adaptive Link Annotation in E-Learning: A Study of a Portal-Based Approach by I-Han Hsiao, Peter Brusilovsky, Michael Yudelson, and Alvaro Ortigosa. They created QuizGuide, a topic-based adaptive navigation portal for quizzes. A non-adaptive portal does not have the icons; the colour of the icons tells students whether it is a good time to start on a topic. They did not use collaborative tools or tagging in this work, but did find that the adaptive pages (showing, for example, how many students have used this part of the course) helped weak students more than strong students.
The third paper is Agents, Bookmarks and Clicks: A topical model of Web navigation by Mark Meiss, Bruno Goncalves, Jose Ramasco, Alessandro Flammini, and Filippo Menczer. Their premise is that PageRank is not good enough for modeling web navigation, so they want to create a better model. They created the BookRank algorithm and the ABC model. The ABC model adds energy into the model, and their results show that ABC recovers the empirical entropy. They got the empirical data from a study of 1000 students; for their model, the bookmark list and initial energy are obtained from this empirical data.
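As I understood the talk, the core twist of BookRank over PageRank is that the random surfer's teleportation jumps go to a personal bookmark list instead of uniformly to all pages. A heavily simplified sketch of that idea (the graph, bookmark set and parameters are invented; the paper's actual algorithm, with its frequency-ranked bookmarks and energy dynamics, is richer):

```python
def bookrank(graph, bookmarks, damping=0.85, iters=50):
    """Power iteration where the (1 - damping) teleport mass lands
    only on the bookmarked pages, not uniformly on all pages."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # mass arriving via links from pages that point to n
            incoming = sum(rank[m] / len(graph[m]) for m in nodes if n in graph[m])
            teleport = (1 - damping) / len(bookmarks) if n in bookmarks else 0.0
            new[n] = damping * incoming + teleport
        rank = new
    return rank

graph = {"a": ["b"], "b": ["c"], "c": ["a"]}  # a tiny 3-page cycle
ranks = bookrank(graph, bookmarks={"a"})
```

On this toy cycle, the bookmarked page "a" (and the pages closest downstream of it) ends up with the highest rank, which is the intuition: navigation mass concentrates around personal entry points.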
Tuesday, June 15, 2010
Third session at Day 2 Hypertext: Tagging
The first paper in this session is Of Categorizers and Describers: An Evaluation of Quantitative Measures for Tagging Motivation by Christian Körner. This talk looks into the motivation for tagging. A categorizer reuses his/her own tags often, so the tags do not contain many synonyms, whereas a describer's tags contain lots of synonyms. For approximating tagging motivation, they measure the tag/resource ratio (how many tags a user uses), the orphaned tag ratio, the overlap factor, and the tag/title intersection ratio (how likely a user is to choose words from the title as tags). They also look into the properties of these measures. From their quantitative evaluation, users who prefer folksonomy-based recommendation (describers) can be best identified by a high tag/title intersection ratio, and users who prefer personomy-based recommendation (categorizers) can be best identified by a low tag/resource ratio.
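Two of the measures mentioned are easy to sketch on a made-up personomy (one user's tagging posts). The exact definitions in the paper may differ; this just illustrates the intuition.

```python
def tag_resource_ratio(posts):
    """Distinct tags used divided by resources tagged: tends to be low
    for categorizers (small reused vocabulary), high for describers."""
    tags = {t for _, title, ts in posts for t in ts}
    return len(tags) / len(posts)

def tag_title_intersection_ratio(posts):
    """Fraction of a post's tags that also appear in the resource
    title, averaged over posts: high for describers copying title words."""
    ratios = []
    for _, title, ts in posts:
        words = set(title.lower().split())
        ratios.append(sum(t in words for t in ts) / len(ts))
    return sum(ratios) / len(ratios)

posts = [  # (resource, title, tags) - invented example data
    ("r1", "Social Tagging Systems", ["tagging", "social"]),
    ("r2", "Tagging Motivation Study", ["tagging", "study"]),
]
```

On this toy user, every tag comes straight from the title (intersection ratio 1.0), which under these measures would flag a describer.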
The second paper is Of Kings, Traffic Signs and Flowers: Exploring Navigation of Tagged Documents by Jacek Gwizdka. Can we improve the navigation of tagged documents on a site like CiteULike, for example? One way is to provide history to support continuity in tag-space navigation using tag trails (e.g. visualized with tag clouds). A user interface can be created with a heat map of the history and the relationships between the set of documents that were visited. Jacek uses the concept of a kingdom to describe the hierarchical relationship between tags. In the experiment, subjects experienced switching. Users desire a simpler user interface for tagging navigation continuity.
The third paper is The impact of resource title on tags in collaborative tagging systems by Marek Lipczak and Evangelos Milios. The authors wanted to figure out whether title words are important for finding a profile tag, and they found from Delicious and CiteULike that title words are indeed important. They looked into synonymous tags and were interested in the frequencies of one form versus another. There is a relation between titles and tags, but it introduces redundancy and variability.
The fourth and last paper is Conversation Tagging in Twitter by Jeff Huang. In Jeff's talk, they reviewed trends for hundreds of newly coined popular tags from Twitter and Delicious and did a statistical analysis of the time series for these tags. They claim that this is the first large-scale dataset of Twitter hashtags, which is good for characterizing tag trends. They found that many tags in Twitter are conversational, while tags in Delicious are purely organizational.
Session on Networked Communities
The first paper in this session on Networked Communities is Modularity in Heterogeneous Networks by Tsuyoshi Murata. This is important because Newman-Girvan modularity is usually only defined for homogeneous networks and bipartite graphs. The authors propose a tripartite modularity and an optimization method for their measure. Their future work looks into how to apply and validate this on a real-life dataset such as Delicious.
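For context, the Newman-Girvan modularity that the paper generalizes compares the fraction of edges inside communities with the fraction expected at random. A minimal stdlib version on a toy unipartite graph (the tripartite extension in the paper is considerably more involved):

```python
def modularity(edges, community):
    """Newman-Girvan modularity of an undirected graph under a given
    node -> community assignment. edges is a set of (u, v) pairs."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for i in degree:
        for j in degree:
            linked = ((i, j) in edges) or ((j, i) in edges)
            # actual edge minus expected edge, counted only within communities
            q += (int(linked) - degree[i] * degree[j] / (2 * m)) * (community[i] == community[j])
    return q / (2 * m)

# two triangles joined by a single bridge edge: a natural 2-community split
edges = {("a", "b"), ("b", "c"), ("a", "c"),
         ("c", "d"),
         ("d", "e"), ("e", "f"), ("d", "f")}
split = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
q = modularity(edges, split)
```

The natural two-triangle split scores a positive modularity, while putting everyone in one community scores zero, which is the sanity check the measure is built around.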
The second paper is Link Prediction Applied to an Open Large-Scale Online Social Network by Dan Corlette and Frank Shipman. Can we build a model of large online social networks? They view the network as a graph using link topology; for them, an online social network is user-generated content plus lists of friends. Their model is used for prediction. They use LiveJournal as their dataset and a naive Bayes classifier for training. Their main result is that the difficulty of predicting new friendships grows the longer users have been members of the network. Currently they are building the model; their future work involves using user interests and centrality measures to improve link prediction.
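The shape of the approach, as described, is a naive Bayes classifier over graph-derived features. A minimal Bernoulli naive Bayes from the stdlib, on an invented single feature ("has a common friend"); the paper's actual feature set is of course much richer.

```python
from collections import defaultdict

def train(samples):
    """samples: list of (features dict of 0/1 values, label 0/1)."""
    counts = {0: 0, 1: 0}
    feat = {0: defaultdict(int), 1: defaultdict(int)}
    for x, y in samples:
        counts[y] += 1
        for f, v in x.items():
            feat[y][f] += v
    return counts, feat

def predict(model, x):
    counts, feat = model
    total = counts[0] + counts[1]
    best, best_p = None, -1.0
    for y in (0, 1):
        p = counts[y] / total  # class prior
        for f, v in x.items():
            pf = (feat[y][f] + 1) / (counts[y] + 2)  # Laplace-smoothed P(f=1|y)
            p *= pf if v else (1 - pf)
        if p > best_p:
            best, best_p = y, p
    return best

samples = [  # invented training pairs: feature vector -> friendship formed?
    ({"common_friend": 1}, 1),
    ({"common_friend": 1}, 1),
    ({"common_friend": 0}, 0),
    ({"common_friend": 0}, 0),
]
model = train(samples)
```

Having a common friend pushes the classifier towards predicting a new friendship, which is the kind of topological signal the authors exploit.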
The third paper is Community-Based Ranking of the Social Web by Said Kashoob, James Caverlee, and Krishna Kamath. Their research questions are: do user-based communities manifest themselves in social bookmarking systems, and how can we model them? Their hypothesis is community-based tagging, and the problem is how to uncover the underlying communities given only the observed tags and users. They create their own model, called the CTAG model, that emphasizes the user's role. They use a Gibbs sampler for inference; for each community, they get a distribution over users and a distribution over tags. They evaluate the likelihood of their CTAG model experimentally.
The last paper is Social Networks and Interest Similarity: The Case of CiteULike by Danielle H. Lee and Peter Brusilovsky. They use unilateral relationships as edges of the social network, like "following" on Twitter or "watching" on CiteULike: one-way relationships that do not require mutual agreement about being in the relationship. They used information similarity to measure interest similarity, and in their application to CiteULike, users under-shared items.
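One simple way to operationalize "interest similarity" between two users of a bookmarking service is the overlap of their item libraries. This is my illustration of the general idea, not necessarily the measure used in the paper; the item names are invented.

```python
def jaccard(a, b):
    """Jaccard similarity of two item sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# two CiteULike-style libraries of bookmarked papers
follower = {"paper1", "paper2", "paper3"}
followee = {"paper2", "paper3", "paper4"}
sim = jaccard(follower, followee)
```

A pair connected by a "watching" edge would then be compared against unconnected pairs to see whether the unilateral link predicts higher overlap.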
Day 2 of Hypertext: Algorithms and Methods session
Today is Day 2 of Hypertext, and bright and early in the morning the first session is Algorithms and Methods. The first talk is Assisting Two-Way Mapping Generation in Hypermedia Workspace by Haowei Hsieh, Katherine Pauls, Amber Jansen, Gautam Nimmagadda, and Frank Shipman. Here, the authors created a Mapping Designer to help with generating two-way mappings between a database and a spatial hypertext system, which can also become a good instructional tool.
The second talk is Analysis of Graphs for Digital Preservation Suitability by Charles Cartledge and Michael Nelson. The problem: will the pictures that we collect still exist in the future? In other words, can web objects be constructed to act in an autonomous manner, creating a network of web objects that live on the web and can be expected to outlive the people and institutions that created them? They use graphs to figure out how the network can reconstruct itself when attacked, so that it is self-sustaining. Their solution is creating a USW graph.
The third and last talk in this session is iMapping – A Zooming User Interface Approach for Personal and Semantic Knowledge Management by Heiko Haller and Andreas Abecker. This paper won the Ted Nelson Best Newcomer award. Heiko is using the iMapping tool itself to give his presentation, which is really cool. The problem is knowledge management: gathering and processing knowledge from our brains and from computers. Using visuals helps support cognition. Mind maps are one way of doing this but are overrated; spatial hypertext can be used, but it has no explicit links. With iMaps, you can avoid the tangle you get in a graph, and you can express hierarchy by nesting. The iMapping tool looks really nice for zooming in and out, for note-taking, and for personal knowledge management. It is actually really nice for doing presentations too.
Posters and demos reception
There was a posters and demos reception last night at the Ted Rogers School of Management, Ryerson University. A great time was had by all, with great posters and intellectual and personal discussions. We all walked down from the conference along Bay Street, and I personally used Nokia Ovi Maps to find out where I was. It was a great day, so nice for walking, and I'm sure many of the attendees shared that feeling.
Toronto is a great multicultural city. There were finger foods, appetizers and a bar for attendees to talk and socialize. Afterwards, attendees did their own thing; some went out to dinner for further eating and discussion.
Monday, June 14, 2010
Third session on Adaptation
The third session is on Adaptation, with the first paper on Providing Resilient XPaths for External Adaptation Engines. The author talks about using adaptation rules to specify which elements of the page the adaptation affects. They use XPath expressions to select the content that will be adapted.
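To make the mechanism concrete, here is a small sketch of an external adaptation engine selecting page elements with XPath and annotating them. Python's stdlib ElementTree supports a small XPath subset, enough to illustrate; the page content and the adaptation rule are made up.

```python
import xml.etree.ElementTree as ET

page = ET.fromstring(
    "<html><body>"
    "<div id='intro'>Welcome</div>"
    "<div id='advanced'>Details for experts</div>"
    "</body></html>"
)

# Adaptation rule: hide the 'advanced' fragment for novice users by
# selecting it with an XPath expression and marking it as hidden.
for elem in page.findall(".//div[@id='advanced']"):
    elem.set("class", "hidden")

adapted = ET.tostring(page, encoding="unicode")
```

The resilience problem the paper addresses is that such XPath expressions break when the page structure changes, so the expressions themselves need to be robust to edits.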
The second paper is on The Influence of Adaptation on Hypertext Structures and Navigation by Vinicius Ramos and Paul De Bra. They use the AHA! framework, an adaptive hypertext technique that uses annotation/hiding and link coloring. Bad links point to non-recommended concepts and are shown in black, whereas good links point to recommended concepts.
The third paper is on the next generation of authoring adaptive hypermedia: Using and Evaluating the MOT3.0 and PEAL Tools by Jonathan G. K. Foss and Alexandra I. Cristea. There is the concept of separation of concerns, where you want to separate adaptive content from static content for reusability and flexibility. Adaptive hypermedia looks into user modeling, how to use it, and presentation models. They have created MOT3.0, a framework that allows PowerPoint and wiki import into the adaptive hypermedia framework. They also have PEAL, which can be used for labelling code fragments. You can try out their MOT3.0 and PEAL systems.
Second session on Recommender Systems
The second session on Recommender Systems starts with the first paper on Automatic Construction of Travel Itineraries using Social Breadcrumbs by Munmun De Choudhury, Moran Feldman, Sihem Amer-Yahia, Nadav Golbandi, Ronny Lempel, and Cong Yu. They create travel itineraries by taking photo streams from the Flickr dataset that map photos to a city, then extracting candidate POIs using the Yahoo! Maps API, then segmenting the photo streams into timed paths to form the travel itineraries, building a POI graph from this. They compare their automated travel itineraries with real itineraries using Amazon Mechanical Turk; the extensive survey-based user studies gave promising results against bus tour companies' itineraries.
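The segmentation step can be sketched simply: split one user's timestamped, geo-mapped photo stream into per-visit "timed paths" whenever the gap between consecutive photos exceeds a threshold. The POIs, timestamps and threshold below are invented; the paper's pipeline is more elaborate.

```python
def segment_stream(photos, max_gap_hours=8):
    """photos: list of (timestamp_in_hours, poi) pairs, sorted by time.
    Returns a list of timed paths, split at large time gaps."""
    paths, current = [], [photos[0]]
    for prev, cur in zip(photos, photos[1:]):
        if cur[0] - prev[0] > max_gap_hours:
            paths.append(current)  # close off the current visit
            current = []
        current.append(cur)
    paths.append(current)
    return paths

# a made-up Toronto photo stream spanning two days
stream = [(0, "CN Tower"), (2, "ROM"), (30, "Casa Loma"), (33, "Harbourfront")]
paths = segment_stream(stream)
```

Each resulting path is one candidate day-itinerary, and aggregating the transitions across many users yields the POI graph.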
The second paper is on Speak the Same Language with Your Friends: Augmenting Tag Recommenders with Social Relations by Kaipeng Liu and Binxing Fang. Their proposed graph adds edges between users and resources, in addition to users linked to tags and tags linked to resources. They use random-walk-based similarity measures. They compare their Personalized-CF with User-CF and show that the proposed Personalized-CF algorithm with MFA as the similarity measure performs best.
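As a toy illustration of random-walk-based similarity on such a graph: treat two users as similar when one-step walks from each tend to land on the same tag, i.e. take the dot product of their normalized tag distributions. This is a much-simplified stand-in for the paper's measures (I don't know the details of their MFA measure), and the data is invented.

```python
def tag_distribution(user_tags):
    """Normalize a user's tag counts into a probability distribution."""
    total = sum(user_tags.values())
    return {t: c / total for t, c in user_tags.items()}

def walk_similarity(u, v):
    """Probability that one-step walks from two users land on the same
    tag: the dot product of their tag distributions."""
    du, dv = tag_distribution(u), tag_distribution(v)
    return sum(p * dv.get(t, 0.0) for t, p in du.items())

alice = {"python": 2, "web": 2}   # tag -> how often the user applied it
bob = {"python": 3, "ml": 1}
carol = {"knitting": 4}
```

Alice and Bob meet on "python" and so come out similar, while Alice and Carol share no tags and score zero; longer walks would also pick up indirect overlap through resources and friends.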
The third paper is Connecting Users and Items with Weighted Tags for Personalized Item Recommendations by Huizhi Liang, Yue Xu, Yuefeng Li, and Richi Nayak. The problem is tag quality: there is semantic ambiguity, tags can be personal, and there are synonymous tags that mean the same thing. Their proposed approach is to use the multiple relationships in tagging. Usually only two-dimensional relationships are used (user-item, user-tag, and item-tag), but in actuality there is a three-dimensional user-tag-item relationship: the (user x tag)-item mapping and the item-(user x tag) mapping. But what about the user-(item x tag) mapping?
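The basic idea of connecting users and items through weighted tags can be sketched as scoring an item for a user by matching the user's weighted tag profile against the item's weighted tag profile. The profiles, weights and item names below are invented for illustration; the paper's weighting scheme is more sophisticated.

```python
def score(user_profile, item_profile):
    """Dot product of user tag weights and item tag weights."""
    return sum(w * item_profile.get(t, 0.0) for t, w in user_profile.items())

user = {"jazz": 0.7, "vinyl": 0.3}          # user's weighted tag profile
items = {
    "album_a": {"jazz": 1.0},               # items' weighted tag profiles
    "album_b": {"rock": 1.0, "vinyl": 0.5},
}
ranked = sorted(items, key=lambda i: score(user, items[i]), reverse=True)
```

Weighting the tags is what lets the recommender soften the tag-quality problems above: a rarely used personal tag can carry less weight than a tag the user applies consistently.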
First session: Information Searching
The first paper from the conference program is on Is A Good Title? This paper looks into whether there is a way to name a URI to give a better title that will help in the retrieval of search results. Many times, when you enter a search term, the URIs returned are not the relevant results. The author talks about title evolution, giving the example of Sun.com: how much do titles change over time? They analyzed their title performance prediction but didn't find any new evidence beyond the observation that a page whose title has more than 24 terms is most likely spam.
The second paper is on Parallel Browsing on the Web by Jeff Huang and Ryen White. Browsers support tabbed browsing, and the authors study parallel browsing using data gathered from Internet Explorer 8. This was joint work with Microsoft Research. From their results, they found that users don't visit more pages when using more tabs.
The third paper is on A semiotic approach for the generation of themed photo narratives by Charlie Hargood, David Millard, and Mark Weal. According to Charlie, narratives surround us and we need to create systems that accommodate them. They take a semiotic approach, semiotics being the study of signs and how we understand them. The Themed Montage Builder (TMB) is a prototype that uses the definitions in their thematic model to generate themed montages, e.g. from Flickr images. They ran an experiment to evaluate the TMB's effectiveness and found that it shows a slight improvement. It would be interesting to see this approach used to create narratives from travel blogs.
The last paper in this session is The Impact of Bookmarks and Annotations on Refinding Information by Ricardo Kawase, George Papadakis, Eelco Herder, and Wolfgang Nejdl. They created SpreadCrumbs, a social annotation system that supports hypertrails (paths of links). They then ran experiments comparing SpreadCrumbs to Delicious for bookmarks and annotations. The results showed that the group using SpreadCrumbs annotations found the information faster and outperformed search.
First day of Hypertext conference - Andrew Dillon keynote
Today is the first day of the Hypertext conference. After a great day of workshops and last night's reception with an open bar (unheard of at conferences, according to Chair Mark Chignell), the Hypertext conference officially starts today. Andrew Dillon is the keynote speaker, talking about As We May Have Thought. Andrew talks about how the trend in hypertext research is declining while the web and web science are rising. Is there a need for a new discipline? Web science attempts to take an interdisciplinary approach, just as HCI does, but should we do that?
Information and computer science started and keeps going; hypertext, hypermedia, whatever your area of focus, each is one more point on the timeline of sharing information. Andrew notes that according to Steve Jobs of Apple, "nobody reads anymore". Civilization is shifting as information explodes, and computing is moving from calculation to augmentation. We are at a moment of profound change in the ecology of information. Are we going backward?
According to Jakob Nielsen (the usability expert), we haven't changed that much in terms of usability. Nielsen has usability guidelines for web sites, even though his own web site is not that usable. In fact, Nielsen came out of hypertext into HCI. The best way to understand the world now is that technology use is the natural human condition. There was a 500-year period after Gutenberg during which oral culture shifted to written culture. Is the web a return to the pre-Gutenberg era?
According to Andrew, the human is the key, there is too much emphasis on search, location and retrieval and too little emphasis on longitudinal outcomes.
Andrew ended with the quote from Vannevar Bush: "A record if it is to be useful... must be continuously extended, it must be stored, and above all it must be consulted." (Bush, 1945)
Friday, June 11, 2010
Only 2 more days until Hypertext conference!
Well, the date is fast approaching, Hypertext conference is upon us in 2 days! Workshops will start on Sunday and there are 3 great workshops to attend.
Also, if you haven't registered or are wondering whether to register, think no more.
Special registration rates for Ontario residents: US$195 for Ontario students and US$375 for all others residing in Ontario. Visit the registration page to register. Registration includes the conference, coffee breaks, two receptions, and a banquet.
Hypertext is the venue to connect and link with researchers working in hypertext, hypermedia, web and social networking.
While you're in Toronto before and after the conference, come visit our beautiful city with wonderful attractions. You can also buy a Toronto City pass that will save you on many attractions.
Happy traveling to Toronto and see you at Hypertext!