Tuesday, March 30, 2010

Context-based page unit recommendation for web-based sensemaking tasks

Authors:
Wen-Huang Cheng National Taiwan University, Taipei, Taiwan, ROC
David Gotz IBM T.J. Watson Research Center, Hawthorne, NY, USA

Paper (pdf):
http://portal.acm.org/citation.cfm?id=1502650.1502668&coll=ACM&dl=ACM&type=series&idx=SERIES823&part=series&WantType=Proceedings&title=IUI&CFID=81639924&CFTOKEN=12013848

Summary:
This paper presents the authors' program InsightFinder, a web tool that aids in connection discovery during sensemaking tasks, and provides details on the algorithm, interface, and user study behind it.

The program they developed is called the InsightFinder and is described as 'a smart web-browsing system that assists in connection discovery during sensemaking tasks by providing context-based page unit recommendations.' The sensemaking they refer to is the process a user goes through after selecting a website, such as from a browser query: scanning through the site to find the information they were looking for and then connecting that information to relevant data from other websites. One of the examples they gave was someone relocating to a new city. Such a person might research an apartment complex within their price range, its location relative to a day care for their children, and its proximity to their place of work. Rather than having to search all of these out individually and accumulate a series of notes illustrating the different possible solutions, the InsightFinder would do this for you, linking these different websites together.

Below are the properties that a tool of this nature should have:
  • Site Independence: A sensemaking tool must be independent of any specific site or content provider to allow cross-site connection discovery.
  • Note-Taking Functionality: A sensemaking tool should allow for the collection of information fragments into a task-specific workspace to help users organize their findings across multiple sessions and sites.
  • Assistance in Connection Discovery: Most critically, a sensemaking tool should assist the user in performing the difficult process of uncovering connections between their notes and what is currently being explored in their browser.
The authors went into detail describing the algorithm behind the program, and I've included an excerpt that briefly describes this.
The insight loop is triggered directly through the InsightFinder interface, which provides tools for users to record or organize their notes. As the user's notes evolve, the InsightFinder maintains a context model which represents the user's captured data. The exploration loop occurs while users interact with the normal browser interface. As users navigate the web, the InsightFinder performs a series of steps each time a new page is visited. At the conclusion of both loops, the InsightFinder provides a ranked list of recommended web page fragments that are most relevant to the content in the user's notes. To provide this functionality, the architecture includes modules for interface management, content extraction, context model management, page segmentation, and relevance computation.
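
To make the relevance-computation step a little more concrete, here is a rough Python sketch of the general idea: treat the user's notes as a term-frequency context model and rank a page's fragments by cosine similarity against it. This is only my illustration of the kind of computation the excerpt describes, not the authors' actual algorithm; the tokenizer, the example notes, and the fragments are all made up.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Naive tokenizer; the real system's content extraction is far richer.
    return re.findall(r"[a-z']+", text.lower())

def build_context_model(notes):
    # Term-frequency model over everything the user has clipped into their notes.
    model = Counter()
    for note in notes:
        model.update(tokenize(note))
    return model

def cosine_similarity(a, b):
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_page_fragments(page_fragments, notes, top_k=5):
    """Return the page fragments most relevant to the user's note context."""
    context = build_context_model(notes)
    scored = [(cosine_similarity(Counter(tokenize(frag)), context), frag)
              for frag in page_fragments]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]

# Hypothetical usage: relocation notes, fragments segmented from an apartment listing page.
notes = ["two bedroom apartment under $1200", "day care near work", "short commute downtown"]
fragments = ["Spacious 2BR apartment, $1150/month", "Contact our leasing office", "On-site day care available"]
print(rank_page_fragments(fragments, notes, top_k=2))
```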

Below is a screen shot of InsightFinder, followed by another illustrating the ability to take notes.

Figure 2. A screenshot of the InsightFinder system.

Figure 3. Users can record notes by dragging content fragments (links, images, text, or entire pages) from the browser to folders in the InsightFinder.

The last part of the paper went into detail describing the user study they performed and its results. Their program ran as a sidebar in Mozilla Firefox, using XUL for the interface and Java/JavaScript for the computational components. Overall, the program worked as it was supposed to: it improved sensemaking tasks by reducing the amount of time required to perform them, on average a reduction of 30 seconds. As possible future work they mentioned extending the granularity of their node weighting as well as improving the note-taking capabilities.

Discussion:
The InsightFinder that the authors developed was a genuinely novel program. I hadn't thought much about tools that aid in finding connections between websites based on notes and so forth; I have only ever used a search engine and trial and error to find more specifically what I was looking for. I think a program like this would be a good extension for web browsers. It may not be used on a day-to-day basis, but when doing things such as research I can see how this would come in handy, scanning an accumulation of notes to suggest or recommend websites to view.

Thursday, March 11, 2010

Is the sky pure today? AwkChecker: an assistive tool for detecting and correcting collocation errors

Authors:
Taehyun Park University of Waterloo, Waterloo, ON, Canada
Edward Lank University of Waterloo, Waterloo, ON, Canada
Pascal Poupart University of Waterloo, Waterloo, ON, Canada
Michael Terry University of Waterloo, Waterloo, ON, Canada

Paper (Mov and Pdf):
http://portal.acm.org/citation.cfm?id=1449736&coll=ACM&dl=ACM&CFID=81067528&CFTOKEN=37358406&ret=1#Fulltext

Summary:
The purpose of this paper was to describe the AwkChecker program created by the authors, which detects improper word phrases so that they can be replaced with more commonly used ones. In the paper these phrases are described as collocation preferences, which include things such as commonly used expressions, idioms, and word pairings. The goal of the program was to aid non-native speakers, who are the most likely to encounter problems with these collocation preferences. AwkChecker is implemented as a web-based text editor that flags collocation errors and suggests replacement phrases.

The paper also went into detail describing the language problems that non-native speakers (NNS) encounter as opposed to what native speakers generally encounter. One of the points they make is that the majority of English speakers, roughly 70%, are NNS, and as such there is great demand for language tools to assist NNS. The authors based their program on a set of guidelines for NNS language tool design from Knutsson et al. Below are the guidelines as described in this paper:
  • Real-time feedback is always desirable, especially since it helps one improve one's understanding of the language as it is produced
  • Tools should not only indicate what is wrong, but also provide sufficient information (e.g., examples, grammar rules, etc.) so that users can reason about the error and its solution
  • The tool should be transparent with respect to its capabilities and limitations; users should understand what it can and cannot do
  • The tool should not be too technical with its terminology and should avoid linguistic terms
  • Users should be able to focus on producing content, not on low-level details such as spelling, grammar, etc. That is, the tool should not distract from their primary goal of communication
The paper goes on to describe L2 error detection tools and L2 tutoring systems, citing much recent work in those fields. The last portion of the paper went into detail describing the functions and algorithms involved in the AwkChecker program.
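
As a rough illustration of how a collocation checker of this kind can work (not AwkChecker's actual algorithm), here is a small Python sketch that flags a word pairing as awkward when its frequency in a reference corpus is low and suggests more common alternatives. The bigram counts, threshold, and association measure are all my own assumptions.

```python
from collections import Counter

# Toy corpus counts standing in for the large corpus statistics a real
# collocation checker would use; the phrases and counts here are made up.
BIGRAM_COUNTS = Counter({
    ("clear", "sky"): 950, ("pure", "sky"): 3,
    ("strong", "coffee"): 800, ("powerful", "coffee"): 5,
})
UNIGRAM_COUNTS = Counter({"clear": 5000, "pure": 4000, "strong": 6000,
                          "powerful": 4500, "sky": 7000, "coffee": 6500})

def collocation_score(w1, w2):
    # Simple conditional probability P(w1 | w2); real systems use smarter measures.
    return BIGRAM_COUNTS[(w1, w2)] / UNIGRAM_COUNTS[w2] if UNIGRAM_COUNTS[w2] else 0.0

def suggest_alternatives(w1, w2, threshold=0.01):
    """Flag an awkward word pairing and suggest more conventional modifiers for w2."""
    if collocation_score(w1, w2) >= threshold:
        return None  # pairing looks conventional enough
    candidates = [(a, collocation_score(a, b)) for (a, b) in BIGRAM_COUNTS if b == w2]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [a for a, score in candidates if score >= threshold]

print(suggest_alternatives("pure", "sky"))  # -> ['clear'], i.e. "clear sky" is the usual phrase
```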

Discussion:
The AwkChecker program seems like a great step forward in linguistic tools. As a native English speaker I generally don't encounter collocation errors, but having worked with non-native speakers I have seen the use for a program such as this. I think a system like AwkChecker would be a great tool to have in any text editor, something to go alongside already existing tools like spelling and grammar checking.

An interface for targeted collection of common sense knowledge using a mixture model

Authors:
Robert Speer MIT CSAIL, Cambridge, MA, USA
Jayant Krishnamurthy MIT CSAIL, Cambridge, MA, USA
Catherine Havasi Brandeis University, Waltham, MA, USA
Dustin Smith MIT Media Lab, Cambridge, MA, USA
Henry Lieberman MIT Media Lab, Cambridge, MA, USA
Kenneth Arnold MIT Media Lab, Cambridge, MA, USA

Summary:
This paper discusses a common sense knowledge gathering system constructed by the authors, which users experience as a 20 Questions-style game while it gathers information from them. Below is an example of their program running:

Figure 1. Open Mind learns facts about the concept “microwave oven” from a session of 20 Questions.

The reason for using a '20 Questions' game to collect this common sense knowledge is based on studies showing that users are not willing to freely contribute information unless they can be enticed somehow, such as by entertainment. There have been previous common sense acquisition games, including Peekaboom and the ESP Game, both of which pitted two users against each other in an attempt to label images with the same description. The ESP Game focused primarily on generic labels for images, while Peekaboom focused on particular components of images. There have also been a couple that work by matching phrases and words that generally correspond to or describe each other, such as 'horse' and 'it has a hoof'. A couple of examples of these games are Verbosity and Common Consensus.

The model that the authors used to collect common sense knowledge was built on a concept/relation representation similar to ConceptNet's data model. With this model they could determine certain 'features', which simplify the algorithm in their 20 Questions game; a feature is described as 'a pairing of a concept with a relation which forms a complete assertion when combined with a concept.' Through these features the authors were able to graphically show the AnalogySpace of these concepts and relations in a clustering model.
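
To show what that concept/relation representation might look like in practice, here is a tiny Python sketch where assertions are (concept, relation, concept) triples and a feature is a relation paired with one concept. The example assertions are my own, and this is only an illustration of the data model, not the paper's mixture-model machinery.

```python
from collections import namedtuple

# An assertion pairs two concepts through a relation, e.g. (microwave oven, UsedFor, heating food).
Assertion = namedtuple("Assertion", ["concept", "relation", "target"])

assertions = [
    Assertion("microwave oven", "UsedFor", "heating food"),
    Assertion("microwave oven", "AtLocation", "kitchen"),
    Assertion("stove", "UsedFor", "heating food"),
    Assertion("stove", "AtLocation", "kitchen"),
]

# A "feature" is a relation plus one concept; combining it with another concept
# completes an assertion, e.g. the feature (UsedFor, heating food) applies to "stove".
def features_of(concept, assertions):
    return {(a.relation, a.target) for a in assertions if a.concept == concept}

def shared_features(c1, c2, assertions):
    # Concepts that share many features end up close together in AnalogySpace-style models.
    return features_of(c1, assertions) & features_of(c2, assertions)

print(shared_features("microwave oven", "stove", assertions))
```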

The authors also went into great detail in demonstrating equations and algorithms behind their common sense acquisition models. Below is another example of their game running:

Figure 3. Using the 20 Questions interface to develop a concept.

Later on in the paper they started to discuss some of the interface design objectives for their system. The primary goals are listed below:
  • Feedback: the authors want a system that shows what the computer is currently thinking, so that the user can see how their responses are directly affecting the computer's reasoning.
  • User enjoyment: they want the interface to be as enjoyable as possible to keep users interested in playing.
  • Minimalism: the game shouldn't stand alone or stand out in any situation, but should be there when needed and run seamlessly with the website.
  • Effortless acquisition: they don't want users to feel that they have to work at providing information; instead it should appear 'effortless'.
A user study was done in which the authors presented an online comparison between the current manual Open Mind interface and the newly designed 20 Questions one. Users operated each system and afterward were asked a sample of questions to determine their enjoyment and the effectiveness of each system. The results showed that the 20 Questions system out-performed the current Open Mind one on fields such as “I would use this system again”, “I enjoyed this activity”, and “The system was adapting as I entered new information”. Apart from that, the 20 Questions system took considerably less time to complete, as seen in Figure 8.

Figure 7. The mean and SEM of the survey responses, grouped by test condition.

Figure 8. The mean and SEM of the elapsed time to complete each task.

The results of their study and the conclusions they drew were that with this interface users will be more willing to contribute data, and that this will lead to better knowledge acquisition.

Discussion:
The overall point of this paper is that the authors designed a new interface for data acquisition to replace the current Open Mind one, and that their new system is based on the '20 Questions' game. Despite this relatively simple point, they somehow found a way to express it across 10 pages. I did like that they modeled their system after a game, because it is quite normal to expect people not to want to contribute unless they can get something out of it, in this case some mild entertainment. Overall I did not find this paper interesting, but perhaps it has uses I can't foresee.

Wednesday, February 17, 2010

TypeRight: a Keyboard with Tactile Error Prevention

By Alexander Hoffmann, Daniel Spelmezan, Jan Borchers

Summary:

TYPERIGHT is a newly designed keyboard with variable key resistance that helps prevent users from pressing keys that would lead to misspelled words or incorrect grammar. The authors' goal was to implement a preventative approach to typing errors that would hopefully increase the user's efficiency in the long run compared to after-the-fact correction like spell checking and highlighting of possible mistakes. They compared their method to Apple's iPhone: just as the iPhone minimizes or enlarges keys to prevent mistyping, their keyboard makes keys harder to press when they would lead to incorrect spelling or grammatical mistakes.

The keyboard's design uses solenoids that, when activated, create magnetic resistance for their associated keys. Therefore, if a user tries to press a key that would lead to a mistake, they face increased resistance in that key. Below are a couple of pictures showing the top layer of the keyboard, followed by a cross-section that shows how the solenoids interact with the keys.


Figure 1. TYPERIGHT: Full-keyboard prototype.



Figure 2. TYPERIGHT: Cross section of a key. The solenoid controls key resistance.
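
As a rough illustration of the prevention idea described above (my own sketch, not the authors' actual implementation), a dictionary prefix check could decide which keys to stiffen: given what has been typed so far in the current word, any next letter that cannot extend the prefix toward some dictionary word would get the higher resistance level. The word list and resistance values here are placeholders.

```python
DICTIONARY = {"type", "typing", "typo", "right", "rigid"}  # stand-in word list

def allowed_next_letters(prefix, dictionary=DICTIONARY):
    """Letters that keep the current word prefix on track toward some dictionary word."""
    return {w[len(prefix)] for w in dictionary
            if w.startswith(prefix) and len(w) > len(prefix)}

def key_resistance(prefix, key, high=1.0, low=0.0):
    # Keys that would break every possible word get the high (stiff) resistance level.
    return low if key in allowed_next_letters(prefix) else high

# After typing "ty", 'p' stays easy to press while 'q' stiffens.
for key in ("p", "q"):
    print(key, key_resistance("ty", key))
```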

The authors conducted a user study in which they compared the performance of an after-the-fact correction method to their TYPERIGHT keyboard. In their study with novice users, the results showed that 'on average, the number of backspace key presses was reduced by 46% in the tactile feedback condition' and 'tactile feedback reduced the number of mistyped letters by 87%'. However, in terms of efficiency the results showed that 'Average execution times were similar in both conditions (522 s with tactile feedback vs. 520 s with graphical feedback)', and 'Questionnaires confirmed that 75% of participants did not consider TYPERIGHT to be a “big changeover compared to typing on a standard keyboard”'.

The authors also did one run with an expert user who had practiced the TYPERIGHT keyboard over a 3-month period and the results of his efficiency test were much better:
'The execution time with the first text was 10% faster than with the second text with graphical feedback. With tactile feedback activated, 16 corrections were necessary, compared to 23 corrections with graphical feedback (a 44% increase). With graphical feedback, the user typed 78 words that were not part of the dictionary, compared to zero(!) words with tactile feedback on the first text.' The authors stated that this further supports their claim that TYPERIGHT can effectively increase performance, but they also noted that further study with expert users is needed to solidify the results.

Discussion:
Reading this paper made me think of the pressure-sensitive keyboard from the UIST group. One of the major differences, besides general function, is that TYPERIGHT does not have a practical commercial model yet: the design in this paper required a large modification to the keyboard, putting a solenoid between the keyboard body and the key cap for each key. One thing that wasn't covered in this paper was a comparison between TYPERIGHT and auto-correction, although the authors said that was a possibility for future work.

Effects of Real-time Transcription on Non-native Speaker’s Comprehension in Computer-mediated Communications

By Ying-xin Pan, Dan-ning Jiang, Michael Picheny, Yong Qin

Summary:

This paper studied how real-time transcription affects a non-native speaker's comprehension in computer-mediated communication. The authors hypothesized that a user could better understand audio and audio/video communications if a real-time transcription was provided. They also looked at whether providing a transcription history, instead of just a 'line-by-line' stream, would be more beneficial. The three questions they looked to answer in their study were:
  1. Does real-time transcription help non-native speakers improve comprehension in multilingual communication utilizing audio and video conferencing systems?
  2. How do users perceive real-time transcription in terms of usefulness, preferences and willingness to use such a feature if provided?
  3. How do users allocate their cognitive resources when presented with multiple information sources?
They described their experiment as a 2x3 design: they tested two modalities (audio and audio+video) and three transcription settings (no transcription, streamed transcription, and streamed transcription with history). They had 48 non-native English-speaking students, whom they split up to evenly test the 2x3 design. For the communication examples they used short 3-5 minute English-language clips, followed by a timed question-and-answer session and rounded off with a questionnaire.

The authors were looking to measure several qualities of the users' experience: performance on the questions, confidence in their answers, user experience with the system, and cognitive resource allocation. The results of their study showed that transcription 'had a significant main effect on both performance (F (2, 92) =11.28, p<.01) and confidence (F (2, 92) =13.69, p<.01)'. They also noted that providing a transcription history did not really improve user performance, and that performance and confidence with transcription were not affected by modality. The authors lastly stated that for possible future work they would like to investigate automated speech recognition '(with the associated imperfections) as a practical alternative to human transcriptionists'.

Discussion:
This paper pretty much demonstrated what I already thought was common knowledge. I think it is widely believed that if a user is listening to communication in a non-native language, transcribed text will help their understanding. I mean, isn't that what subtitles are (although subtitles generally aren't transcribed in real time)? I think their study did have some small successes; I didn't know that adding video to audio would not really aid understanding and comprehension. I also thought it was interesting that even given the option to scroll the transcription history, the users did not really use it; the streaming transcription was sufficient.

Sunday, February 14, 2010

The Inmates Are Running The Asylum (Chapters 1-7)

Summary:

This book has primarily been focused on interaction design. In these first few chapters several topics related to this general theme have been discussed, including the encroachment of computers into all other devices, the struggle of interacting with computers, the value of interaction design on a company/market scale, the typical design problems in software and software development, the importance of interaction design to the success of a product, the influence of software engineers and their misguided mindset about design, and the psychology of the common computer programmer.

The author has used several analogies to illustrate his ideas, a very common one being his 'dancing bear' analogy. The 'dancing bear' illustrates a user's tolerance of bad design as long as the software satisfies its primary purpose, i.e. the crappy dancing of the bear versus the fact that a bear is able to dance at all. Another analogy shares its title with the book: 'the inmates are running the asylum'. This refers to the fact that interaction design is generally left up to the software developers, and because of the influence of their decisions they are often left in charge of the outcome of a product, despite their lack of understanding of the importance of design at every level of development.

In the later chapters the author has included more and more excerpts from colleagues and friends who have had first-hand experience with the need to invest in interaction design. Several of these cases were about a possible product that, although 'capable' and 'viable', lacked the 'desirable' trait that users value immensely. In these examples the companies that did not implement a strong sense of design either saw their product fall short of ever being released, or saw it fall by the wayside in the market as customers switched to the competition's product.

Discussion:
So far this reading hasn't been too bad. It really follows suit with the first book we read. I guess considering this class is based on interaction, that is most likely the content of all the books we will be reading. On a template scale, though, this book is really similar to the prior one: discussing interaction, the focus on the user, including several analogies and examples to illustrate the points, etc. I wonder if this book will conclude with how to incorporate good interaction design into software development, because so far it has only been highlighting the issues surrounding it. One last note I want to add is that I think we should have read this book in its entirety instead of splitting it up. It took me about four hours to read these first seven chapters, and if we have to wait three weeks to read the rest I'm going to have a hard time remembering all of this. Not to mention we are breaking it up with another book to read in between, which I foresee causing me to lose my grasp on the ideas of this book. Oh, and another part I particularly liked was his discussion of Microsoft, Apple, and Novell: their pros and cons as businesses and the resulting status of where they are now because of their experiences.

Thursday, February 11, 2010

Learning from IKEA Hacking: “Iʼm Not One to Decoupage a Tabletop and Call It a Day.”

By Daniela Rosner and Jonathan Bean

Comment:
Lupfer, Nicholas

Summary:

This paper discussed IKEA Hackers and the role the internet plays in their lives. IKEA Hackers are people who take relatively cheap, mass-marketed IKEA furniture and customize it for personal use or for creative purposes. The authors of this paper interviewed several IKEA Hackers to understand what they do, why they do it, and how the internet and particular forums and websites play a role. IKEA Hackers are typically do-it-yourself (DIY) people. They like to manipulate already existing products to fit their wants. Not only does this give a more personal feel to the furniture they modify, but the act of hacking itself gives them an artistic and creative outlet.

The larger part of this paper discussed how the internet influences IKEA Hackers. Although IKEA hacking is generally a personal experience, many hackers like to post their ideas online for others to appreciate and learn from. Likewise, these online resources serve as good sources of ideas and starting points for other IKEA Hackers picking up the hobby. A couple of common websites used by IKEA Hackers are IKEAHacker.com and Instructables.com. Below is a picture of a typical IKEA Hacker's workspace:


Figure 2. A common site for IKEA hacking: a residential kitchen. IKEA cabinets are “hacked” through modifications or the addition of custom components. One participant blogged his kitchen remodeling project.

Discussion:

The main reason I read this paper is that IKEA hacking was mentioned in my assigned reading. In the previous paper I read, IKEA hacking was given as an example of sustainable interaction design: the IKEA furniture was often used and modified for reuse by the user, who inherently reflected the principles of an 'everyday designer' mentioned in that paper. This paper, however, was far less interesting than the first, and although it was short (a preferable length), it had relatively little content beyond introducing the concept of an IKEA Hacker. IKEA hacking is an interesting idea, but an entire paper devoted to it, with emphasis on how online resources are used, felt unnecessary.

Wednesday, February 10, 2010

A Sustainable Identity: The Creativity of an Everyday Designer

By Ron Wakkary and Karen Tanenbaum

Summary:

This paper brought to the surface the inadequacies of interaction design, in that it is consumer-based and not designed for sustainability. The paper studies how the end-user is an 'everyday designer' who adapts products to fit their needs (design-in-use), and argues that several of these sustainable principles could be applied to interaction design. The core of the paper was an ethnographic study involving three families, in which the authors studied how the families redesigned and reused items in their households to fit their changing needs.

A lot of the study for this paper was based on Blevis's principles for SID (Sustainable Interaction Design), which in short are: disposal, salvage, recycling, remanufacturing for reuse, reuse as is, achieving longevity of use, sharing for maximal use, achieving heirloom status, finding wholesome alternatives to use, and active repair of misuse. Three specific examples were included in this paper, each accompanied by its possible implications for interaction design.

The first example in this paper was a planner book, which showed the SID principles of promoting renewal and reuse, as well as linking invention and disposal. One of the participants, Lori, had a planner in which, instead of using the template layout and calendar features of the original design, she adapted it to fit her needs by using it to take notes and make lists, with the additional use of sticky notes that could easily be discarded after use. The possible implications of this for interaction design are:
  1. Design the capacity for users to overlook the formalized design and still find the artifact usable in ways equal to or greater than the original design intentions for use.
  2. Incorporate materials and software qualities to allow for renewal and invention.
The planner book from the above example can be seen below:


Figure 1 Lori shows how a sticky note allows reuse of a page of her planner.

The second example given in this paper was a recipe book, which was originally a journal belonging to the participant's mother and which now was not only a recipe book but was also used to store very important information like Christmas lists. This example illustrates the principle of promoting quality and equality. Its possible interaction design implications are:
  1. Consider collaboration to include the broader notion of sharing, e.g. conceive of a Personal Digital Assistant (PDA) designed for maximal use by a family and therefore is easily shared;
  2. Consider that longevity in interactive technology is not only a result of usefulness and that we design emotional qualities into artifacts.
The recipe book of this example can be seen below.


Figure 2 Kerry's recipe book was originally her mother's journal and has been in use for over a decade.

The third example presented in this paper was a family calendar placed on the refrigerator. The hope was that the whole family would use it and that it would be an easy way for them to see everyone's schedule. However, in some cases this was not practical. Although it worked well for the mother, who was involved in most of the family's activities, the father was reluctant to put his running schedule on the calendar since it really only affected him. In another case the daughter chose not to put some of her schedule on the calendar since she preferred to keep it private. The sustainable interaction design principle illustrated by this example was sharing for maximal use, but in several contexts this conflicted with de-coupling ownership and identity, as seen in the withholding of some information from the calendar. The possible implications of this example are:
  1. Design for maximal sharing
  2. Allow for low risk ad hoc and public testing/experimentation
The calendar mentioned in this example is shown below:


Figure 3 Timmie placing a sticker on the family calendar.

The results of the authors' ethnography led to the formation of design-in-use principles that work alongside Blevis's sustainable interaction design principles. It is important to note, however, that the authors believe the focus should be on the use of the object and its reuse/adaptability, rather than on the material properties of the design as Blevis emphasizes.

The design-in-use principles developed in this paper are:
  • Design-in-use involves a high degree of creativity that in the best sense of the word makes a user unpredictable.
  • Design artifacts become resources for further creativity as an outcome of design-in-use.
  • Design-in-use qualities emerge over time as do design actions.
The paper also discusses how the user should be viewed differently than just a consumer. As stated in the paper, 'We claim that the everyday designer represents a sustainable identity for the user, one that is different than the traditional HCI construct.' The differences they mentioned are:
  • From consumer to creator
  • From over-determined to underdetermined
  • From user to designer
In conclusion, the paper argues that if the user is viewed as an everyday designer rather than a consumer, then with these principles for sustainability and a focus on design-in-use, interaction design can be done in a way that promotes sustainability.

Discussion:

This paper brought an interesting and abstract idea to interaction design: that products should be modeled with sustainability in mind. I think this is of ever-growing importance in today's world, as people are more and more focused on conservation. This makes me think of some of the electronic devices that I have. When they go bad, what happens to them? You can't really adapt them to work for other scenarios, and fixing them can be time-consuming and costly, so most of them just get thrown out. The bad thing about this is that in most cases that was the way they were designed: for use and then discard. I think the shift mentioned in this paper, viewing the user as an everyday designer rather than a consumer, could really have an impact on how things are designed and the type of products we could potentially get as a result.

Saturday, February 6, 2010

Team Analytics: Understanding Teams in the Global Workplace

By Jan H. Pieper, Julia Grace, Stephen Dill

Summary:
This paper described the web application Team Analytics and a user study done for the program. The Team Analytics application is designed to help users who work in groups to find important information about the members without having to see each one's individual profile. It is composed of several widgets that help relay information to the user that otherwise might be painstakingly hard to gather and analyze by oneself. Below is a screenshot showing the program:


Figure 1. Screenshot of the Team Analytics application showing information about ten people.

As can be seen in the picture, the program uses several widgets to display the information. These include a photo gallery at the top; an organization chart that shows the hierarchy of the people and other information such as their location or company; a 'timezone pain' chart that helps display optimal times for organizing meetings or conversations; a pie chart showing the distribution of people across the group, so that the user can see how many belong to one department or one company; and, lastly, the 'bizcard section', which displays small information boxes for individual people, each linked to that person's full profile.
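
To give a feel for the kind of computation a 'timezone pain' view rests on, here is a small Python sketch that finds the UTC hours during which every team member is inside local working hours. The 9-to-5 window, the UTC offsets, and the team itself are my own assumptions rather than anything from the paper; an empty result is exactly the "pain" such a widget makes visible at a glance.

```python
WORK_START, WORK_END = 9, 17  # local working hours, an assumption

# UTC offsets (in hours) for a hypothetical distributed team.
team_offsets = {"New York": -5, "Taipei": 8, "London": 0}

def working_hours_utc(offset):
    """The set of UTC hours during which a member is inside local working hours."""
    return {(h - offset) % 24 for h in range(WORK_START, WORK_END)}

def common_meeting_hours(offsets):
    # Intersect every member's working hours to find mutually acceptable UTC hours.
    hours = None
    for offset in offsets.values():
        member = working_hours_utc(offset)
        hours = member if hours is None else hours & member
    return sorted(hours or [])

print(common_meeting_hours(team_offsets))  # may well be empty: that is the pain
```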

The Team Analytics program was deployed in February 2007 and has since been well received. A user study was conducted by a third party in which the original authors were invited to contribute some questions which were used to test the satisfaction and use of the program. A large number of participants reported using the application weekly and a good number even daily. The overall satisfaction of the program was around 90%. Some of the more detailed parts of the study reported on the individual widgets themselves.

The most popular widgets were the organization chart, bizcard section, and email plugin, followed by the picture gallery, timezone pain, and attribute pie chart. The widget with the least satisfactory rating was the timezone pain, which came as a surprise to the authors. Based on the comments given, they reasoned that even though people found the widget very useful, it was confusing. Some of the future work would be to address the requested changes, such as making the timezone pain less confusing and perhaps finding a better scheme than using colors as the identifying property. Another thing they are looking into is integration with Sametime, their corporate instant messaging system.

Discussion:
I liked this paper, and I like that the technology discussed is not just research but is actually already out in the real world being used. I found it intriguing that such a relatively simple application was so overwhelmingly liked. I also greatly appreciate that the program was designed for the user's convenience and that some of the work done after the original deployment was to integrate it into email to make it even easier to access. I would like to think that this is similar to something I could be working on in the future.

An Exploration of Social Requirements for Exercise Group Formation

By Mike Wu, Abhishek Ranjan, and Khai N. Truong

Summary:
This paper discussed the need for a solution to finding exercise partners and the characteristics such a system would need in order to function appropriately. It started off detailing some of the positive aspects of exercising, followed by the study the authors conducted. Their study was done in two parts: an online questionnaire with approximately 100 participants, followed by two small focus groups to reflect on the questionnaire results and bring in more specific data. The main goal of the study was to answer the following questions:
  1. Do people who exercise have partners? If so, how did they find them?
  2. If people do not have exercise partners, what are the reasons?
  3. What happens when people do not have an exercise partner?
  4. What do people look for in their ideal exercise partner?
  5. What information would people be willing to share to find compatible exercise partners?
After the initial questionnaire phase, the two focus groups were convened to get a better understanding of some of the answers given. With these focus groups the authors were able to narrow down the underlying issues, such as what characteristics are involved in finding an ideal exercise partner and what information people are willing to share in order to do so. The conclusion of their work is best described by this excerpt from the paper:

"We found that for our participants, (1) collaboration among exercise partners is a two-phase process: discovery of common activities and subsequent collaboration, (2) there are temporal variations in privacy levels in opportunistic types of exercise collaboration, (3) lack of a partner can affect the perceived quality of the exercise experience, (4) skill range, location, and schedule similarity are key criteria for compatible partners, and (5) there is a willingness to share some personal information to enable spontaneous exercise."

Discussion:
I am glad that a study is being done to see what kind of system is needed to help people find exercise partners. I think exercising is becoming increasingly important, especially with the health issues our nation is facing, and some of the points they made about the benefits of working with a partner are well justified. I think a paper like this could easily lead to some sort of social site for finding activities and exercise partners by locality, or perhaps a feature/plugin on existing social sites like Facebook.

Wednesday, February 3, 2010

(Perceived) Interactivity: Does Interactivity Increase Enjoyment and Creative Identity in Artistic Spaces?

By Amy L. Gonzales, Thomas Finley, and Stuart Paul Duncan

Summary:
This paper tested whether user interaction correlates with user satisfaction, and whether interaction promotes a self-concept of the user as creative. The goal of the study was to test two research questions in an experimental context:
  1. How does interactive art impact user satisfaction?
  2. How does interactive art shape the self-concept of the user as creative?
The authors believed that interaction with an installed art system would give users more satisfaction and promote a self-concept of creativity, as opposed to a non-interactive art installation. The art installation they used was a room that hosted two participants at a time and played a combination of musical sounds, which for the interactive users could be changed through a set of physical motions captured by Wiimotes. The user study they conducted tested over 71 pairs of participants, who were told to experience the art installation and then later reflect on their experience through a questionnaire.

The results of the study showed that users who reported that the system was interactive also reported enjoying it more than those who could not interact with it. However, both the interactive and non-interactive participants reported relatively similar results in that they didn't feel more creative after experiencing the system. Some of the possible explanations the authors gave were that the art system did not offer a wide enough range of influence to truly let the user feel creative; that users were never prompted to try to be creative while experiencing the system; that the self-concept of creativity was confined to the experience with the installation and did not carry over afterward; and, finally, 'that interactive art does not actually induce a sense of creativity as otherwise presumed'.

Discussion:
I think the study they conducted was quite good. I do agree with their conclusion that an interactive art installation is more enjoyable than a non-interactive one. Another thing I liked about this study is that they tested using musical sounds, because I believe music is a great medium for testing things such as emotional experience and creativity. I do think their second question was answered correctly by the user study, and that the self-concept of creativity is confined to the actual experience with the art installation. If they ran the test again but expanded the user's ability to influence the music, I believe they would see better results for the first question and similar results for the second, unless they prompted the users to be creative.

Interactivity Attributes: A New Way of Thinking and Describing Interactivity

By Youn-kyung Lim, Sang-Su Lee, Kwang-young Lee

Summary:
This paper described the authors' research on the correlation between interactivity attributes and emotional effects. The paper discussed testing different interactive qualities that they believed were concrete and could be measured. The aim of their research, as stated in the paper, was to 'develop a set of attributes that works as a language to describe the shape of any interactivity of an interactive artifact.' The seven attributes they believed could be described are displayed below, each with its opposing counterpart, totaling fourteen measurable attributes of interactivity.

  1. Concurrency (concurrent-sequential)
  2. Continuity (continuous-discrete)
  3. Expectedness (expected-unexpected)
  4. Movement range (narrow-wide)
  5. Movement speed (fast-slow)
  6. Proximity (precise-proximate)
  7. Response speed (delayed-prompt)
Here is a picture showing a couple of the interactivity attributes:



The research done also included a user study which tested the two main questions they were looking to answer:
  1. Are interactivity attributes perceivable as we perceive the attributes of physical materials?
  2. Do interactivity attributes have meaningful emotional effects as other physical materials have?
The user study tested participants through an online survey in which they were tasked with answering questions about the perceivable qualities of Flash prototypes representing the fourteen attributes, and about any emotions that could be used to describe them. The results were analyzed using Wilcoxon's paired signed-rank test and yielded a positive result showing that 'all the interactivity attributes showed significance'. In terms of the emotional aspects of these interactivity attributes, there was also a positive result showing that emotional quality can be associated with different interactivity attribute values.
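
For reference, a paired signed-rank test of the kind the authors report can be run in a couple of lines of Python; the ratings below are made-up placeholders standing in for per-participant scores of one prototype pair, not the study's actual data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired ratings of the same prototype under two attribute values
# (e.g. "prompt" vs. "delayed" response speed), one pair per participant.
prompt_ratings  = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]
delayed_ratings = [3, 4, 4, 2, 3, 4, 3, 2, 4, 3]

statistic, p_value = wilcoxon(prompt_ratings, delayed_ratings)
print(f"W = {statistic}, p = {p_value:.4f}")
```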

Discussion:
This paper was interesting in that it analyzed the emotions behind interactivity, concluding that there are concrete interactivity attributes that can be used to describe the feelings felt when manipulating an interactive object, and that users perceive a mapping between an interaction and a material property. I liked the idea behind this paper, but I don't feel they presented much evidence or even a very thorough principle. I believe their conclusions were correct, but the work really only scratched the surface of the topic, and further investigation is needed to present a truly novel concept.

Sunday, January 31, 2010

The Design of Everyday Things

by Donald A. Norman

Summary:
As can be inferred from the title, this book covers design principles: what makes a design good or bad, with numerous examples of each so that the reader can understand what makes a good design, why some designs are not good (e.g. on purpose, or through other priorities such as price), and why, if something seems simple but using it is frustrating or error-prone, it is probably the design that is at fault. Some of the key points the author makes about what distinguishes a good design are reflected in his 'Seven Principles for Transforming Difficult Tasks into Simple Ones' (p. 188):
  1. Use both knowledge in the world and knowledge in the head.
  2. Simplify the structure of tasks.
  3. Make things visible: bridge the gulfs of Execution and Evaluation.
  4. Get the mappings right.
  5. Exploit the power of constraints, both natural and artificial.
  6. Design for error.
  7. When all else fails, standardize.
The book was pretty thorough on the topic and went into great detail on each of these design attributes, including examples from the author's own experience and photos/diagrams to illustrate his arguments.

Discussion:
The book had some intriguing arguments as to what makes a good design and why so many objects out in the world today suffer from poor design. I liked that in most of his examples he countered the obvious design flaws with logical reasoning as to why they existed. I liked that he continually referred to other material in the book either prior to or after what was currently being read, in that he was tying all of his ideas together. It would have been easier for me had I read all of the book in one or two sittings, but instead I spaced out the reading into perhaps five or six sessions so some of his references to prior material were lost on me since it had already left my memory. The included photos and diagrams helped reinforce his statements, but I would have still preferred more examples and evidence of poor design rather than related stories that were told to him by friends/coworkers.

Ethnography: Traffic Trends Concerning Turn Signals

This ethnography, done by Zach Edens and me, is going to study the correlation between cars and the use of turn signals. The three main aspects of the car that we want to record are:
  • Type
  • Make
  • Color
After getting this data we can easily compare makes with each other to see which ones used their turn signals more or less, and we can do the same comparisons for type and color. After doing these initial measurements, we want to move into a more qualitative look and compare things such as:
  • Length of turn signal
  • Popularity of make
  • Average price of car per make
  • Color and its association with personality type
  • The use of blinkers when people are alone or surrounded by other cars

That sums up the majority of the study, but we also have some notes on how we want to conduct it:

  • We are looking into finalizing what qualitative comparisons we want
  • We want to put a minimum on the number of cars that are required to enter the study to represent that make.
  • We are thinking of doing the tests during the day, afternoon most likely on a weekend.
  • In order to best capture the data we are thinking of using some tools such as binoculars and perhaps a video camera.
  • With the two of us working on this, we were thinking of using two different viewpoints to help reinforce the data.
  • Another thing we are considering is doing the test not only at the exit off 2818 onto University, but perhaps at an intersection with either traffic lights or just stop signs.
  • The last thing we wanted to make note of is to include lots of sources to back up our qualitative comparisons, especially since some will be based on assumptions.

Tuesday, January 26, 2010

A Reconfigurable Ferromagnetic Input Device

By Jonathan Hook, Stuart Taylor, Alex Butler, Nicolas Villar, Shahram Izadi

Commented on:
Bill Hamilton, Jacob Faires

Summary:
The focus of this paper was a ferromagnetic hardware device that senses changes in magnetic fields and interprets them as input. Another key feature is that the device is 'reconfigurable', in that once set up you are not stuck with that input method. The main setup presented in the paper used a set of sensor coils surrounding permanent magnets, with a deformable ferromagnetic bladder laid on top of the coils to serve as the input surface. When pressure was applied to the bladder, it displaced the magnetic field of a permanent magnet, inducing a small voltage in the sensing coil that could be measured as input. In addition to this design, two possible applications were discussed: a virtual sculpting application and a musical synthesizer. Below are a couple of pictures, one showing the device with a ferromagnetic bladder above it, and the other depicting the device being used with a sculpting program.
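
To make the sensing idea concrete, here is a toy Python sketch of how induced-voltage readings from a set of sensor coils might be turned into a per-coil pressure estimate by comparing against a calibrated idle baseline. The coil count, voltages, and noise threshold are all invented for illustration; this is only my reading of the mechanism, not the authors' actual signal processing.

```python
def calibrate_baseline(samples):
    """Average several idle readings per coil to establish a no-pressure baseline."""
    n = len(samples)
    return [sum(frame[i] for frame in samples) / n for i in range(len(samples[0]))]

def pressure_map(reading, baseline, noise_floor=0.02):
    """Per-coil deformation estimate: deviation from baseline, with small noise zeroed out."""
    deltas = [abs(v - b) for v, b in zip(reading, baseline)]
    return [d if d > noise_floor else 0.0 for d in deltas]

# Hypothetical voltages from a 4-coil sensor: two idle frames, then a press near coil 2.
idle_frames = [[0.50, 0.48, 0.51, 0.49], [0.51, 0.49, 0.50, 0.50]]
pressed     =  [0.50, 0.60, 0.72, 0.51]

baseline = calibrate_baseline(idle_frames)
print(pressure_map(pressed, baseline))  # largest value at the coil nearest the press
```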

Discussion:
The reconfigurable quality of the ferromagnetic input device stood out in this paper above all else. Instead of having only a set application of the device, the user could adjust and modify it to fit any situation they wanted. If this became a common household product I imagine that as time progressed there would be many different products that would use this device, but would require the user to reconfigure it. It made me think of a Swiss army knife that has all of those different tools in it, so that no matter the situation all you need is the Swiss army knife and you can get it done. I see the same thing for a device like this, where no matter what software comes out you can still use the ferromagnetic device to operate and use it.

A Practical Pressure Sensitive Computer Keyboard

By Paul H. Dietz, Benjamin Eidelson, Jonathan Westhues and Steven Bathiche

Summary:

The focus of this paper was on adding pressure sensitivity to computer keyboards. The design presented in the paper is a relatively simple adjustment to the already common keyboard design, and this simple, low-cost adjustment would allow for possible mass marketing of the device. The modification alters the contact and spacer layers: the pressure-sensitive design requires larger top and bottom contacts as well as a larger spacer layer between them, so that the harder the user presses a key, the more contact is made and the stronger the signal that registers. Some of the possible applications mentioned in the paper were gaming and emotional instant messaging. A pressure-sensitive keyboard could allow gamers to scale movement actions such as running and jumping by how hard they press the appropriate keys. In emotional instant messaging, font size could easily be scaled by how hard the user presses the keys. A pressure-sensitive keyboard could also have many effects on typing in general, such as assigning more functions to single keys, for example allowing backspace to delete letters, words, or lines at a time depending on how hard the key is pressed.
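
As a quick illustration of that backspace example (my own sketch, not anything from the paper), pressure could be bucketed into bands that select progressively larger deletions; the thresholds and action names here are assumptions.

```python
# Pressure is assumed to be normalized to 0.0-1.0 by the keyboard driver.
BACKSPACE_BANDS = [
    (0.33, "delete_character"),
    (0.66, "delete_word"),
    (1.00, "delete_line"),
]

def backspace_action(pressure):
    """Map a normalized key pressure to an editing action (thresholds are illustrative)."""
    for upper_bound, action in BACKSPACE_BANDS:
        if pressure <= upper_bound:
            return action
    return BACKSPACE_BANDS[-1][1]  # clamp anything above 1.0 to the strongest action

for p in (0.1, 0.5, 0.9):
    print(p, backspace_action(p))
```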

Below is a picture showing the difference between a normal keyboard (left) and the new pressure-sensitive design (right).

Discussion:

Unlike some of the other papers I read, this one seemed the most commercially viable. Even though the design is simple, I liked how practical it was and how easily the already common keyboard design could be modified with this adjustment. This paper really presented something that could be easily integrated into my everyday life. A pressure-sensitive keyboard could allow more intuitive typing and more macros and functions based on key combinations. Of all the papers presented so far in class, this is the one I would most like to try, because it is the only one I can see becoming commonly available in the near future.

Thursday, January 21, 2010

TapSongs: Tapping Rhythm-Based Passwords on a Single Binary Sensor

Summary:
This paper discussed user authentication based on a single binary sensor and tapped rhythms, as opposed to textual passwords. It discussed some benefits of such a system, like not needing any screen or keypad to enter the information; instead you only need a binary sensor such as a button or switch. The paper mostly focused on the implementation of tapped rhythms. In the author's approach, the rhythm of a short jingle, such as 'Shave and a Haircut, Two Bits', can be used as a password for logging into a system. To do this, a user selects a jingle or composes their own, practices it so that a model can be created, and then enters the rhythm to log in. They also had a way for the model to adapt with each entry so that it became more accurate for the individual. In their testing, users were able to log in successfully 83.2% of the time. The paper also described a user study that tested eavesdropping and password theft and an impostor's ability to log in; in both cases the success rates were small, approximately ten percent in the first case and nineteen in the second.
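
As a simplified sketch of how such a rhythm model might work (the paper's actual classifier is more sophisticated than this), one can store the mean and spread of each inter-tap interval from the practice taps, accept a login when every interval lands within a tolerance band, and nudge the model toward accepted attempts. The intervals and tolerance below are made up.

```python
import statistics

def train_model(practice_attempts):
    """Each attempt is a list of inter-tap intervals (seconds); build per-interval mean/stdev."""
    columns = list(zip(*practice_attempts))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def matches(model, attempt, k=3.0):
    """Accept if every interval is within k standard deviations of its trained mean."""
    if len(attempt) != len(model):
        return False
    return all(abs(x - mean) <= k * max(stdev, 0.02) for x, (mean, stdev) in zip(attempt, model))

def update_model(model, attempt, weight=0.1):
    # Adapt the means toward accepted logins so the model tracks the individual user.
    return [((1 - weight) * mean + weight * x, stdev) for x, (mean, stdev) in zip(attempt, model)]

# Hypothetical intervals for a short jingle, recorded during practice.
practice = [[0.30, 0.15, 0.15, 0.30, 0.62, 0.31],
            [0.32, 0.16, 0.14, 0.28, 0.60, 0.30],
            [0.29, 0.15, 0.16, 0.31, 0.64, 0.29]]
model = train_model(practice)
login = [0.31, 0.15, 0.15, 0.29, 0.61, 0.30]
if matches(model, login):
    model = update_model(model, login)
    print("login accepted")
```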

Discussion:
This paper was pretty interesting. I hadn't ever considered other methods of authentication, but this one is neat. I myself often tap rhythms to songs I hear or play in my head, and moving that into a login scenario is pretty cool. I liked that their system had an adaptation technique so it became customized to the user's input, more accurately matching the model to them. I also like the simplicity of the binary sensor, and that password theft would be very different in this case. I think technology such as this could easily be applied to many things we use today, such as safeguarding a portable external hard drive by having a small button on the side that serves as the authentication means.

A Screen-Space Formulation for 2D and 3D Direct Manipulation

Summary:
This paper was about using a touch screen to move objects in a three-dimensional space using multiple contact points in direct manipulation. It discussed several aspects of this, including world-space transformations, minimization methods, three-finger rotations, ambiguous rotations, rotational exhaustion, and more. The main content focused on how to move the object with particular finger movements, such as using two contact points to define an axis and a third finger to rotate about that axis. Another example would be using one finger to select a point on the object and moving that finger to directly translate the object. Two fingers can be used to do things such as scaling the object by moving one finger closer to or further from the other, translating similarly to the one-finger movement, and so on. The direct-manipulation aspect means that the contact points are directly mapped to the object, so when the fingers move, those points on the object move with them. To accommodate the various movements a user could make, the object will translate, rotate, scale, or apply a combination of the three in order to achieve the desired result. The paper also discussed the authors' experience with the system and problems they encountered, such as ambiguous transformations and rotational exhaustion, as well as methods to minimize the effects of these, like using pressure, biasing, and many others.
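
For the two-finger case described above, the math works out to a standard similarity transform: solve for the scale, rotation, and translation that keep both fingers pinned to the points they grabbed. Here is a minimal Python sketch of that construction; it is not the paper's full screen-space least-squares formulation, just the intuitive 2D version.

```python
import math

def two_finger_transform(p1, p2, q1, q2):
    """Similarity transform (scale, rotation, translation) taking contacts (p1, p2) to (q1, q2)."""
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]   # vector between fingers before the move
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]   # vector between fingers after the move
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)
    # Translation chosen so the first finger stays glued to its grabbed point.
    c, s = math.cos(rotation), math.sin(rotation)
    tx = q1[0] - scale * (c * p1[0] - s * p1[1])
    ty = q1[1] - scale * (s * p1[0] + c * p1[1])
    return scale, rotation, (tx, ty)

def apply(point, scale, rotation, translation):
    c, s = math.cos(rotation), math.sin(rotation)
    x, y = point
    return (scale * (c * x - s * y) + translation[0],
            scale * (s * x + c * y) + translation[1])

# Pinch outward and twist: both fingers remain mapped onto the points they grabbed.
scale, rot, t = two_finger_transform((0, 0), (1, 0), (0, 0), (1, 1))
print(apply((1, 0), scale, rot, t))  # lands on (1, 1) up to floating-point error
```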

Here is an image of some three finger movements:



Discussion:
This paper turned out to be pretty interesting because I have some basic experience in graphics, so I understood most of what was being discussed, such as the projection and orthogonal views, the use of quaternions, and transformations such as scaling, translation, rotation, shear, etc. The direct mapping of the finger contact points to the object was interesting because I think that is the most intuitive way users would want to move objects; as the paper said, it lets them feel as if they are 'gripping' the object. I really liked how the author discussed his first experience with the system and the problems he encountered. The ambiguous rotations were interesting because they are something I wouldn't have thought of. I also liked that some of the ways they treated these problems were more like patches, since technically the system behaved correctly, just not as the user expected. I'm not really sure how this could be advanced for further use, but I liked the idea of using it for moving around a landscape as some of the images showed. I think using pressure adds another dimension to the system, allowing the user an alternative way to make the object move.