
Systems Design 348
User Centred Design Team Project

 

Interim Report #3: Final Report

 

Date: Tuesday, March 27, 2001
Submitted to: Professor Carolyn MacGregor

Submitted by:
Shawn Kavanaugh
Chris Klachan
Gerald Lai
Rhoda Lee
Dylan Lum

 

Table of Contents:

  1. Introduction

  2. Revised Interactive Systems Problem Statement
    2.1 Human Activity
    2.2 Users
    2.3 Level of Support
    2.4 Form of Solution

     

  3. Project Constraints
    3.1 Project Requirements

     

  4. Phase 1 UCD Methods
    4.1 WALKTHROUGHS
    4.1.1 Intent of Method
    4.1.2 Impact of Method on Prototype 1

    4.2 HEURISTICS
    4.2.1 Intent of Method
    4.2.2 Impact of Method on Prototype 1

    4.3 COMPETITIVE ANALYSIS
    4.3.1 Intent of Method
    4.3.2 Impact of Method on Prototype 1

    4.4 HIERARCHICAL TASK ANALYSIS
    4.4.1 Intent of Method
    4.4.2 Impact of Method on Prototype 1

  5. Phase 2 UCD Methods
    5.1 COMPETITIVE ANALYSIS USING HEURISTICS
    5.1.1 Intent of Method
    5.1.2 Impact of Method

    5.2 COMPETITIVE ANALYSIS USING DESIGN WALKTHROUGHS
    5.2.1 Intent of Method
    5.2.2 Impact of Method

    5.3 CARD SORTING
    5.3.1 Intent of Method
    5.3.2 Impact of Method

  6. Phase 3 – Lab-based Usability Testing
    6.1 Intent of Lab-Based Usability Testing

    6.2 Methodology

    6.2.1 Participants
    6.2.2 Procedure

    6.3 Results
    6.3.1 Keystroke Level Analysis
    6.3.2 Lab-Based Results
    6.3.3 Questionnaire Results

    6.4 Discussion
    6.4.1 Implications of Lab Observations
    6.4.2 Implications of Participant Feedback

  7. Final Specifications

Appendix A: Summary of Client Meetings

Appendix B: Summary of Task Allocations

Appendix C: Summary of Learning

 

Table of Tables

Table 1: Interactive Systems Problem Statement (ISPS) Component Summary

Table 2: Summary of user walkthrough of web usability questions using current gateway design

Table 3: Summary of user walkthrough of web usability questions using prototype design

Table 4: Results from Keystroke Level Analysis and Lab-based Usability Testing

Table 5: User Rated Usability

Table 6: Suggested List of Headings and Related Subheading Groups

 

Executive Summary

In response to a request from the Community Needs Assessment Group, the Systems Design 348 class embarked on a project to identify possible usability issues with Waterloo’s library gateway page. Given that this was an open-ended task involving the application of a variety of user-centred design concepts, a circular (i.e., repeated-revision) design approach was used to improve upon the numerous iterations made throughout the process. The progression of this project involved three main status reports (interim reports #1–#3). During this process, test methods advanced from discount usability to formal user testing, while prototypes evolved from low to high fidelity and functionality.

The initial assessment of the library gateway page involved four main techniques: heuristic evaluations, hierarchical task analysis, walkthroughs, and competitive analysis. Usability flaws were immediately apparent from these analyses. In general, they can be described as problems related to navigation and a failure to foster user learning. These are described in more detail in the Phase 1 UCD Methods section. The low-fidelity prototype at this stage served only to illustrate the design features suggested to address the usability concerns. Accordingly, user testing was required to determine whether these changes improved the usability of the gateway page. The six main design features were as follows:

  1. Proper user feedback for mouse-over menus

  2. A consistent navigation bar throughout the sub-layers

  3. Creating help and search options

  4. Use of appropriate graphics

  5. Appropriate wording and defining unknown acronyms/labels

  6. Eliminate the need to scroll down

Our prototype remained low-functioning during the second stage of our project; consequently, discount usability methods were employed both to test our proposed design features and to obtain user input on what users consider an appropriate webpage structure. This involved competitive analysis using heuristics and walkthroughs, and the card sorting technique.

The competitive analysis using heuristics served to further refine the changes proposed in prototype one. Additionally, these heuristic principles helped justify the changes. One of the notable recommendations that followed from this assessment was related to visibility, specifically the feedback given by the mouse-over features. According to the heuristic of visibility of system status (Nielsen and Mack, 1994), the webpage should keep the user informed about the system status. Consequently, we made menu items light up when moused over, informing the user which sub-items corresponded to which menu headings. The complete list of issues and recommendations based on this method can be found in the Impact of Phase 2 Methods section.

It should be noted that a comparative user walkthrough test was done after interim report #2, since it had been omitted from that report due to a miscommunication. At this point, our prototype was fully functional. This enabled us to compare the performance of users on the current webpage versus their performance on our prototype. The tasks performed by the users were identical to those used in our first walkthrough assessment (see Phase 2 UCD Methods for the procedure). Although the results of this test did not indicate drastic improvements in performance for the given tasks, we generally received positive feedback from users about improvements in the naming of menu items. We attributed the questions where errors were observed for both webpages to our lack of control over sub-layers (see Phase 2 UCD Methods for details).

The main revisions made as a result of the card sorting technique involved a reduction in the number of menus (from nine to seven) as well as the re-naming of menus. These menus were arranged in order of importance as ranked by users, with the highest-ranked presented first on the webpage. Card sorting was also useful in identifying the ‘jargon’ that was most foreign to users. This led to our recommendation for ‘pop-up’ descriptor boxes to help clarify this issue (details in Phase 2 UCD Methods).
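    The ranking step described above can be sketched as a simple aggregation: each participant assigns a rank to every candidate menu heading, and headings are ordered by mean rank. The sketch below is illustrative only; the heading names and rankings are hypothetical, not our card-sorting data.

```typescript
// Order candidate menu headings by mean rank across card-sorting
// participants. A lower mean rank means higher importance, so the
// heading is placed earlier on the page. Names/ranks are illustrative.

type Ranking = Record<string, number>; // heading -> rank from one participant

function orderByMeanRank(rankings: Ranking[]): string[] {
  const totals = new Map<string, { sum: number; n: number }>();
  for (const r of rankings) {
    for (const [heading, rank] of Object.entries(r)) {
      const t = totals.get(heading) ?? { sum: 0, n: 0 };
      t.sum += rank;
      t.n += 1;
      totals.set(heading, t);
    }
  }
  // Sort by mean rank, ascending (most important first).
  return [...totals.entries()]
    .sort((a, b) => a[1].sum / a[1].n - b[1].sum / b[1].n)
    .map(([heading]) => heading);
}

// Example: two participants rank three headings (1 = most important).
const ordered = orderByMeanRank([
  { Search: 1, Services: 2, About: 3 },
  { Search: 2, Services: 1, About: 3 },
]);
// "About" has the worst mean rank, so it is placed last.
```

    In practice ties would also need a tie-breaking rule (e.g., by median rank or by discussion within the team); the sketch simply keeps insertion order for ties.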

  1. Introduction

    The University of Waterloo library is joined with Wilfrid Laurier University and the University of Guelph to form the TriUniversity Group as a way to share resources and provide the student body, faculty and community with a wide range of materials. The development of an online application accessible over the Internet made it possible for patrons to explore all three university libraries at once. It became clear that although resource sharing is advantageous, users were having difficulty operating the system and finding the desired information. This, along with the aim of placing additional information on library services at the fingertips of users, led to the development of a gateway page with the goal of a user-friendly system.

    The goal of the Community Needs Assessment Group (CNAG) was to improve the UW library Gateway, making it as user-friendly as possible. The Systems Design 348 class was recruited to provide feedback to CNAG by assessing the usability of the current gateway page and making design recommendations on how it could be improved. At the same time, this design project was an opportunity for the students to apply the user-centred design concepts and evaluation methods learned in the course to a real-world application, in a simulation of a consulting environment.

    The usability analysis was carried out in three phases, or iterations, using specific evaluation methods directed by the Professor as well as additional methods selected by the team. At the end of each iteration, a report and a prototype representing the recommendations were created, so that by the completion of the final report a working high-fidelity prototype had been produced.

  2. Revised Interactive Systems Problem Statement

    The results for the interactive systems problem statement (ISPS) were derived from reviewing the facts and assumptions associated with the human activity required, the users, the level of support needed and the form of solution. These are discussed below and then summarized in Table 1.

    2.1 Human Activity

    The Gateway page was to be specifically designed to aid users in the activity of easily accessing traditionally used library resources such as books, journals and electronic resources. It was assumed that other helpful features of the Gateway would be links that allow users to look up daily news, textbook listings and special functions such as seminars offered by the library. It was required that the format and characteristics of the Gateway ensure that resources are found easily with minimal errors, as this was the main goal of this project as well as central to the concept of user-centred design. As the design process progressed, it also became important to promote services that may be of value to students but of which they may be unaware.

    2.2 Users

    The users of the Gateway page are primarily University of Waterloo students (undergraduate, graduate and distance education), faculty members and library staff. It was assumed that other users of the Gateway would be members of the K-W community who are not university students, alumni, and students from other colleges or universities. The diverse user pool created a large anticipated range of experience levels with computers, available UW library resources and online gateways. Therefore, it became a requirement that the Gateway accommodate users who are novices with computers, library services and the Internet, while also allowing frequent or expert users to skip steps. Since the largest user population of the library is undergraduate students, the Gateway was primarily tailored to undergraduates.

    2.3 Level of Support

    The purpose of the Gateway was to enable easy access to the library resources of all three libraries in the TriUniversity Group (TUG) and to accommodate the varying computer capabilities on campus, as different UW computer labs have different generations of computers (some newer and faster than others). In making it user-friendly and helpful, it was assumed that the Gateway should facilitate user learning so that users can increase their efficiency when returning to the Gateway at a later date. Learning can be increased by the use of intuitive mappings, clear labels and language, and feedback that follows errors. Search index capabilities would allow novice users to locate topics they cannot find easily by browsing the Gateway, or to find them quickly without travelling the full path. Supporting remote access would give users flexibility, since they would not be confined to accessing library resources from campus and could also access them from home. It is anticipated that there will be times when desired resources are not available from TUG; thus, an assumption is that the Gateway should aid users in accessing other libraries for these items.

    2.4 Form of Solution

    The solution to the design problem must be an online Gateway page. According to the facts concerning the activity, the users and the level of support, the categories on the webpage should be indicative of the menu contents, and the mouse-overs should show clear mappings in order to meet a user-friendly level of support. The Gateway should limit graphics to only those necessary in order to sustain reasonable page loading times on older-generation computers.

    It was assumed that in addition to students and UW staff, members of the K-W community, alumni and students from other universities or colleges may use the Gateway page as well. A search index gives these users (who are likely to be novices) and first-time UW students and staff something to fall back on if they encounter trouble. As well, an outside user may simply want to know whether the UW library offers one particular service, and a search function would answer this question quickest. It is also possible that a resource is not available in TUG; providing links to other libraries from the Gateway would be valuable for users in this scenario. It was also assumed that remote access would be desirable. It is essential that users know this feature is available, so it must be made salient.

    The four components of the ISPS are summarized in Table 1 below.

    Table 1: Interactive Systems Problem Statement (ISPS) Component Summary

    Facts
      Human Activity: Find resources: books, journals, electronic resources
      Users: Students (undergraduate, graduate, distance education); professors; library staff
      Level of Support: Enable user-friendly access to resources (intuitive mappings, clear labels and language, feedback during errors); accommodate most computer capabilities (e.g., slow vs. fast)
      Form of Solution: Online Gateway page; categories indicative of menu contents; clear mappings in mouse-overs; limited graphics to minimize page loading time

    Assumptions
      Human Activity: Look up daily news, textbook listings, library seminars
      Users: K-W community members; alumni; students from other universities or colleges
      Level of Support: Remote access; search capabilities; help users access other libraries
      Form of Solution: Make remote access capabilities salient; provide a search index on the Gateway; provide links to other libraries

    Additional Comments/Resources
      Human Activity: Promote resources that students are unaware of
      Users: Design mainly for the undergraduate-level student
      Level of Support: Important that users can access the site with different Internet browsers
      Form of Solution: Important that users can access the site with different Internet browsers

    Requirements
      Human Activity: Primary goal is to improve the ease of finding resources
      Users: Give frequent/expert users the ability to skip steps
      Level of Support: One primary goal is to foster user learning
      Form of Solution: It must be online

    The following Interactive Systems Problem Statement (ISPS) allows for the design problem and conditions of the situation to be summarized in a single statement:

    Design an online Library Gateway page for widespread access and ease of use in finding information about the services and resources that the library offers, to be used by the university’s faculty, students, alumni and librarians as well as the K-W community.

  3. Project Constraints

    There were several constraints in this project that limited the possible solutions and ideas in our design recommendations. First, the questions on which we based our expert testing (phase 1) and user testing (phases 2 and 3) were given to us by CNAG and possibly did not reflect all uses of the webpage. Thus, there may be areas that were unintentionally ignored in our testing that could still be improved in terms of usability. Since these questions were given by the client prior to our design process, we may have been biased toward creating a webpage that placed great emphasis on answering those 20 questions and, as a result, let other "questions" go unattended.

    Our design team consisted of five members, only one of whom had any previous experience designing webpages. The team’s inexperience with webpage design software placed a huge responsibility on one team member to be the sole programmer, and completing the prototypes took much longer than anticipated. This in turn left minimal time to perform user testing in Phase 2 and led to the additional limitation of a small number of participants in our testing. Because of these time constraints, we were not able to test as many users as we would have liked, and the power of our results from phase 2 was hindered.

    We found it difficult to concentrate solely on creating a Gateway page, without considering the sublayers, when deciding on items such as the type of navigation bar (top or side), consistency of colour, format, graphics and language. As a result, we were often distracted from our main goal of focusing only on the Gateway and found we added features relating to the sublayers. For example, in phase 1 it was felt that a side navigation bar would be most salient for users, and this was shown in our prototype. In phase 2, we then considered that a side navigation bar might interfere with sublayer format and text layout, and thus returned to a top navigation bar.

    3.1 Project Requirements

    For us to effectively understand how well a user would interact with our Gateway, it was imperative to define what key required elements would provide a good experience for the user. As described in our ISPS, this design was required to be an online Gateway page that would be user-friendly for the university’s faculty, students, alumni and librarians as well as the K-W community. Because of this requirement, the Gateway must be flexible enough to suit novice users, expert users and those who fall between these two endpoints, while containing all the information these users would desire.

  4. Phase 1 UCD Methods

    4.1 WALKTHROUGHS

    Phase 1 of the design process was focussed on evaluating the current Library Gateway page using the cognitive walkthrough, heuristics, competitive analysis and hierarchical task analysis methods. This evaluation helped provide a starting point to the design of the new Gateway page.

    4.1.1 Intent of Method

    Cognitive walkthrough is an expert inspection method of evaluating a design. It involves thinking through the steps a user may need to perform in order to reach their goal or goals (Sy De 348 Class Notes, 2001). It is useful as an evaluation tool early in the design process because it can identify major usability problems so that changes can be made before presenting the product to a "real" user. The method has limitations because it employs experts as evaluators. Experts, especially members of the design team, have more advanced knowledge of the current design than the average user does, so the results might show that the design is capable while failing to reveal that its functions are difficult for typical users to access.

    The cognitive walkthrough method was used to address the "Find It" related "Web Usability Questions" provided by the client. This method was selected so that the design team would have an opportunity to experience the mental demands and actions that the user experiences when searching for resources from the library gateway page. Each team member performed the cognitive walkthrough and the results were discussed.

    Each individual web page was evaluated based on the eight different categories deemed most important by the client: the overall site layout of the Library Gateway; the graphics presented on the Gateway; mouse-over versus text layout usage; dead spaces on the various pages; the language and jargon used in the labels found in the interface; layout consistency; and clarity of the top navigation bar. The strengths and weaknesses of each gateway were identified and recorded.

    4.1.2 Impact of Method on Prototype 1

    The major finding from the cognitive walkthrough concerned navigational issues. It showed that the current Gateway has links that are misclassified, that is, placed under categories that do not allow for intuitive mappings. For example, during the walkthrough, exam timetables were looked for under ‘Find It’ but were located under ‘Local Sites’. The walkthroughs showed that inappropriate mappings can cause the user to get ‘lost’ or confused while navigating the Gateway. This influenced our prototype design: we felt it was beneficial to place the Search Index on the Gateway itself, rather than only on the sublayers, to provide a ‘last resort’ or ‘fall-back’ option when users are unable to locate what they are looking for. It also stressed the need to use card sorting in phase 2 as a way to arrange menu items under intuitive categories to improve the navigation of our design.

    4.2 HEURISTICS

    4.2.1 Intent of Method

    A heuristic evaluation is an informal method of evaluating the overall usability of an interface, in which usability specialists assess whether each of the dialogue elements in the interface conforms to established principles, known as heuristics, that guide the design of that general type of interface (Nielsen and Mack, 1994). Heuristic evaluations, when performed properly, are a good method for finding both major and minor problems in the interface. The evaluation was carried out using Jakob Nielsen’s Top Ten Mistakes in Web Design, which can be found at his website, www.useit.com. Each member of the group went through the current page and compared it with the guidelines set by Nielsen. In the assessment of the University of Waterloo Library Gateway page, the heuristic evaluation was one of the methods used to find minor and major problems in the interface.

    Each of the usability testing methods, including the heuristic evaluation, assessed the usability of the Gateway page in eight different categories deemed most important by the client (overall site-layout; graphics; mouse-over usage; dead spaces; language; layout consistency and clarity of top navigation bar). The options assessed by this technique included: "About the Library", "Services for", "Help" and "Local Sites". Furthermore, each of the eight categories was evaluated using related heuristics from Nielsen’s "Revised set of usability heuristics derived from factor analysis of 249 usability problems" (Nielsen and Mack, 1994). The heuristics used for the evaluation include: visibility of system status, match between system and real world, consistency and standards, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design.

    Usability problems were identified and rated for severity as either: minimal (1), moderate (2) or severe (3). The interface was evaluated twice to determine if any of the usability problems were missed in the first attempt.
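    The two-pass rating procedure can be sketched as a merge of two severity maps, keeping the higher rating when a problem appears in both passes. This is a hypothetical illustration; the finding labels below are invented, not our actual evaluation data.

```typescript
// Merge two heuristic-evaluation passes. Each pass maps a usability
// finding to a severity: 1 = minimal, 2 = moderate, 3 = severe.
// Findings seen in both passes keep the higher severity; findings
// missed in pass one but caught in pass two are added.

type Severity = 1 | 2 | 3;
type Findings = Map<string, Severity>;

function mergePasses(pass1: Findings, pass2: Findings): Findings {
  const merged: Findings = new Map(pass1);
  for (const [finding, sev] of pass2) {
    const prev = merged.get(finding) ?? 0;
    if (sev > prev) merged.set(finding, sev);
  }
  return merged;
}

// Hypothetical findings for illustration only.
const pass1 = new Map<string, Severity>([
  ["unclear 'Find It' label", 3],
  ["no mouse-over feedback", 2],
]);
const pass2 = new Map<string, Severity>([
  ["no mouse-over feedback", 3], // rated worse on the second look
  ["page requires scrolling", 1], // missed in the first pass
]);

const merged = mergePasses(pass1, pass2);
// merged contains 3 findings; "no mouse-over feedback" keeps severity 3.
```

    Merging this way explains why a second pass is worthwhile: it can both surface missed problems and correct severity ratings from the first attempt.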

    4.2.2 Impact of Method on Prototype 1

    The use of heuristics as an evaluation method again identified navigational issues as the main finding. This was shown by the design team members’ difficulty in selecting the correct option, in finding relevant information in non-structured data, and in predicting where a link would lead before trying it. This suggested that our design incorporate more descriptive titles and labels, provide a means of searching when lost, and provide better feedback after actions. This was reflected in prototype 1 by the addition of the search index on the gateway page, and in prototype 2 by the addition of a grouping line used to indicate that a selection was required from the menu arising from the mouse-over.

    4.3 COMPETITIVE ANALYSIS

    4.3.1 Intent of Method

    The competitive analysis was used to determine the strengths of similar web pages. These strengths serve a number of purposes. First, the strengths of the other sites can be used to help pinpoint further weaknesses in the current design of the Waterloo gateway. As well, the strengths of the other web pages can be drawn on to correct the usability errors associated with the current design. Lastly, weaknesses of the other websites relative to the current Waterloo gateway may help indicate which features of the current design should be carried into the future design.

    Each individual web page was evaluated based on the eight different categories deemed most important by the client (overall site layout; graphics; mouse-over usage; dead spaces; language; layout consistency; and clarity of the top navigation bar). The strengths and weaknesses of each gateway were observed and recorded.

    4.3.2 Impact of Method on Prototype 1

    The competitive analysis findings echoed those from the walkthroughs and heuristics: the need to improve ease of navigation. This was shown by the fact that the most user-friendly library sites provided a search function on the Gateway. It was also noted that these sites presented their information in a clear, logical manner, allowing users to navigate with ease based on their mental models. From the competitive analysis, two main ideas were gathered for prototype 1. First, it provided further justification for the addition of a search function on the Gateway; second, it showed that a permanent side navigation bar is beneficial for travelling quickly to the most commonly used links without taking the conventional multiple-step path. This facilitates frequent or expert users who know where they need to go to access what they desire, and also aids novice users who may get lost in the sublayers but can always travel back to a familiar page in a single click using the permanent side navigation bar.

    4.4 HIERARCHICAL TASK ANALYSIS

    4.4.1 Intent of Method

    The intent to use hierarchical task analysis was based on the method’s ability to break down a task into goals, plans and operations from the start of the process to the end. It is important to break a task down into its components to determine whether all the steps involved in the process are necessary, and it helps uncover problematic usability issues in terms of the efficiency of the process. The method also serves to describe the task as if a user were to go through each of the operations.

    A hierarchical task analysis (HTA) was used to evaluate the "Get It" function on the University of Waterloo library gateway page. The HTA was carried out by analysing the "Web Usability Questions" related to the "Get It" function provided in the design project handout. By breaking down each task into its essential components, problematic areas were identified for changes, and the successful actions were utilized in making these modifications. HTA was chosen for this reason, and the results assisted in targeting the web page features that needed revision and in prioritizing the most severe issues for immediate attention.

    4.4.2 Impact of Method on Prototype 1

    The main finding from the HTA was the need to return to upper layers to execute tasks after travelling to sub-layers. This was most evident in situations such as finding out how to reserve a book: travelling to the lower layers was required to find the instructions on how to perform the task, and then, to execute it, it was necessary to return to one of the top layers. This reinforced our design idea of creating a permanent side bar on the Gateway and each sublayer, providing links to the most frequently used pages. A secondary benefit of this feature, as previously discussed, is that it allows expert users to skip steps.

  5. Phase 2 UCD Methods

    5.1 COMPETITIVE ANALYSIS USING HEURISTICS

    5.1.1 Intent of Method

    This method, which was originally misinterpreted as an appropriate method of discount usability testing, was later revised to include user walkthroughs (i.e., user testing; see section 5.2). Although it was not required, the competitive analysis using heuristics for our prototype served to refine the six main design features proposed in Interim Report #1:

    1. Proper user feedback for mouse-over menus

    2. A consistent navigation bar throughout the sub-layers

    3. Creating help and search options

    4. Use of appropriate graphics

    5. Appropriate wording and defining unknown acronyms/labels

    6. Eliminate the need to scroll down

    Since this was an inappropriate method of ‘discount’ user testing, we will not dwell on the procedure but will focus on the constructive findings that led to further revisions (see Impact of Phase 2 Methods).

    5.1.2 Impact of Method

    This method was vital in examining our second prototype after changes had been made stemming from the findings and recommendations of the first iteration. Moreover, the assessment was useful in justifying our design features, which addressed the heuristic issues of visibility of system status, match between the system and the real world, consistency and standards, recognition rather than recall, and flexibility and efficiency of use (Nielsen and Mack, 1994).

    Our first evaluation of the UW Gateway page found that although the mouse-over features were novel and effective, several alterations could be made to greatly increase their functionality and friendliness to users. Mouse-over elements can be useful particularly because they provide action-response feedback to users, which is stimulating and engaging. Mouse-overs reveal to the user that they can, and are expected to, select an option. Thus, clear feedback is very important for mouse-over techniques to work effectively; otherwise they can be confusing and frustrating.
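    In implementation terms, this feedback amounts to highlighting a heading and revealing its sub-items when the pointer enters it, and reverting when it leaves. Below is a minimal sketch of that state logic, kept separate from any DOM code so it can be checked directly; the menu names and sub-items are illustrative, not the prototype’s actual menus.

```typescript
// Minimal model of mouse-over menu feedback: hovering a heading
// highlights it and reveals its sub-items; leaving reverts to the idle
// state. In a real page this state would drive CSS classes via
// mouseenter/mouseleave handlers. Menu contents are illustrative.

interface MenuState {
  highlighted: string | null; // heading currently lit up
  visibleSubItems: string[];  // sub-items shown for that heading
}

const MENUS: Record<string, string[]> = {
  "Research Help": ["Citing Web Sites", "Library Seminars"],
  "Services": ["Connect from Home", "Interlibrary Loan"],
};

function onMouseEnter(heading: string): MenuState {
  // Highlight the heading and show exactly its sub-items, so the user
  // can see which items belong to which menu (visibility of system status).
  return { highlighted: heading, visibleSubItems: MENUS[heading] ?? [] };
}

function onMouseLeave(): MenuState {
  // Revert to the idle state: nothing highlighted, no sub-items shown.
  return { highlighted: null, visibleSubItems: [] };
}

const hovered = onMouseEnter("Services");
const idle = onMouseLeave();
```

    Keeping the hover state explicit like this is what makes the feedback unambiguous: at any moment exactly one heading is lit, and only its sub-items are visible.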

    The current navigation bar is not consistent across pages, and its function was determined in the first iteration to be inconspicuous to users, causing it to be frequently ignored. Thus, the bar was modified so that the links on it were more evident and more characteristic of buttons, to encourage use. This increased the visibility of the available links to the user.

    One of the links added to the navigation bar on the Gateway is a search option, whose importance was stressed in the first iteration. We felt it was vital to provide an immediate option for users to fall back on if they are unable to find what they are looking for simply by viewing the Gateway. Placing this link on the navigation bar also ensures that it is easy to access from any sub-layer.

    The importance of using appropriate language and wording on websites is self-evident. We felt there were many areas of the UW Gateway page where labelling could become problematic for a novice user and thus hamper its efficiency of use. For example, what do "Find It" and "Get It" mean? What are the differences between the two categories? Card sorting was used as a technique to address this issue (see Phase 2 Methods). As an additional aid, mouse-over pop-up text boxes were placed into our prototype to explain and describe the function and/or contents of each menu category in case the labelling is not clear to the user.
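    The pop-up text boxes reduce to a glossary lookup keyed on the menu label. A minimal sketch under that assumption follows; the descriptions are paraphrases for illustration, not the prototype’s final wording.

```typescript
// Pop-up descriptor boxes: map an ambiguous menu label to a short
// plain-language description shown on mouse-over. The descriptions
// below are illustrative paraphrases, not the prototype's wording.

const DESCRIPTORS: Record<string, string> = {
  "Find It": "Search the catalogue for books, journals, and other resources.",
  "Get It": "Request, renew, or pick up items the library holds.",
};

function descriptorFor(label: string): string {
  // Fall back to the label itself when no descriptor is defined,
  // so the pop-up never shows an empty box.
  return DESCRIPTORS[label] ?? label;
}

const tip = descriptorFor("Find It");
// tip holds the plain-language description for the "Find It" menu.
```

    A table like this also doubles as a checklist: any label without an entry is a label whose meaning the design assumes the user already knows.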

    5.2 COMPETITIVE ANALYSIS USING DESIGN WALKTHROUGHS

    5.2.1 Intent of Method

    The design walkthrough method is an evaluation tool in which the evaluator develops the steps by which a user could achieve common or frequent goals using the product, then "walks" through the actions as the user would, looking for problems that may arise (SY DE 348 Class Notes, 2001). This differs from a cognitive walkthrough: the cognitive walkthrough uses experts, while the design walkthrough involves modeling the average user. This late-stage design walkthrough took a comparative approach in which novice users were asked to perform identical tasks on different webpage designs (current vs. prototype). Two male and two female participants were asked to perform a set of search tasks on each web design. All participants were undergraduate students at the University of Waterloo and were considered novice users of the University of Waterloo library gateway page. The order in which the webpages were presented to the participants was randomized. Note that the experimental tasks were identical to those used for our cognitive walkthrough assessment; this way, it could be determined whether users encountered similar problems and whether the new prototype improved upon these issues. The following questions were presented to the participants – the numbers in parentheses correspond to the original set of Web Usability Questions given by the client:

    • How would you find if the library has Margaret Atwood’s Alias Grace? (1)

    • Does the Library have any electronic dictionaries? (4)

    • Where can I find electronic maps? (5)

    • Where do I find the URL for Yahoo? (6)

    • Where can I find a database in which to locate articles on Anthropology subjects? (7)

    • Where can I find information on how to cite web sites? (8)

    • Can I read an article of a journal without coming to the library? (9)

    • Where can I find the exam timetables? (14)

    Each task was broken down into the four main components of the model for exploratory learning (i.e., goal, system, action, interpretation) (SY DE 348 Class Notes, 2001). The stopping point for the walkthrough was defined (to the participant) as the point at which it was no longer clear which action to take (i.e., which buttons to click) to correctly perform a given task. The participant was encouraged to verbally confirm the stopping point and to add any comments. Discussion beyond the stopping point was kept brief, because the earliest sign of error effectively represents the origin of user error.

    5.2.2 Impact of Method

    In relation to the expert walkthrough reported in the first interim report, it was confirmed (as expected) that novice users experienced identical problems (see Tables 2 and 3 for the results). The comparison of the current gateway to the prototype indicated minor improvements in user performance in terms of ease of use and efficiency in finding library information; the majority of questions were answered successfully in both designs. The most distinct difference between the two designs was observed in question 9, where the prototype provided information about connecting from home through a more direct and intuitive link (i.e., Services → Connect From Home). There were, however, two other questions involving user errors that were not addressed by the prototype (5 and 14). We argue that providing links for these tasks was beyond our control: while navigating the layers below the Gateway page, we as designers were also unable to locate the correct links to find electronic maps or exam timetables. This assessment suggests that user performance on any given webpage will vary according to which questions are designated as the experimental task. There is therefore a need to standardize the selection of test questions in order to draw valid conclusions from the results obtained. As such, a standardized procedure was used to conduct formal user testing in phase three of the project (see Phase 3 – Lab-based Usability Testing).

    Table 2: Summary of user walkthrough of web usability questions using current gateway design

    Goal of user (Web Usability Question) | System explored | User actions selected | User interpretation and comments
    1 – Find Atwood’s Alias Grace | Waterloo Library Gateway | Find It → Trellis → Title Search | Correct action, but Trellis was not immediately apparent as the library catalogue
    4 – Find e-dictionaries | Waterloo Library Gateway | Find It → Reference Tools → Dictionaries | Correct actions selected; successful completion of task
    5 – Find electronic maps | Waterloo Library Gateway | Find It → Reference Tools → Travel & Recreation | Could not find maps; lost
    6 – Find URL for Yahoo | Waterloo Library Gateway | Find It → Internet Search Tools → Comprehensive Subject Dictionaries | Correct actions selected; however, several navigation trials were required
    7 – Find database to locate articles on Anthropology subjects | Waterloo Library Gateway | Find It → By Subject → Anthropology | Correct actions selected; successful completion of task
    8 – Find info on how to cite web sites | Waterloo Library Gateway | Find It → Reference Tools → Style Manual | Correct actions selected; successful completion of task
    9 – Find out how to read journal article without coming to library | Waterloo Library Gateway | Get It | Wrong action; not apparent to user that proxy settings must be set up
    14 – Find exam timetables | Waterloo Library Gateway | Find It | Info not under menu selected

    Table 3: Summary of user walkthrough of web usability questions using prototype design

    Goal of user (Web Usability Question) | System explored | User actions selected | User interpretation and comments | Observed differences relative to current gateway page
    1 – Find Atwood’s Alias Grace | Prototype Gateway Page | Search UW Resources → Trellis → Title Search | Correct action; ‘pop-up’ boxes helped confirm that Trellis was the library catalogue | None
    4 – Find e-dictionaries | Prototype Gateway Page | Search UW Resources → Reference Tools → Dictionaries | Correct actions selected; successful completion of task | None
    5 – Find electronic maps | Prototype Gateway Page | Search UW Resources → Reference Tools → Travel & Recreation | Could not find maps; lost | None; same result, getting lost
    6 – Find URL for Yahoo | Prototype Gateway Page | Outside Links → Selected Search Engines → Yahoo | Correct actions selected; successful completion of task | None
    7 – Find database to locate articles on Anthropology subjects | Prototype Gateway Page | Search UW Resources → Search by Program → Anthropology | Correct actions selected; successful completion of task | None
    8 – Find info on how to cite web sites | Prototype Gateway Page | Search UW Resources → Reference Tools → Style Manual | Correct actions selected; successful completion of task | None
    9 – Find out how to read journal article without coming to library | Prototype Gateway Page | Services → Connect From Home | Correct actions selected; successful completion of task | Difference found: information was more intuitive for the user and required fewer steps to find
    14 – Find exam timetables | Prototype Gateway Page | Search UW Resources | Info not under menu selected | None; same result, could not find timetables

    5.3 CARD SORTING

    5.3.1 Intent of Method

    The benefit of using card sorting was its ability to give insight into how people organize the information presented to them in a website. This information could then be used to structure a webpage design. The advantage of card sorting was that it could be performed on our (then) low-fidelity prototype, allowing the user to develop a mental model without a complete functioning system. A mental model in this context can be defined as the user’s dynamic model of the website components: how the website works, how the components are related, what the internal processes of the website are, and how they affect the components (Wickens et al., 1998). In our search for the optimal mental model to match user expectations, we hoped that by carrying out the card sorting process we would be able to define an ideal web hierarchy that could reduce user errors and promote ease of navigation.

    Four participants (2 male and 2 female) between the ages of nineteen and twenty-three participated in the card sorting experiment. All participants were undergraduate students at the University of Waterloo and were considered novice users of the University of Waterloo library gateway page. Participants were presented with index cards that displayed all of the subheadings from the current University of Waterloo library gateway page.

    The participants were provided with the index cards containing the subheadings from the current library gateway page and asked to sort the index cards into four meaningful categories. The participants were told that each of the subheadings related to a sub-layer of the current library website. Upon finishing these categories, the participants were asked the significance of each category, and asked to explain why they grouped cards the way they did. They were then asked to give a name for the category based on its contents, and to rank the categories from most important to least important. After this first task the participants were given the opportunity to ask questions about the meaning of cards they did not understand, and this data was recorded with the intent of making the descriptions of these cards more representative.

    The participants were then asked to separate the cards a second time; this time, however, they were instructed to create eight representative categories. When they were done, they were asked to explain their reasons for separating the cards into those categories and to give a name for each category. They were again asked to rank the categories from most to least important.

    The third task required the participants to separate the cards into however many categories they felt were required. They were then asked why they chose those categories and asked to name each one. The participants were then asked to rank their categories from most to least important.

    The data from all three tasks were then compared to find similarities in groupings and category names. Category titles were chosen based on the design team’s interpretation of the participants’ choices for headings. The number of categories to be used was based on the final task (subjects chose the number of categories that best represented the data). Related categories were grouped together and the frequency of appearance of each card label was determined. Each subheading was placed under the category in which it appeared with the greatest frequency. In the event that a subheading appeared an equal number of times in two or more categories, the experimenter chose the category based on the subheading's definition. Some of the subheadings were renamed or eliminated based on subjects' difficulty in interpreting or differentiating them. These data were used to determine more accurate subheadings and create more intuitive categories.
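    The aggregation step above can be sketched in code. This is a minimal illustration, not the actual analysis: the `placements` data is hypothetical, listing the category label each of the four participants assigned to a subheading.

```python
from collections import Counter

# Hypothetical card-sort placements: subheading -> the category each of the
# four participants put it in (labels are illustrative, not actual data).
placements = {
    "TRELLIS": ["Search", "Search", "Search", "Catalogue"],
    "Renewals": ["Services", "Services", "Search", "Services"],
    "Hours/Locations": ["Info", "Services", "Info", "Services"],  # a tie
}

def dominant_category(labels):
    """Return the category a subheading appeared in most often.

    On a tie, return None: ties were resolved by the experimenter
    from the subheading's definition, as described above.
    """
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie -> experimenter judgement
    return counts[0][0]

for subheading, labels in placements.items():
    print(subheading, "->", dominant_category(labels))
```

    Running this assigns each subheading to its most frequent category and flags ties for manual resolution.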

    5.3.2 Impact of Method

    Following the analysis of the data, it was found that there was a great deal of similarity between subjects with respect to category names (labels), arrangement of subheadings, and ratings of importance. Subheadings that involved library searches for reference materials were frequently placed in the same or similar categories; furthermore, this category always ranked first in importance. Another similarity between subjects was the order of ranking for most of the categories. There was some inconsistency in the assignment of subheadings across subjects, which was attributed to variations in user mental models. The task was performed outside the library setting using only index cards, which does not facilitate an accurate mental model of the organization of library information (the lack of context would contribute to subjects' confusion). Overall, the findings demonstrated a need for more descriptive/intuitive categories, subheadings and category layout.

    As an overview, by accounting for the rank order of subjects' perceived importance of menu items as well as the appropriateness of labelling, the following menu items were established for the prototype revision:

    1. Search UW Resources

    2. Other Libraries

    3. Library Information

    4. Services

    5. What’s New

    6. Outside Links

    7. Help

    With this basic structure established through card sorting, we proceeded to test its representativeness through formal user testing in phase 3.

    6. Phase 3 – Lab-based Usability Testing

    6.1 Intent of Lab-Based Usability Testing

    Lab-based usability testing was performed on a number of participants who had varying amounts of experience with the current Library Gateway page. The results from the testing were compared to a Keystroke Level Analysis (KLA), which was used to create a benchmark for the prototype. This benchmark was then used to determine whether the prototype was intuitive enough for the user to achieve a pre-set goal in the same time as, or faster than, the KLA time. Experimenters have used KLA to determine expert user response times to ensure a system can support the abilities of its users (Thompson, 1998).

    6.2 Methodology

    6.2.1 Participants

    A total of eleven participants took part in the lab-based usability testing: five female and six male, with a mean age of 22.7 years. Ten of the eleven participants were undergraduate students at the University of Waterloo with varying amounts of experience using the UW Library Gateway page; the eleventh was a member of the library staff with a great deal of experience using the current gateway page. Across the group, the average self-rated familiarity with computer use was 4.27 out of a possible 5 (where a score of 1 indicates the individual is not familiar with computer use and a score of 5 indicates the individual is very familiar). Participants also rated their familiarity with on-line use, web page construction, UW's Library web page, and UW's library services at 3.91, 2.55, 2.55, and 2.36 respectively.

    6.2.2 Procedure

    The procedure involved determining the time a user took to select the appropriate web page subheading in order to react to a predetermined goal. A Keystroke Level Analysis (KLA) was first completed to determine the expected time that an experienced user would take to complete each of the desired tasks. Prior to completing the lab-based usability testing each participant was asked to complete a background questionnaire. The purpose of the questionnaire was to determine:

    1. The degree of familiarity with everyday computing activities.

    2. The degree of familiarity with on-line computing.

    3. The degree of familiarity with design and creation of web pages.

    4. The degree of familiarity with the library gateway page.

    5. The degree of familiarity with the UW Library services.

    For each of the questions in the questionnaire, the participant was asked to rate their degree of familiarity on a scale from 1 to 5 (1 meaning not at all familiar and 5 meaning very familiar). The tasks selected for the lab-based usability testing were chosen because they were considered to be goals most commonly required by users of the library gateway page and reflected the initial concerns of the client. The methods used in the lab-based testing are explained later. Following completion of the lab-based testing, the participant was given another questionnaire to determine how usable the prototype web page was, on the basis of:

    1. Key words

    2. Major links

    3. Graphics

    4. Navigation bar

    5. Overall layout.

    Each of these questions was also rated on a scale from 1 to 5, with 1 being not at all user friendly and 5 being very user friendly. Additionally, the participants were asked to state whether the prototype was much better than the current UW Library Gateway page, marginally better, about the same, marginally worse, or much worse (or not applicable if the participant had no prior knowledge of the UW Library Gateway page). Lastly, the participants were asked to comment on what major problems made the new design worse than the current page (if they stated that the new design was marginally or much worse) and to provide additional comments and feedback on the new design.

    In performing the KLA, a modified version was used rather than the traditional version, which is intended for tasks performed by experienced users. The modified version involves summing the times required to perform each portion of the task. Because the new design is a web page, the information is scattered throughout the page and participants must scan and read it to find the appropriate link; using the modified version of the KLA, it is therefore important to add the time it takes to read the on-screen information. It was assumed that all participants would read from top to bottom and left to right to find the optimal link, and that an individual would read the on-screen information at a rate of 2 words per second (0.5 seconds per word). The KLA values were calculated by summing the assumed times to: mentally prepare (1.35 seconds); scan the left-side links (the number of words, and hence scan duration, varies); point to the link (1.10 seconds); mentally prepare (scanning information upon mouse-over, 1.35 seconds); scan for the appropriate subheading (the number of words and scan duration vary); point to the desired subheading (1.10 seconds); and click the mouse button (0.20 seconds). A sample KLA for one of the lab-based questions is provided below.

    Question 1:

    1. Where would you go to find general help with a research topic in your department?

    2. Mentally prepare (following presentation of question) → 1.35 s

    3. Scan left-side links to "Search UW Resources" (3 words) → 1.5 s

    4. Point to link (Search UW Resources) → 1.10 s

    5. Mentally prepare → 1.35 s

    6. Scan menu to Search by Program (8 words) → 4.0 s

    7. Point to Search by Program → 1.10 s

    8. Click on Search by Program → 0.20 s

    Total time = 10.60 seconds
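    The worked example above amounts to summing the operator times. A minimal sketch follows, using the operator times stated in the report (mental preparation 1.35 s, pointing 1.10 s, clicking 0.20 s, reading at 0.5 s per word); the function name and structure are our own illustration.

```python
# Operator times taken from the report; the function itself is only an
# illustrative sketch of the modified KLA for a two-level menu selection.
MENTAL = 1.35        # mentally prepare (s)
POINT = 1.10         # move pointer to a target (s)
CLICK = 0.20         # press mouse button (s)
READ_PER_WORD = 0.5  # assumed reading rate of 2 words per second

def kla_time(side_words, submenu_words):
    """Predicted time to scan the side links, open a heading's
    mouse-over menu, and click the desired subheading."""
    return (MENTAL + side_words * READ_PER_WORD + POINT        # reach the heading
            + MENTAL + submenu_words * READ_PER_WORD + POINT   # reach the subheading
            + CLICK)                                           # click it

# Question 1: 3 words scanned on the side links, 8 words in the sub-menu.
print(f"{kla_time(3, 8):.2f} s")  # 10.60 s
```

    The same function reproduces the other KLA benchmarks in Table 4 once the word counts for each question's scan steps are supplied.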

    The lab-based usability testing was performed with each participant using one of two versions of the questions. Version A and version B contained the same questions, but presented in opposite order. The participants were informed that the experimenters were testing the usability of the web page design and not the participant's ability to use the system. For each task, the participant was instructed to place their hand on the mouse and move the cursor to the bottom centre of the screen. The experimenter then read the question to the participant, who was instructed to find the appropriate link once the question was finished. If the participant needed the question repeated, they were asked to shut their eyes until the question was completely re-read by the experimenter, to prevent them from processing web page information while the question was being read again. When the participant clicked on the appropriate link, the timer was stopped and the task duration was recorded. When the participant clicked on a wrong link, the timer was stopped and the participant was instructed to return to the gateway page and begin again with the pointer at the bottom of the screen; the timer was restarted once the participant was instructed to start again. In performing the lab-based usability testing, the duration to complete each task and the number of errors were recorded, along with any general comments.

    6.3 Results:

    6.3.1 Keystroke Level Analysis:

    The keystroke level analysis (KLA) was used to determine the expected time to answer each question. These values were subsequently used to draw comparisons with the observed times recorded in the lab. Refer to Table 4 for these results.

    6.3.2 Lab Based Results:

    The participants' observed times ranged from an average low of 5.88 seconds for question 2 to an average high of 32.72 seconds for question 9. A statistical analysis using a t-test showed that question 1 took somewhat longer (t = 1.63), but not significantly longer than the KLA. The observed time for question 2 was significantly faster than the KLA (t = –5.16). Question 3 showed an average observed time of 12.67 seconds, slightly faster than the KLA time of 15.60 seconds, but not significantly so. The average time for question 4 was 14.50 seconds; this was slower than the KLA, but not significantly slower. Question 5 produced an observed time of 15.55 seconds, 3.15 seconds faster than the KLA but not significantly faster. The participants' average observed time for question 6 was 3.23 seconds longer than the KLA; this too was not significant. Question 7 was a non-significant 1.47 seconds faster than the expected KLA time. The participants' observed times for question 8 averaged 15.90 seconds, giving a t score of only 0.06, well below the two-tailed critical value of 2.23 and above the one-tailed critical value of –1.81. Question 9 resulted in a non-significant score of 1.10. The resulting score for question 10 was –0.77, indicating that, although not significantly, the participants completed the question faster than the KLA predicted.

    The lab-based usability testing was performed on each participant and the mean task completion time varied from person to person and question to question. Furthermore, some tasks were more prone to errors than other tasks. The observations found in performing the lab-based testing can be seen in Table 4.

    Table 4: Results from Keystroke Level Analysis and Lab-based Usability Testing

    Question | KLA (sec) | Lab Mean (sec) | Lab Max (sec) | Lab Min (sec) | Lab SD (sec) | Errors (mean #)
    1 | 10.60 | 21.7 | 56.1 | 1.5 | 22.5 | 0.45
    2 | 10.10 | 5.88 | 11.31 | 2.95 | 2.71 | 0.00
    3 | 15.60 | 12.67 | 44.06 | 1.68 | 12.02 | 0.27
    4 | 9.60 | 14.50 | 70.09 | 3.12 | 19.26 | 0.18
    5 | 18.60 | 15.55 | 71.24 | 2.31 | 20.76 | 0.45
    6 | 13.10 | 16.33 | 85.85 | 2.70 | 25.07 | 0.18
    7 | 8.10 | 6.63 | 25.50 | 1.65 | 6.70 | 0.36
    8 | 15.60 | 15.90 | 57.00 | 1.53 | 16.15 | 0.18
    9 | 9.60 | 32.72 | 240.00 | 2.56 | 69.90 | 0.55
    10 | 12.10 | 10.45 | 20.47 | 2.01 | 7.10 | 0.45

    A two-tailed t-test was performed for each question, comparing the participants' durations to complete each task with the KLA benchmark. For question 1, there was no significant difference between the observed scores and the KLA score (t(10) = 1.63, p > .05), and there were no comments from the experimenters to indicate the types of mistakes the participants made. There was a statistical difference between the KLA and the lab-based scores for question 2 (t(10) = –5.16, p < .05). In evaluating the results from questions 3 and 4, no statistical difference was found between the KLA scores and the task completion scores (t(10) = –0.81 and t(10) = 0.84 respectively, both p > .05). Furthermore, for these two questions, experimenters found that some participants passed over the appropriate heading when attempting to complete the task, indicating that the heading title and/or content are not easily interpreted by all users. There was no statistically significant difference between the benchmark testing and the user testing for question 5 (t(10) = –0.49), question 6 (t(10) = 0.43), question 7 (t(10) = –0.73), question 8 (t(10) = 0.06), question 9 (t(10) = 1.10), or question 10 (t(10) = –0.77), all p > .05. Questions 5, 7, 9 and 10 produced a higher mean number of errors, attributable to users who were unfamiliar with the UW Library web page and the library services.
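    The comparison above is a one-sample, two-tailed t-test of the eleven observed completion times against the fixed KLA benchmark. A minimal sketch follows; the observed times shown are hypothetical, not the actual lab data.

```python
import math

def one_sample_t(times, benchmark):
    """t statistic for testing whether the mean of `times`
    differs from a fixed benchmark value."""
    n = len(times)
    mean = sum(times) / n
    var = sum((x - mean) ** 2 for x in times) / (n - 1)  # sample variance
    return (mean - benchmark) / math.sqrt(var / n)

# Hypothetical observed times (s) for one question, eleven participants.
observed = [5.1, 6.2, 4.8, 7.0, 5.5, 6.1, 5.9, 6.4, 5.2, 6.8, 5.7]
t = one_sample_t(observed, benchmark=10.10)

T_CRIT = 2.23  # two-tailed critical value for df = 10, alpha = 0.05
print("significant" if abs(t) > T_CRIT else "not significant")
```

    With 11 participants there are 10 degrees of freedom, which is where the ±2.23 critical value used throughout this section comes from.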

    6.3.3 Questionnaire Results:

    Most of the participants rated the new web page high for all aspects (key words, major links, graphics, navigation bars and overall layout). The mean score out of a possible five was calculated for each of the categories and is reported in Table 5. As well, of the participants that were familiar with the current UW Library Gateway page, three indicated that the new page was much better, one indicated that the new page was marginally better, one said that the new page was about the same as the current page and six were not familiar enough with the current page to answer the question.

    Table 5: User Rated Usability

    Category Mean Score
    Key Words 4.00
    Major Links 4.18
    Graphics 4.20
    Navigation Bars 3.60
    Overall Layout 4.27

    The user responses for the web page usability questions were grouped into three categories: not user friendly (scores of 1 to 2), neutral (score of 3) and user friendly (scores of 4 to 5), and the frequencies were counted for each grouping. A chi-squared analysis performed on each grouping found that the frequency of observed responses for the key words, major links, graphics and overall layout was significantly different than the expected responses (χ² observed > χ² critical).
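    The grouping and chi-squared computation can be sketched as follows. The ratings shown are hypothetical, and the uniform expected counts and the df = 2 critical value are our assumptions about how the analysis was set up, not details stated in the report.

```python
def chi_squared(observed, expected):
    """Pearson chi-squared statistic over matched observed/expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical 1-5 usability ratings for one category, eleven participants.
ratings = [4, 5, 4, 3, 5, 4, 4, 2, 5, 4, 3]
bins = [
    sum(1 for r in ratings if r <= 2),  # not user friendly (1-2)
    sum(1 for r in ratings if r == 3),  # neutral (3)
    sum(1 for r in ratings if r >= 4),  # user friendly (4-5)
]
expected = [len(ratings) / 3] * 3  # uniform null: responses spread evenly

CHI2_CRIT = 5.99  # critical value for df = 2, alpha = 0.05
print(chi_squared(bins, expected) > CHI2_CRIT)  # True
```

    A significant result here means the responses cluster in one grouping (for these hypothetical ratings, "user friendly") rather than spreading evenly across the three.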

    6.4 Discussion

    6.4.1 Implications of lab observations

    Although only one question was answered significantly faster than the KLA, there were differences in the times. It is important to note that although most of the scores were not significant, they were all within an acceptable range of the benchmark set by the KLA: the two-tailed t-test showed that all of the non-significant t values fell within plus or minus 2.23, the critical value. This is relevant because it means the menu and sub-menu names were short and descriptive enough that participants could complete the tasks within a reasonable amount of time. Observed times that were faster than expected may reflect menu names that were short and descriptive enough that participants found what they were looking for easily. Conversely, the questions that took longer to perform may have involved names that were too long or not descriptive enough. Where participants found it difficult to perform a task, some change may be required to make the solution more salient.

    One way to make a menu option more intuitive would be to provide a longer description of it through pop-up boxes. These pop-up boxes could inform the user of specifically what the menu or sub-menu option does. This would be extremely helpful to first-time users who are unfamiliar with libraries, giving them more information and thus helping them make more accurate decisions.

    6.4.2 Implications of Participant feedback:

    The participant feedback was analyzed using chi-squared tests, which revealed that all but two of the results were statistically significant. The feedback indicated that the prototype used user-friendly keywords and that its major links were placed in easily accessible locations. Thus, users found that the keywords portrayed their function well, and they made very few mistakes when trying to achieve a goal.

    The participants also liked the simple graphic design of the web page, indicating that the "uncluttered" appearance was particularly appealing. Some of the feedback stipulated that the simple layout allowed for easy navigation and an attractive overall design. The participants had mixed feelings about the navigation bar: sixty percent rated it as user friendly, but the other forty percent did not find it particularly user-friendly. A common remark was that something to make the site more representative of UW would be a welcome addition. This led to the implementation of a new colour scheme that included the black and gold of UW and the UW crest, along with a bolder title for the UW library web page.

    The navigation bar did not have a search function, nor did it provide very good shortcuts for users; because of this, some users did not find much attractive about it. Because only a few people had used the current web page before, the sample was not large enough to show that the prototype page was significantly different from what was expected. However, four out of the five people who were familiar with the old web page indicated that the new prototype was better than the current page.

    Adding a search function would improve the usability of the navigation bar, allowing the user to perform keyword searches for information that they might not otherwise know how to find. Adding frequently used links to the navigation bar may also help users achieve their goals faster; this would be particularly useful to the expert user who does not want to go through all of the mouse-overs to achieve a goal.

  7. Final Specifications

    The final specifications are based on each method of usability testing performed throughout each iteration of development and testing. Many of the methods reported the same usability problems, and the corresponding design recommendations are emphasized. As well, a number of distinct yet equally problematic issues were discovered using each method of testing; the recommended web page design strategies to solve each of these issues are indicated.

    The most common problem identified by nearly all of the methods was the lack of intuitive and descriptive headings and subheadings. To remedy this issue, card sorting was performed. Card sorting also addressed the issue of which subheadings should be grouped together under which heading. In performing card sorting it was found that the headings in order from top to bottom should be:

    1. Search UW Resources

    2. Other Libraries

    3. Library Information

    4. Services

    5. What’s New

    6. Outside Links

    7. Help

    The contents for each heading are specified in Table 6. It was also established in performing the card sorting test that many of the library terms required the use of jargon and therefore the subheadings could not be entirely intuitive. To solve this problem it is important to include a descriptive pop-up box upon mouse-over of many of the subheadings. The suggested descriptions are also listed in Table 6.

    Table 6: Suggested List of Headings and Related Subheading Groups

    Category (in rank order) Related Subheadings
    Search UW Resources
    • TRELLIS: our catalogue ("Search for library materials" for mouse-over)
    • Journal Indexes ("Journal searches" for mouse-over)
    • Search by Program ("Program related material" as mouse-over description)
    • E-Journals ("Electronic version of research journals" as mouse-over description)
    • E-Textbooks ("Electronic version of textbooks" as mouse-over description)
    • E-Data ("Electronic data" as mouse-over description)
    • Reference Tools
    • Course Reserves
    • Site Map
    Other Libraries
    • Materials from Guelph/Laurier/Annex
    • Materials From Other Libraries
    • Laurier Library
    • Guelph Library
    • From Other Libraries
    • TUG home page ("TriUniversity Group Libraries" as mouse-over description)
    • Kitchener Public Library
    • Conestoga College Library
    Library Information
    • Hours/Locations
    • Guide to Libraries
    • Staff/Administration
    • Library Development
    Services
    • Renewals
    • View Your Library Record
    • Connect From Home
    • Accommodations For People With Disabilities
    • Tours and Workshops
    What’s New
    • News/Events/Exhibits
    • Tell Us/Ask Us
    Outside Links
    • Internet Search Tools
    • UW home page
    Help
    • TRELLIS Help (our catalogue)
    • Undergraduate
    • Graduate
    • Faculty and Staff
    • Alumni
    • Distance Education
    • Community Patrons
    • Research Guides
    • Online Instructions

    Another commonly noted problem identified through the usability evaluations is the lack of an effective navigation bar. By providing an effective navigation bar on each of the gateway pages, both novice and expert users should be able to complete commonly required tasks with greater speed and efficiency. The ideal navigation bar should offer the most commonly used functions of the library web page. Based on a hypothesis of frequency of use, the links to be included on the side navigation bar are: Home, Trellis, Journal Indexes, E-Journals, Site Map, Text Version and Library Forms (as a pull-down menu). The benefits of adding each of these to the side navigation bar have been shown through KLA for both advanced and novice users. It would have been beneficial to verify these findings using lab-based testing with a representative population; however, time did not permit this additional testing.

    The final commonly discovered usability issue was the difficulty of completing tasks that are unfamiliar to users and evaluators alike. This problem was most evident in the analyses of the original gateway page using the heuristic evaluation, hierarchical task analysis, and walkthroughs, and it was detected again in the discount usability walkthrough and the lab-based testing. The competitive analysis of the current gateway page against other library gateway pages found that pages containing a search function improved usability for the novice user. The most beneficial location for the search function is theorized to be at the top of the side navigation bar, because most users scan the page top to bottom and left to right.
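    A search box at the top of the side navigation bar could be as simple as the following fragment. The form action and parameter name are assumptions for illustration; the actual values would depend on whatever search backend the library site uses.

    ```html
    <!-- Hypothetical search box for the top of the side navigation bar -->
    <form action="/search" method="get">
      <input type="text" name="query" size="15">
      <input type="submit" value="Search">
    </form>
    ```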

    Heuristic evaluations and design walkthroughs found a number of similar problems. One of these is the lack of appropriate feedback on mouse-over of subheadings; it is important to inform users of what event has taken place as a result of their action. To improve this feedback, it would be beneficial to change the button colour on mouse-over for both headings and subheadings. A related problem discovered in the heuristic evaluations was a lack of mapping from a selected heading to its group of subheadings. An effective solution would be an arrow leading from the heading, once clicked, over to a concave grouping of its subheadings. Usability testing of prototype #3 found that the arrow to the subheadings improved the mapping from heading to subheading; due to limitations of the web development tools, however, it was not possible to group the subheadings in a concave fashion. The heuristic evaluations also found a great deal of dead space on the gateway page: areas of the page that users may notice to be very empty. Dead space affects aesthetics rather than usability directly. The addition of a side navigation bar, along with a different colour scheme, would decrease the amount of dead space.
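    The colour-change feedback on mouse-over can be expressed directly in a stylesheet. A minimal sketch follows, assuming the headings and subheadings are rendered as links (`a` elements), since browsers of this era generally support `:hover` only on links; the class names and colours are illustrative, not taken from the actual site.

    ```html
    <style type="text/css">
      /* Hypothetical class names; a gold highlight on mouse-over gives
         immediate visual feedback that the item is active */
      a.heading, a.subheading { background-color: #ffffff; color: #000000; }
      a.heading:hover, a.subheading:hover { background-color: #ffcc00; }
    </style>
    ```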

    In the lab-based testing it was noted that a number of participants experienced difficulties when moving the mouse from a selected heading (activated by mouse-over alone) to the desired submenu. The difficulties arose when users were forced to steer around other headings on the way to the submenu rather than moving directly to the subheading. This issue was solved by requiring the user to click a heading to display its submenu; the submenu then remains visible until another heading is clicked.
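    The click-to-reveal behaviour described above reduces to a small piece of state logic: the active submenu changes only on a click and persists regardless of mouse movement. A minimal JavaScript sketch of that logic, independent of any particular page markup (the function and property names are our own, not from the prototype):

    ```javascript
    // Tracks which heading's submenu is displayed. A submenu opens on click
    // and stays open until a different heading is clicked, so stray mouse
    // movement over other headings cannot dismiss it.
    function createMenuState() {
      return {
        active: null,            // name of the heading whose submenu is shown
        click: function (heading) {
          this.active = heading; // clicking a heading shows its submenu
        },
        mouseOver: function (heading) {
          // mouse-over alone never changes the displayed submenu
          return this.active;
        }
      };
    }
    ```

    In the page itself, `click` would be wired to each heading's onclick handler, and the `active` value used to show or hide the corresponding submenu block.
    
    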

    The current UW Library gateway page presents an image of the Davis Centre Library; however, this image is not representative of library services for all users. It would be beneficial to present an image that represents all of the libraries, such as the school crest or a picture of each library. It is also suggested that the colour scheme be changed to black and gold, with a white background in the main section of the page, to correspond better with University of Waterloo branding. Currently, the gateway page relies heavily on the colour red, both for feedback and in the general colour scheme. A problem with this choice is that a majority of individuals with colour blindness have difficulty perceiving red. This reinforces the use of gold as feedback when a button or subheading is selected, though not for text purposes.

 

Appendix A: Summary of Client Meetings

The main function of the client meetings was to formalize communication between the design team and the client contact. During these meetings we updated our client on our progress, ideas, and plans, and she acted as a source of information, guiding us on what CNAG viewed as the focus of the project so that we could synchronize our goals with the overall objectives of the UW Library.

January 25, 2001
Present: Shawn, Chris, Gerald, Rhoda, Dylan

- We had prepared topics to discuss in case Connie did not have any specific details to raise.

- We had each worked through the 20 questions from the handout and compiled a list of the problems we encountered. This led to an open discussion with Connie on the following areas:

  • Wording and language – needs to be more intuitive (e.g. abbreviations such as ILL and TUG used on the Gateway)
  • Mouse-over display – for a first-time user, its operation may not be self-evident
  • Search function – could this be useful?
  • Colour – Connie wanted our opinion on the current site’s scheme
  • Graphics – Connie wanted to discuss whether additions would be helpful

- We also asked Connie to help us with the questions we could not answer.

- Connie also agreed to find answers to the following questions at her CNAG meeting:

  • What CNAG felt our primary goal should be for this project
  • Which areas were absolutes that CNAG did not want changed
March 2, 2001
Present: Shawn, Chris, Gerald, Rhoda, Dylan

- The team was able to update Connie on the recommendations we had made in our Report #1.

- We showed Connie our prototype, and she gave us feedback on it.

- She reported answers to the questions we had from the last client meeting.

March 21, 2001
Present: Shawn, Chris, Gerald, Rhoda
Regrets: Dylan

- The team provided Connie with a copy of our Report #2, and she gave us feedback and comments on our recommendations.

- She commented that our recommendations reflected a design that a novice user (like herself) could use easily.

- She commented that the design did not intimidate her as some websites have in the past, and that she liked the simpler style we adopted (no unnecessary graphics, colourful banners, etc.).

- We invited her to our user testing session.

 

Appendix B: Summary of Task Allocations

The tasks were divided among team members as evenly as possible in terms of time commitment and difficulty, taking into account personal interests in the topics as well as personal strengths. The breakdown for each report is as follows:

Report 1

Task Subtask if Applicable Team Member
Title Page N/A Chris
Abstract N/A Chris
Table of Contents N/A Chris
Table of Tables N/A Chris
Table of Figures N/A Chris
Introduction N/A Shawn, Chris
Heuristics Evaluation Intent of Heuristics Method Chris
Presentation of Findings Chris
Discussion of Findings Chris
Hierarchical Task Analysis Intent of HTA Method Shawn, Rhoda
Presentation of Findings Shawn, Rhoda
Discussion of Findings Shawn, Rhoda
Hierarchical Task Analysis Figures Shawn
Walkthroughs Intent of Walkthroughs Method Gerald
Presentation of Findings Gerald
Discussion of Findings Gerald
Competitive Analysis Intent of Competitive Analysis Dylan, Chris
Presentation of Findings Dylan
Discussion of Findings Dylan, Chris
Evaluation of Client Concerns Intent of Evaluation Rhoda
Presentation of Findings Rhoda
Discussion of Findings Rhoda
Synthesis of Findings N/A Chris, Gerald, Rhoda
Recommendations N/A Chris, Gerald, Rhoda
Proposal for Phase 2 N/A Gerald, Rhoda
Gantt Chart N/A Gerald, Shawn
Task Allocation Chart N/A Rhoda
Illustrations N/A Dylan
Editing N/A Chris, Gerald, Rhoda

Report 2

Task Subtask if Applicable Team Member
Title Page N/A Rhoda
Abstract N/A Gerald
Table of Contents/Figures N/A Rhoda
Introduction N/A Gerald
Competitive Analysis of Webpages using Heuristics Rationale Gerald
Methods Gerald
Presentation of Findings Rhoda
Discussion of Findings Rhoda
Recommendations Rhoda
Card Sorting Rationale Chris
Methods Shawn
Presentation of Findings Chris & Shawn
Discussion of Findings Chris & Shawn
Recommendations Chris & Shawn
Synopsis N/A Gerald
Prototypes Prototype 2 Dylan
  Proposal for Third Iteration Dylan
Task Allocation table N/A Rhoda
Editing N/A Rhoda

Report 3

Task Subtask if Applicable Team Member
Title Page N/A Rhoda
Table of Contents/Figures N/A Gerry
Executive Summary N/A Gerry
Introduction to Problem N/A Rhoda
Interactive System Problem Statement N/A Rhoda
Project Constraints N/A Rhoda
Project Requirements N/A Rhoda & Gerry
Phase 1 UCD Methods Summary of Phase 1 Methods Rhoda
Impact of Phase 1 Methods on Prototype Development Rhoda
Phase 2 UCD Methods Summary of Phase 2 Methods Gerry
Impact of Phase 2 Methods on Prototype Development Gerry
Phase 3 UCD Methods Intent of Usability Testing Chris & Shawn
Methodology Chris & Shawn
Results and Discussion Chris & Shawn
Final Specifications N/A Chris & Shawn
Summary of Client Meetings N/A Rhoda
Team Project Task Allocations N/A Rhoda
Team Learning N/A Chris & Shawn
Prototype Final Design Dylan
Presentation Design N/A Chris & Shawn

Shawn Kavanaugh, Christopher Klachan, Gerald Lai, Rhoda Lee, Dylan Lum

 

Appendix C: Summary of Learning

As Smith (2000) states, "Individual learning, no matter how wonderful it is or how great it makes us feel, is fundamentally irrelevant to organizations … the learning units of organizations are ‘teams’, groups of people who need one another to act."

We feel that this is fundamentally true for our design team. We could never have accomplished what we have individually, or even with only a few of the team members. As we began to work together, we soon found that we each brought our own strengths, weaknesses, interests, and perspectives, which allowed us not only to combine our assets but also to be creative and insightful. For example, some members have strong leadership and organizational qualities, while others have great presentation skills; recognizing this allowed us all to adopt roles and responsibilities that would produce the best designs and reports and, when required, to break into complementary pairs that allowed learning and teaching on both members’ parts.

We also learned that we needed to organize our time more effectively. At the start of the project we were able to stay up to date on our progress and hold long meetings and discussions. As the term progressed, however, and other school projects and courses demanded large amounts of time, the team was unable to maintain the same level of regular time commitment. Fortunately, at almost any given time at least one member was thinking about the design project and could remind the others of commitments such as meeting with our client and planning our testing and report formats.

