
Subjective Cognitive Workload, Interactivity and Feedback in a Web-based Writing Program

Lisa Emerson
Massey University
L.Emerson@massey.ac.nz
Bruce R. MacKay
Massey University
B.MacKay@massey.ac.nz

Abstract

This investigation compares the experiences and subjective cognitive workload of students undertaking a lesson on an aspect of micro-level writing skills in web-based and paper-based versions. Both versions of the lesson were based on the principles of interactive learning, specifically on a modified version of Chou's (2003) model. The analysis draws on two questionnaires combining quantitative and qualitative items, and subjective cognitive workload is examined using the NASA-TLX. The analysis showed that while students were positive about the lesson in both modes, they experienced a higher subjective cognitive workload with the web-based lesson. The paper speculates that this difference may be accounted for by the different approaches the two versions take to providing feedback to students, and suggests that this factor be tested in future research.


Introduction

While many studies have compared web-based courses with traditional classroom-based courses (see, for example, Meyer, 2003; Gal-Ezer and Lupo, 2002; Olson and Wisher, 2003), little research has compared students' experience of a specific web-based lesson on micro-level writing skills with an equivalent lesson on paper where both conform to the principles of effective interactive teaching, and no research is available which measures and compares the subjective cognitive workload experienced by students in a tertiary-level writing course undertaking such a lesson in the two modes.

This paper reports on a pilot study undertaken to compare the experiences of students working through a lesson on apostrophe usage offered in either web-based or paper-based form. It describes how the lesson was developed from the research on effective interactive teaching (using a modified form of Chou's 2003 model), the differences between the two modes of instruction, and the method of comparing students' experiences of the two modes (including a discussion of the NASA-TLX, a measure of subjective cognitive workload). Some of the results were unexpected, particularly those related to the level and form of interactivity, and complicated the interpretation of the findings. The paper concludes with consideration of how this pilot needs to be pursued and expanded.

Critical features of effective web-based teaching

Web technology offers unique opportunities to provide interactive material that encourages active, deep and reflective learning. According to Katz and Yablon (2002), the Internet is uniquely equipped to provide interactivity, and "interactivity of all types has been shown to meet general student needs more comprehensively than other distance learning modes" (p.70). The challenge for tertiary-level teachers is to identify unique ways in which the Internet can engage students in learning material.

In terms of writing or language classrooms on the web, there are particular concerns. Reynard (2003) discusses the idea that "language learning requires a process-based orientation (Breen and Littlejohn, 2000). The individual learner should be able to negotiate a learning path that reflects personal learning goals and needs." She cautions against using the technology simply to duplicate classroom-based teaching or to convey information and thus promote passive learning. At a quite different level of concern, Mehlenbacher et al. (2000), Najjar (1998) and Rose (1999) challenge researchers such as Katz and Yablon (2002) and caution against developing interactive learning methods which may either overwhelm some learners with sensory material or give students no time to reflect and consider their mistakes. The danger of immediate feedback, they suggest, is that the student simply moves on rather than developing their understanding further.

Given these cautions - that interactivity in itself is not sufficient to ensure appropriate learning strategies - we may consider three models of interactivity. The first allows for only the very limited interactivity present on most web sites, such as clicking links, feedback forms and search engines (French et al., 1999). The second model prioritises student-teacher and student-student interactivity, seeing the web as a new form of interpersonal communication within a learning context (Hoey et al., 1997).

This model was not appropriate for our purposes, since our aim was to produce interactive material that students could use on their own and which made them less dependent on the teacher or a classroom setting.

The third model - and the one we used as a model for the design of our webtool - is proposed by Chou (2003) who identifies a series of key interactivity dimensions that she sees as critical to effective online pedagogy:

  1. Choice. Students need a range of material provided in different ways and through different media so that they can choose a learning pathway that suits their needs.

  2. Non-sequential access of choice. Users need to be able to choose material in a variety of ways - there should not be a single linear pathway.

  3. Responsiveness. The system needs to respond to the learner's learning choices immediately.

  4. Monitoring information use. The system should collect information on the users and how they access and use the information, so that the pedagogy can be improved.

  5. Personal-choice helper. Information should be provided which assists the learner in making choices about which information to access.

  6. Adaptability. The information made available should be adaptable to the individual learner's needs.

  7. Playfulness. While this may not be seen as essential, learners are often motivated to learn if they perceive the material presented and the pathway through the learning material as fun and enjoyable.

  8. Facilitation of interpersonal communication. Student-teacher and student-student communication should be encouraged in a variety of synchronous and asynchronous ways.

  9. Ease of adding information. The system should be easy to develop, so that teachers can revise material that proves pedagogically unsound. A system which requires academics to learn complex coding, or which leaves them reliant on expert support for any changes, is not advisable.

We modified this model to take into account the comments of Mehlenbacher et al. (2000) on the importance of developing reflective learners, which we saw as critical in terms of developing students' deep-learning strategies, by adding a tenth dimension:

  10. Sensitive and directive feedback. The system should provide immediate feedback but suggest possible responses to, or avenues to take as a consequence of, that feedback.

The e-learning resource: Interactive grammar!

The Interactive grammar! tool[1] is designed to allow students to work their way through a series of lessons on different aspects of punctuation, to test their understanding of the material, and to move on at a pace that suits their needs. Three features are critical in its development.

First, each section of instruction allows students to absorb new information and then test their understanding until they are confident that mastery has been reached. Once students have absorbed one set of material, they are presented with a series of examples that allow them to try out their new knowledge. Answers are provided, and once students are confident that they can handle the material, they proceed to a short formal (multichoice or short-answer) test.

The test is marked instantly and provides feedback. The feedback is limited: we wanted students to know which questions they had answered incorrectly, but we did not provide them with the correct answers. This is in line with the comments of Mehlenbacher et al. (2000) that too much feedback can deny students the opportunity to reflect on their performance and improve their understanding.

Once students have worked their way through the allotted topics (in the apostrophes maze, for example, there are three topics - apostrophes in contractions, simple possessive apostrophes, and exceptional possessive apostrophes), they are given a 20-question test randomly selected from a question bank that contains 10 questions from each of the three topics. If they complete this test successfully, they are advised that they have reached mastery.
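To make this structure concrete, the test-assembly and limited-feedback logic can be sketched as follows. This is a minimal sketch only: the actual Interactive grammar! code is not published, so the data structures, function names and the pass threshold below are our own assumptions.

```python
import random

# Hypothetical question bank: 10 questions per topic, 30 in total.
# (Prompts and answers are placeholders, not real lesson content.)
QUESTION_BANK = {
    topic: [{"prompt": f"{topic} question {i}", "answer": "a"} for i in range(10)]
    for topic in ("contractions", "simple possessives", "exceptional possessives")
}

def build_mastery_test(bank, size=20):
    """Randomly select `size` questions from the pooled question bank."""
    pool = [q for questions in bank.values() for q in questions]
    return random.sample(pool, size)

def mark_test(test, responses, pass_mark=0.8):  # pass threshold assumed
    """Mark instantly but give limited feedback: report which questions
    were answered incorrectly without revealing the correct answers."""
    wrong = [i for i, q in enumerate(test) if responses[i] != q["answer"]]
    passed = (len(test) - len(wrong)) / len(test) >= pass_mark
    return {"passed": passed, "incorrect": wrong}  # no answers returned

test = build_mastery_test(QUESTION_BANK)
print(mark_test(test, ["a"] * 20))
```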

A second feature of Interactive grammar! is that it allows students choice in terms of their progress through the lesson - in this respect it is student-driven rather than instructor-driven. If students pass the test at each level, they move on to the next topic. If they fail the test, however, they have a choice of responses: they can either move vertically, by accessing another lesson on the same topic (designed to enrich their understanding of it), or move horizontally, by bypassing further lessons on the topic and moving on to the next. Our original plan had been to take a totally instructor-controlled approach and not allow students to move on to the next topic until they had demonstrated mastery of the first. However, in the light of the first item in Chou's model and work by Shoener and Turgeon (2001), we decided that student choice of path was important, and that we would provide options that allowed the material to be student-controlled (Figure 1).
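In outline, the pathway of Figure 1 corresponds to the following control flow. Again this is only a sketch: the function names and interfaces are invented here to make the vertical/horizontal branching explicit.

```python
TOPICS = ["contractions", "simple possessives", "exceptional possessives"]

def run_lesson(topics, take_test, ask_student, enrichment_lesson):
    """Student-driven traversal: a passed test moves the student on; a
    failed test offers a choice between a vertical move (an enrichment
    lesson on the same topic, the advised path) and a horizontal move
    (bypassing further lessons on that topic)."""
    for topic in topics:
        while not take_test(topic):
            choice = ask_student(
                f"'{topic}' not yet passed: study the in-depth lesson "
                f"(recommended) or move on? [study/move on]"
            )
            if choice == "move on":
                break                      # horizontal move to the next topic
            enrichment_lesson(topic)       # vertical move, then retest
    # all topics traversed: offer the final 20-question mastery test
```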

Such a learner-centred approach is seen by Lin and Hsieh (2001) as providing a powerful learning experience, and one which is ideally suited to web-based instruction: "Web-based teaching can empower individual learners by handing them control over the learning experience. It is no longer necessary to establish a fixed learning sequence for everyone; individual learners can make their own decisions to meet their own needs at their own pace and in accordance with their existing knowledge and learning goals. These are advantages to be valued in an educational environment" (p. 382). Nevertheless, although we chose to take such a learner-driven approach, the lesson instructions do advise students who do not pass the test to pursue the option of studying more in-depth material on the initial topic before moving on to the next (see item 5 in Chou's (2003) model).

A third feature of Interactive grammar! is that it is designed, as far as possible, to be accessible and enjoyable for students. Cartoons and visual prompts are included regularly to engage students in the material and provide light relief, and the style of writing is conversational. Where possible, grammatical terms are avoided, lessons are explained in terms of conventional usage, and examples are taken from everyday life rather than formal contexts.

Figure 1: Schematic of student-controlled learning pathway


In terms of Chou's (2003) model for successful interactivity, Interactive grammar! meets all the criteria for successful interactive web-based teaching:

  1. Choice. Students choose how quickly they will progress through the maze, and have choices relating to depth of learning.

  2. Non-sequential access of choice. Students have access to material which allows them to move horizontally or vertically within the maze.

  3. Responsiveness. The system responds to the learner's learning choices immediately.

  4. Monitoring information use. The system collects information on the users and how they access and use the information, to improve pedagogy.

  5. Personal-choice helper. Although students have choice, they are given clear advice on how to extend their learning opportunities.

  6. Adaptability. The information is adaptable to the individual learner's needs.

  7. Playfulness. We consider this to be important, especially with such a dry subject as punctuation and grammar. The cartoons are designed to encourage students and make the activity more enjoyable.

  8. Facilitation of interpersonal communication. Ramosus (MacKay, 2004) permits students to email the teacher directly from the application. Furthermore, the maze is positioned within a larger website which allows for direct contact with the teacher through email or private message, and class discussion through forums and chatrooms.

  9. Ease of adding information. The system allows someone unfamiliar with any kind of programming to add or adjust any information in the maze.

  10. Sensitive and directive feedback. Feedback on the tests is immediate, but we only provide sufficient information to encourage students to think through the issues.

Method

In assessing our web-based lesson in this pilot study, two questions concerned us:

  1. What were students' experiences of a web-based vs. paper-based lesson on micro-writing skills?

  2. Did students experience a higher subjective cognitive workload when accessing the lesson in one mode rather than another?

To assess this we developed a paper-based version of the lesson and investigated methods of assessing workload.

Development of the Paper-based Lesson

The paper-based lesson followed the structure and content of the web-based Interactive grammar! precisely, with one important distinction. While it is possible on the web to provide feedback without providing answers, this was not replicable on paper: for students to self-mark their tests and evaluate their progress, we had to provide an answer sheet at the end of the lesson to which they could turn for a list of answers.

It may be significant to note that, unlike most comparisons of web-based and paper-based material, the paper-based lesson was designed after the web-based version, and that both were designed in the light of the research on interactive learning discussed above.

Workload Assessment

Subjective cognitive workload was assessed using a paper-based form of the NASA-TLX. Noyes et al. (2004) define cognitive workload as "the interaction between the demands of a task that an individual experiences and his or her ability to cope with these demands. It arises due to a combination of the task demands and the resources that a particular individual has available" (p. 111). The NASA-TLX assesses subjective cognitive workload as a function of six demands - mental, physical, temporal, performance, effort, and frustration - each presented on its own index (Luximon and Goonetilleke, 2001; Rubio et al., 2004). Various approaches have been devised to measure workload objectively, but subjective methods are still preferred for their ease of use, low cost and established efficacy.

A number of instruments are available, but we employed the NASA-TLX because the literature generally suggests it has greater sensitivity than other measures such as SWAT (see, for example, Rubio et al., 2004, or Charlton & O'Brien, 2002).
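For readers unfamiliar with the instrument, the standard weighted NASA-TLX scoring procedure can be sketched as follows. On this scheme, the component figures reported in Table 3 below appear to be mean adjusted ratings (raw rating x weight); that reading, like all the names in the sketch, is ours.

```python
from itertools import combinations

DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]
PAIRS = list(combinations(DIMENSIONS, 2))  # the 15 pairwise comparisons

def tlx_scores(ratings, winners):
    """Standard NASA-TLX scoring: each dimension is rated 0-100; its weight
    is the number of the 15 pairwise comparisons it 'wins' (0-5); the
    adjusted rating is raw rating x weight; and overall workload is the
    sum of the adjusted ratings divided by 15 (so it falls back on 0-100)."""
    assert len(winners) == len(PAIRS)
    weights = {d: sum(1 for w in winners if w == d) for d in DIMENSIONS}
    adjusted = {d: ratings[d] * weights[d] for d in DIMENSIONS}
    return adjusted, sum(adjusted.values()) / len(PAIRS)

# e.g. a raw mental-demand rating of 66 that wins 3 comparisons gives an
# adjusted score of 198 -- the scale on which Table 3's components appear.
```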

What we specifically wanted to know was whether the web-based lesson placed a higher subjective cognitive workload on participants compared with a more familiar paper-based lesson. Researchers such as Bunderson et al (1989) and Clariana and Wallace (2002) who have investigated "test mode effect", i.e. the concept that "identical paper-based and computer-based tests will not obtain the same result" (p. 593), suggest that there is sufficient empirical data to establish that students respond differently to material on the web compared with material on paper, but both the actual causes and outcomes of those differences remain unclear. Our conjecture was that subjective cognitive workload may be a factor in that difference.

Trial Composition and Process

The trial we undertook involved 41 participants, who were randomly allocated one of two tools - either the web-based tool Interactive grammar! or a paper-based version of the same material - to complete a lesson on apostrophe usage.

Participants first completed a pre-test questionnaire which posed questions about their attitudes to, and experience of, learning micro-level writing skills, using a 1-to-7 Likert response scale; in particular we wanted to assess students' level of confidence in using these skills before they undertook the lesson.

Once they had completed one form of the lesson (either web or paper based) they were asked to fill in two post-test questionnaires. The first sought feedback on the lesson through a series of questions using either a 1-to-7 Likert response scale or long answer responses. In particular we wanted to find out whether their confidence in using the skills had increased and to identify aspects of the lessons that they liked or disliked. The second post-test questionnaire was the NASA-TLX.

Responses on the Likert scales were analysed by the Kruskal-Wallis test using the non-parametric procedure NPAR1WAY of SAS. Group differences in response to the NASA-TLX questionnaire were analysed using the ANOVA procedure of SAS.
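An equivalent analysis can be expressed in Python with scipy.stats; the data below are placeholders standing in for the study's responses, and the variable names are ours.

```python
from scipy import stats

# Placeholder per-participant data, not the study's actual responses.
likert_web = [5, 6, 4, 7, 5, 6, 5]     # a Likert item, web-based group
likert_paper = [6, 6, 7, 5, 6, 7, 6]   # the same item, paper-based group
tlx_web = [52, 47, 55, 41, 49]         # overall workload, web-based group
tlx_paper = [30, 27, 35, 26, 28]       # overall workload, paper-based group

# Kruskal-Wallis test (the analysis run in SAS PROC NPAR1WAY).
h, p = stats.kruskal(likert_web, likert_paper)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# One-way ANOVA (the analysis run with the SAS ANOVA procedure).
f, p = stats.f_oneway(tlx_web, tlx_paper)
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")
```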

Results

Prior attitudes, skills and experience

Despite the random allocation of testing material to the volunteers, differences between the groups existed (Table 1). Six (26%) of those in the web-based group had completed a tertiary-level communication course, whereas 11 (61%) of those in the paper-based group had done so. While both groups reported positive confidence in their ability to use correct grammar and punctuation, the volunteers in the paper-based group were significantly more confident than those in the web-based group before attempting the lesson. The groups were similar in the way they perceived their mastery of grammar, punctuation and spelling.

Despite the differences in confidence between the two groups, the groups showed similar attitudes to the importance of grammar, punctuation and spelling in a professional context, with both groups valuing these skills highly.

 

Question                                                                      Web-Based (n=23)   Paper-Based (n=18)

I mastered these skills (punctuation/grammar) at primary school.                    3.8                4.7 ns
I mastered these skills (punctuation/grammar) at secondary school.                  3.8                4.3 ns
I have studied these skills (grammar/punctuation) during a tertiary-level writing/communication course.   4.4   5.4 ns
I feel very confident about my ability to use correct grammar.                      4.2                5.3 *
I feel very confident about my ability to spell correctly.                          5.0                5.4 ns
I feel very confident about my ability to use the correct conventions of punctuation.   4.2           5.2 *
I understand how to use apostrophes and feel that I employ them correctly most of the time.   5.1     5.7 *
I think the ability to use correct grammar and punctuation is important for people embarking on a professional career.   6.0   6.5 ns

z Questions are based on a 1-to-7 Likert-type scale where 1 = strongly disagree and 7 = strongly agree.

ns, * non-significant, significant at 5%, respectively

Table 1: Mean Responses to Pre-test Questions (z)


Post-questionnaire 1: Student Experiences of the Lessons

The quantitative results for post-questionnaire 1 showed that both groups responded positively to the lessons (Table 2). The paper-based group was significantly more positive than the web-based group about being able to see where they were having problems understanding and applying the material, about the structure of the lesson, and in recommending the lesson to others. Perceived understanding of apostrophes on completing the lessons was similar for both groups, although the mean scores in the paper-based group were consistently higher than those of the web-based group.

The qualitative feedback from post-questionnaire 1 provided strong support for the value of the lessons in both the web-based and paper-based lessons. When commenting on the best aspects of the lesson, most students commented on the clarity of the lesson, with several commenting favourably on the way the lesson allowed for more in-depth focus on a problem area if mastery wasn't initially attained and being able to move "at my own pace". Students often commented that they found the lesson fun, but that they also enjoyed the challenge it presented to them.

 

Question                                                                      Web-Based (n=23)   Paper-Based (n=18)

I feel that I understand the rules of apostrophe usage for contraction having completed this lesson.   5.4   6.1 ns
I feel that I understand the rules of apostrophe usage for possession at the completion of this lesson.   5.3   6.2 ns
I found the lessons clear and easy to understand.                                   5.6                6.2 ns
I could see where I was having problems understanding and applying the material.    4.7                5.9 *
It was easy to find my way around the lessons.                                      6.1                6.4 ns
I thought the lessons were fun.                                                     5.0                5.4 ns
The structure of the lessons (lesson → try it out → test) worked well in terms of aiding my learning.   5.7   6.3 *
I would recommend these lessons to anyone else who was having trouble with apostrophes.   6.3          7.8 *
Time taken to complete lesson (minutes)                                            23.3               33.3 *

z With the exception of the time question, questions are based on a 1-to-7 Likert-type scale where 1 = strongly disagree and 7 = strongly agree.

ns, * non-significant, significant at 5%, respectively

Table 2: Mean Responses to Post-test Questions (z)


An interesting discrepancy between the two groups emerged in response to the question of how the lesson could be improved. Most students who did the paper-based lesson either suggested minor refinements or said directly that they could see no way in which the lesson could be improved. This was not the case with the web-based lesson: slightly over half of the students in this group (52%) specifically commented that it was frustrating not to be given the correct answer, or not to be able to go back to the test and try out a different answer.

Workload Assessment

In terms of subjective cognitive workload, students experienced an overall higher workload with the web-based lesson than with the paper-based lesson (Table 3). While some components of workload (e.g. frustration experienced, physical demand, and stress related to performance) did not differ significantly between the two groups, there is a clear indication that all aspects of subjective cognitive workload other than physical demand were higher for the web-based tool. The results do show that students who undertook the web-based lesson found the task significantly more mentally demanding, and felt that they had to work harder to accomplish their level of performance (effort), than those who completed the paper-based lesson.

 

Workload component        Web-Based (n=19)   Paper-Based (n=17)

Mental demand                   198                111 *
Physical demand                  11                 16 ns
Temporal demand                  96                 89 ns
Performance                     121                 54 *
Effort                          167                103 *
Frustration                     134                 67 ns
Overall workload                 48                 29 *

ns, * non-significant, significant at 5%, respectively

Table 3: NASA-TLX Summary Data


Discussion

Despite the random allocation of participants to the two groups, there were significant differences in attitudes, experience and skill between the two groups, so the results of this study can be regarded as indicative only.

Overall, students were enthusiastic about the lesson in both modes, and were positive about their ability to use apostrophes both for contractions and possessives at the end of the lesson. This is established in the quantitative feedback and reinforced in more detail in the qualitative feedback. While the paper-based group were more positive about their ability to use apostrophes correctly after the lesson (though not significantly so), it is possible that this was a function of their higher confidence level prior to the lesson.

The subjective cognitive workload level was higher on nearly all scales for the web-based lesson, and significantly higher in terms of the mental demands placed on students and the effort required to accomplish their level of performance. While a larger study is warranted to confirm these results, this difference in subjective cognitive workload merits discussion.

Several factors might contribute to the higher subjective cognitive workload for the web-based group. The result may be a function of the differences between the two groups - i.e. because the web-based group started with a lower confidence level, they may have had to work harder to produce a similar level of skill. If this is the case, a larger study will clarify the issue.

A second possibility is that computer-mediated interaction is more stressful for students than working with material on paper. We have no reason to believe that our students are computer-illiterate but, again, a larger study is required.

A third possibility takes the discussion away from the web-based vs. paper-based comparison to focus on the issue of feedback, since this was a critical difference between the two lessons: correct answers were supplied for the paper-based lesson but not for the web-based lesson. Given the number of students in the web-based group who highlighted this as an issue, it is possible that increased workload pressure was placed on students who did not have access to the correct answers and therefore had to work harder to ensure that their answers were correct.

It is important not to jump to conclusions concerning this issue of subjective cognitive workload and feedback. There is a tendency to assume that because students experienced a higher subjective cognitive workload with the web-based lesson, this is a negative factor. However, this may not be the case. One participant in the paper-based group wrote in some detail about the temptation to check the answers on the answer sheet rather than forcing herself to actually answer the questions, and then moving on to the next level without ensuring she had grasped the issues. This student wrote: "tell the [participants] that if they have an urge to check out the answers when they're unsure, they need to read or re-read the whole lesson instead". Her scores on the workload index were all quite low. We cannot know how many of the other participants in the paper-based group took this approach to the lesson, but any participant who did might have low scores on the workload indices yet fail to attain mastery. Without objective scores on students' actual ability to use apostrophes (as opposed to their subjective rating of their abilities), we cannot know whether students' subjective cognitive workload assessment is related to successful mastery of the material, or whether either of those factors relates to the issue of feedback.

It is important to be cautious in our interpretation. We take seriously the concerns of Mehlenbacher et al. (2000), who state: "in our desire to promote active learning, we may be guilty of promoting more interactive learning environments, environments that give immediate responses to students but that do not necessarily facilitate reflection or a careful consideration of all the materials and tasks" (p. 177). Similar concerns are expressed by Dumont (1996), who talks about the active user (rather than learner) as someone who can achieve goals quickly but whose "skills tend to converge on relative mediocrity" (p. 195). Mehlenbacher et al., commenting on this statement of Dumont's, continue: "thus, active users can be particularly good at moving quickly through a series of lower level tasks to reach a well-defined goal, but they may not fully understand the underlying complexity of the environment they are using" (p. 178).

This concern throws into doubt the learner-driven model of interactive web-based instruction as propounded by researchers such as Lin and Hsieh (2001). We may need to question their assertion that "individual learners can make their own decisions to meet their own needs at their own pace and in accordance with their existing knowledge and learning goals" (p. 382). Unless such freedom of choice is tested against empirical data confirming that learning has been achieved, we cannot be confident that students are always able to make wise choices about their learning needs.

Without objectively analysing participants' ability to use apostrophes following their completion of the lesson in either mode, we cannot know whether our paper-based group were more casual in their approach to the lesson because they had access to the answers to all the questions, and therefore experienced less cognitive workload pressure and a greater feeling of satisfaction, or whether the paper-based system, with its provision of fuller feedback, actually led to increased learning, which in turn reduced cognitive workload demand and led to an increased experience of satisfaction.

Conclusion

This study, which set out to compare the experiences and the subjective cognitive workload demands of a web-based lesson on micro-level writing skills with a similar lesson on paper, has, in some ways, been hijacked by a possibly more interesting problem about the nature of interactive feedback, active learning and learner-based instruction. While it is clear that students experienced higher subjective cognitive workload demands in using the web-based lesson, and a somewhat lower level of confidence and satisfaction, we cannot establish whether this is a negative or a positive result, i.e. we cannot know whether it correlates with the issue of feedback or with attainment of in-depth understanding.

Further research is needed, therefore, to clarify these issues. In particular, a larger study is needed which compares a larger number of participants across three groups: one group using a web-based form of the lesson which does not provide correct answers, one group using a web-based form of the lesson which does provide the answers, and a third group using a paper-based version which provides the answers. As well as gathering information on students' prior attitudes, experience and skills, and their subjective views on the lesson including cognitive workload assessment, such a study needs to record students' processes and decisions throughout the lesson, and to factor in an objective measure of their success, i.e. their performance in the final mastery test. In this way a comparison can be made between the web-based and paper-based lessons in terms of experience and subjective cognitive workload (the original focus of this study), and conclusions can be drawn about the efficacy of student-driven learning and the value of different kinds of feedback. Meanwhile, the whole issue of subjective cognitive workload as it pertains to web-based learning remains a promising area for further study.
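One design detail worth fixing in advance in such a study is allocation. Simple randomisation produced unbalanced groups in this pilot; blocked randomisation (shuffling a complete set of conditions within each successive block of participants) keeps group sizes balanced, and could be extended to stratify on prior coursework. A minimal sketch, with all names illustrative:

```python
import random

CONDITIONS = ["web, no answers", "web, answers", "paper, answers"]

def allocate(participants, conditions=CONDITIONS, seed=None):
    """Blocked randomisation: shuffle one complete set of conditions
    within each successive block of participants, so group sizes never
    differ by more than one."""
    rng = random.Random(seed)
    allocation = {}
    for start in range(0, len(participants), len(conditions)):
        block = list(conditions)
        rng.shuffle(block)
        for person, condition in zip(participants[start:start + len(conditions)], block):
            allocation[person] = condition
    return allocation

groups = allocate([f"participant {i}" for i in range(60)], seed=1)
```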

References

Breen, M. P., & Littlejohn, A. (Eds.). (2000). Classroom decision-making: Negotiation and process syllabuses in practice. Cambridge, UK: Cambridge University Press.

Bunderson, C. V., Inouye, D. K., & Olsen, J. B. (1989). The equivalence of paper-and-pencil and computer-based testing. In Educational measurement (pp. 367-407). Washington, DC: American Council on Education.

Charlton, S., & O'Brien, T. (2002). Human factors testing and evaluation. Mahwah, NJ: Lawrence Erlbaum Associates.

Chou, C. (2003). Interactivity and interactive functions in web-based learning systems: A technical framework for designers. British Journal of Educational Technology, 34, 265-279.

Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.

Dumont, R. A. (1996). Teaching and learning in cyberspace. IEEE Transactions on Professional Communication, 39, 192-204.

French, D., Hale, C., Johnson, C., & Farr, G. (Eds.). (1999). Internet based learning: An introduction and framework for higher education and business. Sterling, VA: Stylus Publishing.

Gal-Ezer, J., & Lupo, D. (2002). Integrating internet tools into traditional CS distance education: Students' attitudes. Computers and Education, 38, 319-329.

Hoey, J. J., Pettitt, J. M., & Brawner, C. E. (1997). Assessing web-based courses at NC State University, Raleigh, NC. Available at: http://legacy.ncsu.edu/info/assessment.html. Retrieved 3 November 2004.

Katz, Y. J., & Yablon, Y. B. (2002). Who is afraid of university internet courses? Education Media International, 39(1), 69-73.

Lin, B., & Hsieh, C. (2001). Web-based teaching and learner control: A research review. Computers and Education, 37, 377-386.

Luximon, A., & Goonetilleke, R. S. (2001). Simplified subjective workload assessment technique. Ergonomics, 44(3), 229-243.

MacKay, B. (2004). Ramosus: A tool for online scenario-based learning. Available at: http://ramosus.massey.ac.nz/ramosus_outline.asp. Retrieved 8 May 2005.

Mehlenbacher, B., Miller, C. R., Covington, J. S., & Larsen, J. S. (2000). Active and interactive learning online: A comparison of web-based and conventional writing classes. IEEE Transactions on Professional Communication, 43, 166-184.

Meyer, K. A. (2003). The web's impact on student learning. T.H.E. Journal, 30(10).

Najjar, L. J. (1998). Principles of educational multimedia user interface design. Human Factors, 40(2), 311-323.

Noyes, J., Garland, K., & Robbins, L. (2004). Paper-based versus computer-based assessment: Is workload another test mode effect? British Journal of Educational Technology, 35(1), 111-113.

Olson, T. M., & Wisher, R. A. (2003). The effectiveness of web-based instruction: An initial inquiry. International Review of Research in Open and Distance Learning, 3(2).

Reynard, R. (2003). Using the internet as an instructional tool: ESL distance learning. Proceedings of the Annual Mid-South Instructional Technology Conference.

Rose, E. (1999, Jan/Feb). Deconstructing interactivity in educational computing. Educational Technology, 43-49.

Rubio, S., Diaz, E., Martin, J., & Puente, J. M. (2004). Evaluation of subjective mental workload: A comparison of SWAT, NASA-TLX, and workload profile methods. Applied Psychology: An International Review, 53(1), 61-72.

Shoener, H. A., & Turgeon, A. J. (2001). Web-accessible learning resources: Learner-controlled versus instructor-controlled. Journal of Natural Resources and Life Sciences Education, 30, 9-13.




[1] Visit http://writery.massey.ac.nz/lisa.asp to view an example of the tool in action.


