Networked Learning Conference 2004


Researching Networked Learning – Critically Reviewing an Adaptive Evaluation

 

Liz Aspden and Paul Helm

Sheffield Hallam University

e.j.aspden@shu.ac.uk, p.a.helm@shu.ac.uk

 

ABSTRACT

This paper describes the process of evaluating a large-scale e-learning implementation at a UK university. It begins by outlining the context in which the evaluation is taking place, before reviewing the approach taken and methods used. The paper then evaluates the methods and methodologies, and discusses how the instruments used could be adapted and applied to researching a variety of networked learning situations.

Keywords
Blended learning, evaluation, student-centred, diaries, interviews, observations

 

INTRODUCTION/CONTEXT

This paper discusses the process of evaluating a large-scale e-learning implementation at Sheffield Hallam University (SHU), a campus-based UK university. SHU offers courses in all major discipline areas, including continuing professional development opportunities, and with over 24,000 students is the UK’s sixth largest university. As part of the institution’s commitment to innovation in learning and teaching, the e-learning@shu Project was established in 2001, with the main aim of encouraging a culture in which technology is used appropriately to complement campus-based activities. Part of its pedagogically-led work involves overseeing the implementation of a campus-wide virtual learning environment (VLE), Blackboard, whose adoption as a teaching tool is optional. An integral part of the project is a two-year programme of research and evaluation, and it is the methods used within this formative evaluation that are the subject of this paper.

Although the broad scope of the evaluation is to investigate the impact of the VLE on the student experience, it should be stated that our focus is not solely on what happens online. While we aim to understand what students are doing within a networked environment, we are also looking at the overall picture of student behaviour. Our evaluation thus takes a holistic perspective of the student experience, examining how use of the VLE is related to the overall experience of being a student at SHU. This perspective recognises that e-learning is complex and context-dependent (Hutchings, 2001) and the evaluation seeks to illuminate an array of issues (Parlett & Hamilton, 1972).

 

THE EVALUATION – AIMS & DESIGN

Faced with evaluating a campus-wide implementation, our initial concern was with the scale of the project. The research and evaluation project began in early 2002, at which point nearly 13,000 students were enrolled on at least one Blackboard site. An initial case study investigation (April – August 2002) into the experiences of staff who were using Blackboard gave us some anecdotal evidence of student feedback, and we considered using the emerging staff and student perspectives to inform a large-scale survey. However, it was felt that an investigation into the student experience needed to be more firmly grounded in the student perspective, and that a student-centred approach was important. The decision was therefore taken to begin with a small-scale, qualitative enquiry to clarify the main topic areas.

Within the broad scope of the evaluation, we had decided that the initial stage should investigate whether e-learning could add value to the student experience, and in consultation with colleagues from across the institution we defined a set of more specific areas of interest for the evaluation to focus on. These areas can be summarised through the following questions:

 

The purpose of these questions was not to provide a definitive list of the areas to be covered. While they provided an overall structure, we were keen to design an evaluation that retained sufficient flexibility to take account of any issues that might arise, but that didn’t ‘fit’ neatly into these areas. Therefore, we felt that it was important to select a method that would allow investigation of particular areas while allowing the participants to have an element of control over the specific issues that would be raised.

Having decided to pursue a small-scale, qualitative study to begin with, we needed to identify a suitable sample for the research. As the intention was to look at the campus-wide perspective rather than to evaluate the use of specific sites, we wanted to look beyond particular subject areas and courses, and we were keen to use the first phase of the evaluation to get an overview of the different uses of Blackboard. We therefore went back to the usage statistics to identify those students who had a substantial amount of their learning supported through Blackboard (whom we defined as those enrolled on four or more individual sites). Being on multiple sites, these students would be more likely to have an acute sense of the impact of the VLE across different situations.
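To make the selection concrete: assuming the VLE usage statistics can be exported as simple (student, site) enrolment records, identifying this cohort amounts to counting distinct sites per student and keeping those with four or more. The following sketch is illustrative only, with hypothetical field names rather than Blackboard’s actual export format:

    from collections import defaultdict

    def identify_cohort(enrolments, min_sites=4):
        """Return student IDs enrolled on at least `min_sites` distinct VLE sites.

        enrolments: iterable of (student_id, site_id) pairs, e.g. exported
        from the VLE's usage statistics (format assumed for illustration).
        """
        sites_per_student = defaultdict(set)
        for student_id, site_id in enrolments:
            sites_per_student[student_id].add(site_id)
        return {s for s, sites in sites_per_student.items() if len(sites) >= min_sites}

    records = [("s001", "bb-101"), ("s001", "bb-102"), ("s001", "bb-103"),
               ("s001", "bb-104"), ("s002", "bb-101"), ("s002", "bb-105")]
    print(identify_cohort(records))   # {'s001'}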

The data collection was scheduled to begin in November 2002. This was driven partly by practical reasons, as the enrolment process would be largely complete by this stage, and identification of the four-or-more cohort would be possible. However, we were also conscious that we wanted to allow students the chance to settle in to their learning environment following the start of the new academic year, and felt that starting the research any closer to the beginning of Semester 1 would capture only ‘first impressions’ of the VLE. While first impressions are valid, we felt that they would not provide the contextual data that we were keen to obtain, and we hoped to get a more considered reflection of the overall learning experience.

 

Phase 1

The need for a student-centred approach for the first phase of the evaluation led us to consider using open-ended (or semi-structured) interviews. Open-ended interviews have the advantage of keeping the interactions between interviewer and interviewee focussed while allowing individual perspectives and experiences to emerge (Patton, 1990). The interview guide approach is explained by Patton as follows:

“a list of questions or issues that are to be explored in the course of an interview…the interviewer remains free to build a conversation within a particular subject area, to word questions spontaneously, and to establish a conversational style – but with the focus on a particular subject that had been predetermined” (Patton, 1990, p. 283).

This form of qualitative interviewing (Mason, 2002) therefore went some way towards meeting our aims. However, because an element of the investigation concerns how students interact with online learning, we felt that the data would be enriched by combining interviews with observations of students using Blackboard. Matching self-reported data with observed behaviour can allow contextualisation of comments made by an interviewee, and can facilitate comparisons between individual perspectives.

Within this study, the combination of interview and observation worked in the following way: At the beginning of each interview/observation, the student was asked to log into Blackboard. The interview was framed by a series of prompts from the interviewer (such as ‘what do you normally do when you first log on?’), and students were asked to navigate around the VLE to illustrate any points that they wished to make. Each interview/observation was recorded, but the basis of the analysis is a set of detailed notes made during the process, with the tapes being used to provide reminders of tone of voice, specific phrases used, etc. In this way, the data provides a comprehensive, contextualised picture of what the students are saying, how they are saying it, and how they are interacting with the material on-screen.

The notes are stored in two ways. First, they are written up in the order of the interview, which allows an insight into how the student progressed from one aspect of the learning experience to another. Because the interviews were semi-structured, this progression could have been due to a student naturally making the connection between two aspects, or it might have been in response to a prompt from the interviewer. For example, in response to the question ‘what do you normally do when you first log on?’ the student might talk about checking a Discussion Board within a particular course. This would obviously be prompted by the interviewer. If, without any further prompts, the student then goes on to talk about the fact that they like online communication because it allows them to offer and receive peer support at crucial points in their study, then this would be an example of the student making a link to a positive aspect of the VLE.

Once the notes have been organised in the above way, they are then analysed according to a series of categories, which are linked to the areas of the prompts used. This facilitates comparison between particular areas which are common across all interviews, and is particularly valuable for exploring shared perceptions and differences of opinion about similar uses of the VLE.
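As a sketch of how this two-way organisation of the notes might be represented (the categories and fields shown here are illustrative, not the project’s actual coding scheme), each note can carry both its position in the interview and a prompt-linked category:

    from dataclasses import dataclass

    @dataclass
    class NoteEntry:
        order: int        # position within the interview, preserving its flow
        category: str     # category linked to the interview prompts
        prompted: bool    # raised in response to a prompt, or spontaneously
        text: str         # the detailed note itself

    notes = [
        NoteEntry(1, "first actions on logging on", True,
                  "Checks the Discussion Board on one course before anything else."),
        NoteEntry(2, "online communication", False,
                  "Values peer support at crucial points in their study."),
    ]

    # Chronological view: how the student moved from one aspect to another.
    chronological = sorted(notes, key=lambda n: n.order)

    # Category view: gathers comparable material across all interviews.
    by_category = {}
    for note in notes:
        by_category.setdefault(note.category, []).append(note)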

 

Phase 2

Between November 2002 and February 2003, 15 students took part in the interview/observations. The issues that were being raised by the students revealed an intriguing – and at times unexpected – picture of the impact that the VLE was having on their experience. Within the data we identified seven major emerging themes, and these indicated a number of different directions that the research could take. However, it was becoming apparent that the planned move to a large-scale survey would be inappropriate at this point. Although the depth of the data was good, its complexity meant that we were still unsure of exactly what to ask through a survey, and we felt that further clarification was needed.

One approach that we considered involved pursuing a common theme with a small sample of students to see if the issues could be clarified to any extent. We considered pursuing this in two different ways: either through structured/semi-structured interviews, or through the use of repertory grids (‘repgrids’). Repgrids were devised by Kelly (1955) within the context of personal construct theory, and have been adapted and used within educational research in a number of different settings (see Cohen et al, 2000, for examples of their application). Their use has been compared with using well-structured interviews (Alban-Metcalf, 1997), and we contemplated using them as a way of assessing how students viewed the different uses of Blackboard. In the end, however, we felt that both repgrids and focussed interviews would lose a lot of the complexity of the data, and that this would weaken our perspective of the overall student experience. Rather than narrow the focus at this stage, we felt that a more in-depth investigation would be more appropriate. We were keen to continue generating self-report data by eliciting students’ accounts of their own behaviour (Säljö, 1997) and, as the intention was to complement the verbal and observed data from the interviews, we considered using written accounts. This led to the development of a reflective diary.

Literature about the use of participant diaries as an educational research tool is sporadic. Diaries have been used in a variety of settings for social research (Corti, 1993), particularly within health-related subjects. As well as being used as a stand-alone data source, they can be used as an alternative to observation (Maas & Kuypers, 1974), particularly in sensitive situations (Elliott, 1997), and as preliminary data to inform in-depth interviews (Zimmerman & Wieder, 1975, cited in Plummer, 1983). Within education, the use of diaries as a learning and teaching aid is growing, and their application as a tool for educational research is reported in a variety of contexts (Tang, 2002; Nunan, 1992), some of which concentrate specifically on research into learning technologies (e.g. Breen et al, 1998; Johnson, 2001; Brace-Govan & Clulow, 2000).

Diaries can be used to collect both quantitative and qualitative data. Their use, however, is not without its problems. Bell (1999) discusses how diaries can be adaptable and valuable sources of information, but that they can be time-consuming to complete. Therefore, participants need to be clear about the commitment that is needed at the outset. From an ethical point of view, the input in terms of time and effort needs to be carefully weighed up against the benefits that can be accrued through their use. Additionally, Johnson (2001) noted that although participants using reflective diaries claimed that they were time-consuming to complete, the quality and quantity of the data generated was not sufficient to support this perception.

Other potential drawbacks include non-completion (where participants drop out part way through the study) and falsification. Deliberate falsification should be a minimal risk in a study such as this, where the data is not being associated with particular learning outcomes or objectives. In effect, participants are being asked for their opinions rather than being judged or assessed on the basis of what they write (although, as in all research, the very fact that their behaviour is of interest can cause participants to modify it). A stronger concern, however, is falsification in the sense of entries being completed some time after the event, as this negates the advantage of using diaries to obtain information close to a particular moment in time (Plummer, 1983) rather than relying on recall.

 Careful consideration was therefore given to the structure of the instrument, and to the support that would be offered to the participants, particularly in relation to encouraging regular completion of the diary. Taking all of the above issues into account, a pilot study was essential in order to assess:

The intention was that students would keep the diary, if it was selected as an appropriate research tool, for a two-week period. However, because data from the pilot would only be used to inform choices made about the instrument, we felt that the trial only needed to take place over a few days. One student (who would not have been eligible to take part in this stage of the research) was asked to keep the diary for a three-day period in order to assess its usability.

The pilot raised some interesting issues that led to modification of the diary’s structure, including the rejection of some questions and the re-wording of others. Overall, however, it was felt that the data generated through the diary over the full two-week period would be useful in providing an in-depth picture of the student experience. The final format of the diary included two sheets for the students to complete on a daily basis, and one for them to complete at the end of each full week. The first sheet they completed each day gave them a list of activities (such as lectures, online communication, use of the Learning Centre, etc.) against which they were asked to indicate which they had been involved with, and approximately how long they had spent on each. The intention here was to get a sense of the patterns of activities that students were involved with rather than to quantify the amount of time, and it was hoped that listing the different activities would provide students with an ‘easy way in’ before they moved on to the reflective sections of the diary. The daily and weekly reflective sheets provided students with a series of open-ended questions about how they felt about the day/week as a whole, and at the end of the two-week period each student took part in an in-depth interview to clarify any emerging issues.
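A simple way to picture the daily format is as an activity log plus open-ended reflections; the sketch below is only an illustration of that structure, with hypothetical activity names and field labels rather than the actual instrument:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DailyEntry:
        date: str
        # Approximate minutes spent on each listed activity (sheet 1).
        activity_minutes: Dict[str, int] = field(default_factory=dict)
        # Responses to the open-ended daily reflection questions (sheet 2).
        reflections: List[str] = field(default_factory=list)

    entry = DailyEntry(
        date="2003-03-10",
        activity_minutes={"lectures": 120, "online communication": 30,
                          "Learning Centre": 45},
        reflections=["The Discussion Board helped clarify the assignment brief."],
    )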

The students for this phase of the research were self-selecting and, as with Phase 1, were drawn from those students on four or more sites. An advert was placed within Blackboard giving brief details and asking students to contact the researcher for further information. Because of the sustained input required from the students (the pilot study suggested that approximately 15 minutes per day would be required for the diary, with the weekly reflection and exit interview accounting for a further 60–90 minutes in total), payment of £50 in book vouchers was offered for each completed diary. Students were selected on a first-come-first-served basis – there were 10 places available, and we received many more expressions of interest than we could accommodate. Each student took part in an initial face-to-face meeting with the researcher, to ensure that they understood the purpose of the research and what was expected of them. This meeting also allowed students to raise any issues or concerns at the outset, in addition to which they were offered full support (via e-mail, phone, or additional face-to-face meetings) during the two weeks. As an additional measure to encourage engagement with the diary and two-way feedback, students were required to report their progress via e-mail after the first two days, at the end of the first week, and on the final day.

 

DISCUSSION

As stated earlier, the intention behind the evaluation was to investigate the impact of the VLE on the student experience. Within this, we hoped to look at the holistic student experience, and to keep the student voice at the heart of the evaluation. This section will critically evaluate the methods chosen, and look at how the instruments could be used to research a variety of networked learning situations.

The interview/observations had been designed specifically to put the student voice at the centre of the evaluation from the outset. One indicator of their value lay not only in the issues that students chose to raise, but also in the issues that were not raised. For example, we had expected that one of the main concerns of students would be equality of access, i.e. those students without off-campus internet access feeling that they were at a disadvantage because of this. This issue was in fact only raised by those students who did have off-campus access, who thought that others would probably struggle without it. The diversity of opinion was also interesting, and this is one area in which we feel the combination of interview and observation was particularly valuable. To illustrate this point, the first two students who took part in this phase of the research shared a number of characteristics: both were mature female students with similar domestic circumstances, and both were studying the same subject area. Their opinions on the use of the VLE varied greatly, which is perhaps not surprising. However, what was surprising was that they were enrolled on exactly the same courses, and taught by the same tutors. The combination of interview and observation was particularly valuable in separating out the different perspectives from which they were approaching the online part of their studies. Without the observation, this comparison of perspectives would not have been possible to the same degree.

The interview/observations provided a good balance of depth and breadth of coverage, and we would suggest that an approach such as this is valuable in a range of networked learning situations. Within a campus-wide implementation, they can provide a contextualised picture of the student experience, and a solid base from which further investigation can be developed. This contextualised picture could also be useful in a more localised context (say, in evaluating particular courses) and the combination of interview and observation could provide valuable information about the perceptions of individual students relating to specific areas of the course. The interviews and observations can be very easily adapted to individual circumstances by choosing the appropriate prompt questions, but their value does lie in allowing participants to have some element of control over the topics to be discussed.

From the outset, our long-term aims fell into the ‘data-mix’ category of multi-method evaluations, in common with learning technology evaluations at other institutions (e.g. Breen et al, 1998; Richardson & Turner, 2000; Scanlon et al, 2000). Multi-method evaluations can help to capture the complexity needed, can produce robust data (Breen et al, 1998), and recognise the fact that there are no uniquely ‘right’ approaches (Phillips, 2000; Stacey & Rice, 2002). The benefits of mixed- or multi-method analyses are well documented, although Datta (1997) cautions that for their value to be realised they have to be designed with appropriate care, selection and realistic expectations. What Phase 1 of the evaluation did not do was clarify the main topic areas in a way that could have led to the design of a large-scale survey, as we had initially hoped. However, it is fair to say that this was probably due more to the nature of the context than to the method itself, and that the interview/observations were more of a help than a hindrance.

The flexibility of the interview/observations – and the themes that emerged because of this flexibility – did prompt us to reconsider our initial approach to the evaluation, which was to use the data from the first stage to feed into a large-scale survey.  This reassessment of the methods has proved to be an important point in defining the direction of the evaluation, helping us to maintain the focus on the holistic student experience while keeping realistic expectations regarding the quality and quantity of the data.

The diaries provide a rich data-set about many aspects of the student experience on- and off-campus which would undoubtedly have been overlooked through a narrower approach. Additionally, the value of obtaining data close to the event, rather than relying on recall, should not be overlooked. As one student commented: “I really didn’t think that this was an issue for me, and when we first talked about the diary I thought ‘oh, I’ll have nothing to put in there’…it’s only when you look at it on a day-to-day basis that you realise how important things are”.

None of the students within this study found the diary to be overly time-consuming, and the quantity and quality of the data were surprising. The style that the students used and the detail in their written accounts varied greatly; used alone, a diary would not be the right approach for all situations or all students – some diarists will provide a wealth of information about seemingly unconnected issues, while others will simply record brief notes about what they did and how it affected them. However, combined with an interview, a diary can provide an extremely valuable first stage of data generation. Interviewees can expand on events that they have recorded, and can use the written account to remind themselves of the context, meaning that they are not relying solely on recall.

As mentioned earlier, diaries are extremely adaptable, and can be used in a variety of contexts to generate qualitative and/or quantitative data. However, in evaluating networked learning, we would suggest that the value of diaries lies in the richness of the data that can be generated, and that they are best used to capture an in-depth picture of the experience of small numbers of students. This is not to say that they are unsuitable for use with subsets of a larger population, and we found that within a campus-wide evaluation the small number of participants was more than made up for by the wealth of the data set. This richness can, however, mean that the analysis is more time-consuming than expected. After various attempts at synthesising our data, we found that the best way to approach this was to take the individual diaries as cases in their own right, and to build up a picture of the learning experience for each student on each day. Once this has been completed it becomes easier to build up an overall picture of the learning experience over time, and to look at the different ways in which students progress through their learning experience.
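To illustrate the case-by-case approach described above (the data structures and names are hypothetical, not the project’s actual analysis procedure), each diary can first be summarised day by day as its own case, and only then collated across students to build the overall picture over time:

    from collections import defaultdict

    def summarise_case(diary):
        """diary: list of (date, activities, reflections) tuples for one student.

        Returns a per-day summary, treating the individual diary as its own case.
        """
        return {date: {"activities": activities, "reflections": reflections}
                for date, activities, reflections in diary}

    def overall_picture(cases):
        """cases: {student_id: per-day summary}. Collate activity patterns by date."""
        by_date = defaultdict(dict)
        for student_id, days in cases.items():
            for date, summary in days.items():
                by_date[date][student_id] = summary["activities"]
        return dict(by_date)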

One of the main advantages that the diaries brought was the ability to look at the changing patterns of student behaviour over a period of time. Of course, they could be used for shorter or longer time periods: if they are to be used for longer periods of time, then we would urge caution as the longer the engagement required, the more chance there is of participants dropping out. We would also recommend the use of regular feedback stages throughout the process, to ensure that data is being recorded at the required times.

Conducting evaluations with small numbers of students, even within a campus-wide context, does raise issues of confidentiality that would be less apparent in a large-scale approach. Care has to be taken not only to ensure the confidentiality of the students who have participated directly, but also that of their peers and tutors. When analysing and disseminating data, attention has to be given to removing any aspects that could identify individuals, while retaining sufficient contextual information to make the data meaningful.

 

CONCLUSION

In evaluating networked learning situations, we would emphasise the importance of looking at the holistic student experience rather than just at what happens online. The methods used in the initial phases of this evaluation have been extremely valuable in providing a picture of student behaviour on- and off-line, and we would recommend that practitioners consider using mixed-method approaches in order to capture and illuminate a range of issues. The instruments used in this evaluation were designed with a specific purpose in mind, i.e. to take an in-depth look at the student experience: they were initially intended to provide baseline data that could inform a large-scale survey. However, the methods are flexible enough to be adapted to a range of situations, from small-scale evaluations within a specific course to large-scale evaluations.

 

REFERENCES

Alban-Metcalf, R.J. (1997) Repertory grid technique. In J.P. Keeves (ed) Educational research, methodology and measurement: an international handbook (2nd ed.) Pergamon.

Bell, J. (1999) Doing your research project (3rd ed.) Buckingham, Open University Press.

Breen, R., Jenkins, A., Lindsay, R., & Smith, P. (1998) Insights through triangulation. In M. Oliver (ed) Innovation in the Evaluation of Learning Technology (pp.151-168) University of North London.

Brace-Govan, J. & Clulow, V. (2000) Varying expectations of online students and the implications for teachers: findings from a journal study. Distance Education 21 (1).

Cohen, L., Manion, L., & Morrison, K. (2000) Research methods in education. London: Routledge.

Corti, L. (1993) Using diaries in social research. [online] last accessed 10 December 2003 at URL http://www.soc.surrey.ac.uk/sru/SRU2.html

Datta, L. (1997) Multimethod evaluations: using case studies together with other methods. In E. Chelimsky & W. R. Shadish (eds) Evaluation for the 21st century: a handbook. Sage Publications.

Elliott, H. (1997) The use of diaries in sociological research on health experience. Sociological Research Online. Vol. 2. No. 2. [online] last accessed 27 July 2003 at URL http://www.socresonline.org.uk/socresonline/2/2/7.html

Hutchings, M. (2001) Enhancing learning with computer mediated communication. Improving Student Learning, 2001. Edinburgh, UK.

Johnson, R. (2001) Integrating evaluation into the pedagogical process: a diary based strategy for illuminating the invisible and virtual. Improving Student Learning, 2001. Edinburgh, UK.

Kelly, G.A. (1955) The psychology of personal constructs. New York: Norton.

Maas, S. & Kuypers, J. A. (1974) From thirty to seventy: a 40 year longitudinal study of adult life styles and personality. London: Jossey-Bass.

Mason, J. (2002) Qualitative Researching (2nd ed.) London: Sage Publications.

Nunan, D. (1992) Research methods in language learning. Cambridge University Press.

Parlett, M. & Hamilton, D. (1972) Evaluation as Illumination: a new approach to the study of innovatory programmes. Occasional paper No. 9, Centre for Research in the Educational Sciences, Edinburgh. Reprinted in R. Murphy & H. Torrance (eds) (1987) Evaluating education: issues and methods. London: Harper & Row.

Patton, M. Q. (1990) Qualitative Evaluation and Research Methods. Sage Publications

Phillips, R. (ed) (2000) Handbook for Learning-centred Evaluation of Computer-facilitated Learning Projects in Higher Education. Murdoch University. [online] last accessed 21 January 2004 at URL: http://www.tlc.murdoch.edu.au/archive/cutsd99/handbook/handbook.html

Plummer, K. (1983) Documents of life: an introduction to the problems and literature of a humanistic method. London: George Allen & Unwin.

Richardson, J.A. & Turner, A. (2000) A large-scale ‘local’ evaluation of students’ learning experiences using virtual learning environments. Educational Technology & Society, 3 (4).

Säljö, R. (1997) Self-report in educational research. In J. P. Keeves (ed) Educational Research, Methodology & Measurement: an international handbook (2nd ed.) Pergamon.

Scanlon, E., Jones, A., Barnard, J., Thompson, J. & Calder, J. (2000) Evaluating information and communication technologies for learning. Educational Technology & Society, 3 (4)

Stacey, E. & Rice, M. (2002) Evaluating an online learning environment. Australian Journal of Educational Technology, 18 (3), 323-340

Tang, C. (2002) Reflective diaries as a means of facilitating and assessing reflection. HERDSA 2002.

Zimmerman, D.H. & Wieder, D.L. (1975) The diary interview method. Urban Life, 5 (4).