
Back to ukoer3 Final Synthesis Report

 

Programme Issues

1. Challenges relating to project reporting mechanisms

The Evaluation and Synthesis Team have identified some questions about the effectiveness of current reporting mechanisms in capturing important information from projects.

  • Projects rarely used their blogs as a means of publishing lessons learnt as they progressed through their work. This meant that ongoing synthesis was very patchy.
  • Project reporting templates do not appear to capture some of the important information that would feed into programme-level outcomes. For example, interim reports delivered at the end of April 2012 hardly mentioned senior management engagement. Our team was asked to include this in our report to the Senior Advisory Committee meeting, which resulted in a specific email request to projects for information about their senior management engagement. We asked them specific questions:
    • How engaged are your senior managers with OER?

    • What are you doing to engage them?

    • Does it seem to be working?

We received some very interesting responses, which resulted in the development of a wiki page on senior management engagement. This has since been augmented with information from final reports to develop a briefing paper. This important information would have been missed if we hadn't specifically asked for it.

 

2. Project use of the Evaluation Toolkit and Framework

Information from projects (including stakeholder engagement issues) could have been captured in a more comprehensive and systematic way if projects had been required to use the Synthesis and Evaluation framework.

 

The Synthesis and Evaluation Team developed an Evaluation Toolkit which provided various visual ways to connect with the framework and offered routes through the programme themes. Support was offered to help projects engage with the toolkit through online meetings with evaluation buddy groups, input at programme meetings, direct email and telephone conversations, and written and video guides.

 

One of the key features of the toolkit is its use of interactive Google Forms, which allow projects to report on specific questions and receive back their inputs collated for inclusion in programme reporting mechanisms. However, the toolkit was not linked to project report templates or encouraged as a reporting mechanism. During final buddy evaluation meetings it became evident that projects were aware of the toolkit and its potential value. Several projects reported using the framework and toolkit to shape and refine their evaluation questions, but they had to focus on reporting to programme templates and so were unable to use the toolkit for this purpose. This had a negative impact on ongoing synthesis activities, as the team relied principally on final reports. The final report template tends to encourage projects to repeat information (due to the nature of the section headings*), does not deliver engaging, readable stories for the wider sector, results in key messages being omitted, and makes synthesis challenging.

 

Engaging with the Evaluation Toolkit was perceived by projects as 'something extra' to do because it was not specifically included in project plans. Although the framework did exist at the project planning phase (in October), projects were not encouraged to engage with it at this stage. An interesting aspect of this phase is that some projects engaged with the framework only when the toolkit was launched in March, even though many of them were aware of the framework from work in earlier phases and it was introduced at the start-up meeting. This may indicate that the toolkit was successful in engaging projects with the framework.

An Evaluation Plan was prepared at the start of the project and this was revised halfway through the project following the release of the Evaluation Toolkit by the Evaluation and Synthesis team. (Great Writers Inspire Final Report)

 

A recommendation from the SESAME project also implies that the framework was not available at the start of the programme (although they may also be reflecting the launch of the evaluation toolkit in March). The perception that programme evaluation was not aligned to project management mechanisms links back to the points raised in section 1 above.

Better integrate programme level and project level evaluation - In order to maximise benefits at a programme level, it would be helpful for the programme evaluation framework to be available when individual projects produce their initial project plans so that the most appropriate evaluation metrics can be chosen at the project level to support overall evaluation. (SESAME Final Report)

 

Despite this, we had some excellent feedback about the toolkit, and the framework was referred to in most project final reports. We feel that this approach could be of value to future programmes if it were:

  • included in project plans
  • encouraged by programme officers
  • linked to formal reporting mechanisms

 
The report was underpinned by the UKOER evaluation and synthesis framework and drew on an overview of project documentation (project wiki, blog and website), a focus group with the teachers involved in the project and e-mail consultations with project team. (DEFT Final Report)


We had our first meeting with the OER Evaluation and Synthesis team and our ‘evaluation buddy’, the FAVOR project, on 9 May 2012. This was a useful meeting, both for improving our understanding of the JISC evaluation framework and for exploring emerging themes from the two projects. Following the meeting we decided that we needed to revise the focus of our summative evaluation activities to ensure the data we collected was a better fit with the overall JISC evaluation framework. In particular we decided to focus our remaining activities on working with part-time tutors and designed our final tutor survey and focus group with this in mind. (SESAME Final Report)

 

3. Evaluation buddying teams

Evaluation buddy mechanisms were established in consultation with the Programme Manager and encouraged at the Programme start-up meeting. Some projects did refer to this in project plans. This proved a useful mechanism for the team to engage directly with projects and had the following positive outcomes:

  • provided an opportunity to discuss use of the framework and toolkit and identify any issues or challenges relating to evaluation
  • helped projects to identify links with other projects; we had some measure of success in getting projects to link their evaluation and other activities
  • proved an effective means to gather some of the outcomes of project activities and supported ongoing synthesis
  • provided projects with someone to share their concerns with - particularly projects new to the UKOER programme
  • highlighted significant lessons for the wider community and resulted in the production of some collaborative briefing papers

 

All projects were offered the opportunity to attend two online evaluation buddy meetings (before the interim and final report due dates). Not all projects took this opportunity, as some also perceived it as extra to their agreed workplans. Some projects also took the opportunity for extra one-to-one meetings with team members.

 

Comments about the Evaluation buddying process:

The DeFT project wrote an excellent blog post about their buddying with the ORBIT project and said:

The meeting also helped us to see that despite the fact that we cover different discipline areas (the majority of DeFT teachers are in English or media while ORBIT focuses on science subjects) we have much more in common than we initially thought.

and

Yet another satisfactory outcome of the meeting was a joint strategy for sharing evaluation outputs

 

The team also benefitted from the support of the Evaluation and Synthesis team who facilitated peer evaluation based on involvement with the ORBIT (The Open Resource Bank for Interactive Teaching at Cambridge University) project. Through that arrangement, we were able to take advantage of synergies between projects, both of which focused on OER issues within the school sector. The teams met in Sheffield on 16 May and also took part in a series of Skype conversations (23 April, 8 and 29 August). These interactions led to numerous collaborative outputs:

• A deeper understanding of the synergies between the two projects and exchange of ideas for collaboration beyond the lifetime of the project
• An exchange of ideas for further collaborative development and involvement of DeFT project participants in ORBIT initiatives such as a survey for teachers
• A joint ORBIT-DeFT Kanban (visual process management tool) for sharing evaluation outputs and exchanging approaches to evaluation
• The creation of a DeFT “family” page within the ORBIT wiki listing all DeFT case studies (see http://orbit.educ.cam.ac.uk/wiki/DEFT). The ORBIT team have re-purposed the DeFT case studies into interactive lesson plans. On top of being an excellent example of content reuse, sharing the case studies via the ORBIT wiki increases exposure and aids dissemination efforts. (DEFT Final Report)

It has proven to be a fruitful two-way relationship with not only information but also technology and resources being exchanged by both parties, as well as sharing evaluation strategies. (ORBIT Final Report)

 

 

 

A number of face-to-face and virtual meetings were held with DeFT, our evaluation buddies via Skype, that led to numerous collaborative outputs:

• A deeper understanding of the synergies between the two projects and an exchange of ideas for further collaborative development.

• An exploration of project management ideas (such as the joint ORBIT-DeFT Kanban).

• The creation of ORBIT resources relating directly to DeFT (including extraction of lesson ideas from these DeFT case studies), see http://orbit.educ.cam.ac.uk/wiki/DEFT

• Identification of ORBIT lesson ideas that may be appropriate to support the DeFT case studies.

• Participation in a DeFT training event and meeting teachers in attendance.

• The development of a joint evaluation strategy.

• Agreement to share DeFT teacher contact details for the ORBIT survey.

• An invitation to attend the DeFT dissemination conference, 2nd October 2012, Sheffield, and to promote ORBIT. (ORBIT Final Report)

 


One unexpected outcome, in terms of a series of interactive lesson ideas, was the result of the successful collaboration with our evaluation buddies at Sheffield Hallam University. Through negotiation the ORBIT project was granted access to one of the DeFT project’s teachers who analysed each of the 13 DeFT case studies, endeavouring to extract lesson ideas from each of them. This work resulted in over 10 individual lesson ideas for ORBIT and, at the same time, has enabled DeFT to link to these lesson ideas as good examples arising from their case study schools. The 13 case studies are also presented on the ORBIT wiki. (ORBIT Final Report)

 

A Kanban system was established to help organise and monitor the overall management of the project. Kanban is a task-management tool originally developed to track manufacturing processes. Subsequently the Kanban idea was shared with ORBIT’s evaluation buddies at Sheffield Hallam with a second Kanban established to capture joint evaluation activities. (ORBIT Final Report)

 

The Teesside project (OMAC strand) reflected on their blog following the OMAC buddy meeting in April:

Just reflecting on how academic practice has changed having completed another Skype evaluation meeting with other projects in the OMAC strand. It was a very useful meeting sharing what we had learned and thinking about how to ensure the key lessons from the projects are captured in final reports. As someone relatively new to OER practice who has learned a great deal over the past year with this project, I am acutely aware that although OER practice and the infrastructure to support it across the sector is pretty well developed and there are colleagues with a great deal of expertise in this area, there are still many academics who are not engaged or experienced in this aspect of academic practice, nor is it necessarily very easy to access quickly the type of support needed. This is something we must look at if mass participation in OER is to be achieved – assuming that is a shared goal.

 

The COMC team had a discussion with our evaluation partner on Monday 23/04/12  - this was very constructive and supportive at the time and helped frame some issues, but there was little follow-up from the partner. Not sure how this can work better in future – perhaps the JISC OER team need to be slightly more prescriptive and/or require projects to budget time and resource to support this activity, or  even perhaps provide specific partners and specific evaluations tasks/questions.     (COMC Final Report)

 

The ALTO project final report included an acknowledgement to the Evaluation and Synthesis team:

for comment, guidance and convening some very useful online discussions and providing the evaluation toolkit, which we have found very useful.    

 

The CORE-SET project final report endorsed the approach of using a synthesis and evaluation team across the whole programme and recommended the model for future programmes:


The use of a Synthesis and Evaluation team as an explicit component of the UKOER Programme, since its inception, has been advantageous in three respects. Firstly, evaluation became a visible and important concept from the outset of a project funding period. Secondly, the use of systems such as ‘evaluation buddies’ precipitated a sharing of thinking and practices amongst clusters of similar projects. Thirdly, having such evaluation consistently throughout all three phases of the Programme, ensured that a large body of evidence has been compiled relating to OER / OEP activities on a national scale. This model of a ‘Synthesis and Evaluation team’ is certainly one that should be applied to future project funding schemes and programmes.

 

4. Evaluation approaches used by projects

Projects engaged with evaluation at different levels. Two projects, SESAME and CORE-SET, attempted a baselining approach:

With the short timeframe for the project we were aware that it would be challenging to collect summative data on what we had achieved in time to provide a full report to JISC in this Final Report. However, we were able to implement evaluation metrics that allowed us to collect at least emergent data from all staff, tutors and students directly impacted by the activities completed in the period of the project, for example through our final tutor survey and focus group, both of which were carried out in early October 2012. We have also worked closely with the Weekly Classes Office to ensure continued evaluation of the project outputs is embedded in current course evaluation forms, so that on-going evaluation and review will continue beyond the end of the project. (SESAME Final Report)

 


 

* Projects are asked to write to the following section headings, which results in repetition and confusion over what to include in which section:

Project Outputs and Outcomes

How did you go about achieving your outputs / outcomes?

What did you learn?

Immediate Impact

Future Impact

Conclusions

Recommendations

Implications for the future

 

 

 

 
