Educational quality monitoring: Is it about quality or about monitoring?

(Note: as part of my ‘senior teacher qualification’ I was asked to write down my vision on an aspect of education. I wrote the essay below on the role of quality monitoring in (higher) education. All feedback welcome!)

Abstract: There are many opportunities for using educational assessment and quality monitoring instruments to improve the quality of education. If these instruments are mainly seen by faculty as “feeding the bureaucratic beast”, however, it is quite likely that they will not contribute to a real quality culture, but rather cause a loss of perceived professional autonomy and frustration over time spent “ticking boxes” rather than preparing classes or giving feedback to students. I provide a number of recommendations to achieve more constructive monitoring. In short, goals should be clearly and sincerely explained; measurement should be close to the substantive goals; monitoring should be minimally obtrusive; and faculty needs to be included as the main actor rather than as the object of monitoring. Following these recommendations may sometimes lead to fewer boxes being ticked, but will hopefully contribute to a real improvement in teaching quality and in the job satisfaction and productivity of teaching faculty.

In the name of enhancing the professionalisation and accountability of teaching, a number of monitoring instruments have been introduced at universities in the Netherlands and abroad. Where professors used to be relatively free to teach and test ‘their’ courses as they saw fit, nowadays at the VU a course is encapsulated in an assessment plan outlining how the course fits into the different learning trajectories and which learning outcomes need to be tested in what way. After teaching the course, the coordinator has to submit a course dossier showing (among other things) how the tests were constructed and validated and, in the table of specifications or toetsmatrijs, how the elements of the test correspond to the various intended learning outcomes.

The question, then, is whether these monitoring instruments actually improve the quality of teaching, or whether they mostly hinder teachers in doing their job by adding extra work and diminishing professional autonomy and job satisfaction. Put another way, the key question is: under what conditions do monitoring instruments contribute to the quality of education?

Why do we need quality monitoring?

The origins of increased monitoring and accountability lie in the adoption of a form of new public management, where a central authority sets the goals, and subordinate units have room to decide how to achieve these goals (e.g. Hoecht, 2006). Crucially, progress towards the goals needs to be measured (and hence measurable) to make the decentralized organization accountable.

There is generally no objection to accountability (or transparency) in itself, and there are many ways in which accountability and monitoring can improve quality. The most obvious is perhaps that it can show which universities, programmes, or teachers are most successful on the various metrics of educational performance. This keeps faculty alert and motivated and creates incentives to adopt best practices from programmes or colleagues that perform better.

It can also have direct beneficial effects on teaching. First, it can make teachers aware of possible problems in their own teaching methods, and can point them towards resources or solutions. For example, the (in)famous table of specifications forces teachers to think about the proper distribution of tasks or questions across learning outcomes, and stimulates them to reflect on whether the chosen test format adequately tests the outcome in question.

Second, by creating standards and improving documentation within the organization, it can make it easier to transfer knowledge and experience between teachers, for example when taking over a course or in an intervision (peer supervision) setting. Students also benefit from more standardized information.

Potential problems with quality monitoring

However, it is not a given that all forms of monitoring and accountability lead to an improvement of education. A central criticism of new public management is that, by focusing on output control and using certain metrics to measure that output, the focus of managers, and indirectly of faculty, shifts from the “ultimate” goals of education towards more “proximate”, measurable goals, such as the presence of complete course files, graduation rates, and student satisfaction. The focus can even shift from the actual output towards the measurement instrument itself, with the danger of “valuing what is measured, rather than [measuring] what we value” (Biesta, 2009, p. 43).

Additionally, before we can determine how to measure outcomes (and hence the effectiveness of teaching as a process), we need to be able to clearly define those outcomes. The goals of education are notoriously hard to define and measure; a useful analysis is the division of educational purposes into qualification, socialization, and subjectification (Biesta, 2010; 2015), with most measurement targets aimed at the (arguably more concrete) goal of qualification.

A final criticism of new public management in education rests on the (implicit) assumption of students as customers, as evident in the important role of student evaluations in judging courses, teachers, and even curricula (e.g. the Dutch NSE). Although students are certainly not blind to the quality of teaching, satisfaction ratings also tap into other variables, such as enjoyment or strictness. Moreover, as Biesta (2015) describes, the relation between student and teacher is closer to that between patient and doctor: the patient wishes to get better, but leaves it to the doctor to determine the treatment. Citing Feinberg (2001), Biesta states that professors (and doctors) “do not just service the needs of the client, but also play a crucial role in the definition of those needs” (2015, p. 82), especially regarding the purposes of socialization and subjectification.

Partly because of measurement issues, and partly for cultural reasons, empirical research on the effectiveness of audits and measurements is scarce, but some anecdotal evidence has been published. In an article titled “Feeding the Beast or Improving Quality?”, Jethro Newton (2000) studies whether the monitoring of educational quality, especially in the form of assessment exercises, contributes to education quality. Based on faculty interviews, his main finding is that in many cases faculty see accountability requirements as activities that do not contribute to the primary process, but are instead merely needed to ‘feed’ the bureaucratic ‘beast’. According to Newton, it is crucial for quality accountability to be ‘owned’ and ‘adopted’ by faculty for it to have a positive effect on teaching quality. Hoecht (2006) similarly conducted interviews at two institutions that had recently attained university status and had gone through a transition from ‘light-touch’ quality control to “a highly prescribed process of audit-based quality control” (p. 541). His conclusion is that, although “accountability and transparency are important principles”, the current audit regime introduces “one-way accountability” and “rituals of verification” (p. 541; cf. Power, 1997), with interviewees commenting on the “extensive box-ticking at the expense of […] activities such as teaching preparation” (p. 556).

How to make quality monitoring beneficial?

From these considerations, it is clear that care must be taken in how quality monitoring and accountability are designed, implemented, and communicated. In my opinion, for quality monitoring to contribute to the quality of teaching, it is necessary (1) for the substantive goals of a policy to be clear; (2) for output measurement to be as close as possible to those substantive goals; (3) for the policy to be implemented in a way that maximizes teaching quality; and (4) for faculty to have a feeling of ownership and recognition.

First, the goal of a specific policy must always be defined and communicated in substantive terms and related to improving the quality of teaching. The goal must also be sincerely defended by management. Requiring a specific action because “it is required by the visitation” is never an adequate response: management should either agree with and adopt the underlying goals of the policy, or resist its implementation and explain why the policy is not applicable or beneficial in this specific case.

Second, the attainment of policy goals should be measured as closely as possible to the substantive goal. In many cases, this is not trivial. Learning outcomes are often difficult to measure, especially high-level educational goals such as socialization and subjectification. As teaching is never done in isolation, the contribution of specific teaching activities to these outcomes is even more difficult to measure. As a consequence, we often fall back on measures that are easy to produce, such as student satisfaction or graduation rates. Where only such proxies are available, their limitations should be made explicit, so that the proxy does not quietly replace the goal it was meant to indicate.

Third, quality monitoring must always be implemented in such a way as to minimize unneeded effort for faculty and maximize the chances of actual improvement in teaching. This means that information should be requested when it is timely to reflect on the relevant process, and that providing the information should be efficient and non-cumbersome. The information should also be the start of a dialogue, not a folder dropped into the memory hole. If needed to achieve these goals, policy should be implemented more slowly and/or less uniformly.

Finally, teaching faculty needs to be the main actor in the story, not a cynical onlooker. The shift from input control to output control, combined with the highly competitive academic environment and the lack of job security for many (junior) teachers, can easily lead to feelings of inadequacy and underappreciation, and from there to cynicism, lower performance, and even health problems. Where quality monitoring is directly related to evaluation, the metrics should be fair, transparent, and predictable. Where quality metrics are not related to evaluation, they should be discussed constructively and with understanding for the efforts of the teacher and the specifics of the case. Effort by itself is not enough, but effort should be appreciated even when the measured output is not as desired.

In sum, educational assessment and quality monitoring instruments offer many opportunities to improve the quality of education. If faculty mainly experience them as “feeding the bureaucratic beast”, however, they are unlikely to foster a real quality culture; instead, they will erode perceived professional autonomy and breed frustration over time spent “ticking boxes” rather than preparing classes or giving feedback to students. The recommendations above aim at more constructive monitoring: explain the goals clearly and sincerely; measure as close to the substantive goals as possible; keep monitoring minimally obtrusive; and involve faculty as the main actor rather than as the object of monitoring. Following these recommendations may sometimes mean fewer boxes get ticked, but it will hopefully contribute to a real improvement in teaching quality and in the job satisfaction and productivity of teaching faculty.

Sources

Biesta, G. J. J. (2009). Good education in an age of measurement: On the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability, 21(1), 33–46.

Biesta, G. J. J. (2010). Good education in an age of measurement: Ethics, politics, democracy. Boulder, CO: Paradigm Publishers.

Biesta, G. (2015). What is education for? On good education, teacher judgement, and educational professionalism. European Journal of Education, 50(1), 75–87.

Hoecht, A. (2006). Quality assurance in UK higher education: Issues of trust, control, professional autonomy and accountability. Higher Education, 51(4), 541–563.

Newton, J. (2000). Feeding the beast or improving quality?: Academics’ perceptions of quality assurance and quality monitoring. Quality in Higher Education, 6(2), 153–163.

Power, M. (1997). The Audit Society. Oxford: Oxford University Press.