Harvey P. Weingarten — Quality assurance: A simple concept that we overly complicate

Harvey P. Weingarten, President & CEO

If you were to read all that is out there about quality assurance in higher education, you might be left with the impression that quality assurance is a complicated, nuanced, arduous concept and process that defies rigorous measurement, and that will take a decade or more to get right.

I am fond of saying that there are few mysteries in life, and measuring and assuring the quality of a higher education is no exception. On the contrary — it’s really quite simple. Let me explain.

We develop academic programs so students will learn the things we think are important — things we think they should know and be able to do. What is quality and quality assurance? Simply measuring whether students who take these programs have, in fact, acquired the information and skills the programs purport to teach.

It would be presumptuous of me to tell curriculum designers what content and skills are important. I leave that to the experts in the discipline. But if I want to know or assure myself that they have developed a quality program, then all I really need to know is whether the program successfully instilled this knowledge and skill set in the students who graduate from it.

We have absolutely no difficulty with this simple quality-assurance approach when we talk about content. Instructors spend a lot of time teaching the content they deem important, evaluating whether students have absorbed it, and credentialing how much of it students have learned by assigning a course grade that appears on an official transcript. For reasons unclear to me, all of this gets hopelessly muddled and complicated when we start considering skills like critical thinking, communication and literacy. These are skills that we say we want students to learn, and there is no dearth of educators and administrators quite prepared to assert, often with little evidence, that these skills and competencies are acquired. But assertions are, or at least should be, insufficient for sensible and meaningful quality-assurance systems. The essence of quality assurance is to assess — with real measurements — whether these skills and competencies are, in fact, acquired.

Instead of this simple and direct approach to quality assurance, we have designed bureaucratic, administratively burdensome processes that measure a whole bunch of things that may or may not reflect the quality of a program — things like library resources, the number of instructors with a PhD in a program, the number of courses, the presence or absence of an experiential component, the length of the program, etc. This has created a sizable bureaucracy that kills many trees and employs many people but does little to really assure anyone — certainly not a skeptic with a critical mind — about the quality of the program under scrutiny. The current approach to quality assurance would be akin to evaluating whether students have acquired the concepts and content a program is designed to teach by measuring the length of their answers on an exam or the type of pen they used.

I am particularly sensitive to this quality-assurance issue because my colleague, Martin Hicks, and I recently returned from a Bologna Process Researchers’ Conference (to see the paper we presented on performance measurement, go here), where we heard a lot about quality assurance. It was striking how much attention has been paid to this matter in Europe, and how little tangible progress has been made (at least as reflected in the papers presented at this conference). There are still many active discussions about what “quality” in higher education means (with some sober commentators prepared to acknowledge that no one really knows what it means — or at least that there is no consensus), along with endless, almost Talmudic analyses of how to classify the quality-assurance regimes of different countries, what the right measures of quality are, and how to develop qualifications frameworks and learning-outcomes inventories.

In my view, the discussion in North America has thankfully moved a little beyond this to focus on the critical role of assessment; specifically, on how we know whether these desired qualifications and outcomes are actually achieved. I would direct readers to the approach of the National Institute for Learning Outcomes Assessment in the US, and the one we at HEQCO have taken, as examples of direct assessment. There are legitimate criticisms of the way we do quality assurance in Ontario, but my sense is that the province at least recognizes that proper quality assessment requires actually measuring whether desired learning outcomes have been achieved. In the spring, we will report the results of a large trial we conducted in Ontario that used an online version of the OECD’s PIAAC test to measure learning gains in literacy, numeracy and problem-solving skills in college and university students from the time they start their programs to when they graduate.

So, as is customary this time of year, here are two of my New Year’s resolutions regarding quality assurance in higher education.

First, I will continue to advocate the simple idea that quality assurance is no more than measuring whether the knowledge, skills and competencies a program was designed to foster and develop have actually been acquired. And I will argue that quality-assurance processes and bodies should focus on this assessment and purge themselves of the voluminous paperwork and administration that go into collecting indirect and surrogate information that provides little evidence of quality.

Second, I am going to stop using the term “learning outcomes.” Many people, quite legitimately, are turned off by the term because they associate it with the extensive cottage industry that has emerged to develop qualification frameworks and mapping exercises that link qualifications to courses and programs. Instead, I will stick to the essential point: how to measure and credential the skills we think students should acquire as part of a postsecondary program. So, expect to hear me say “skills measurement” and continue to expect HEQCO to devote considerable attention and resources to working with others to figure out how to do this well.

Thanks for reading.

6 responses to “Harvey P. Weingarten — Quality assurance: A simple concept that we overly complicate”

  1. Denise Conway says:

    Using the term “skills measurement” rather than “learning outcomes” at the higher education level implies a more active role for the learner, and probably better reflects the role of PSE in preparing people for the technology-driven world.

    • Mary Pringle says:

      I like the term ‘skills measurement’ too. Knowing what to do with information was always more important than merely possessing it. But now that anyone can have access to great volumes of information, it is clearer than ever that education is about knowing how to do something. And today the most important skill is the ability to learn. We need to keep working on creating super learners and super thinkers as the goal of all our education systems. And good citizens (we still need the humanities!).

  2. Jen W says:

    Hmm. I wonder if you would advise instructors who are designing courses and curriculum to replace the section ‘learning outcomes’ with ‘skills measurement’ or just erase the learning outcomes section altogether and jump from content to assessment?

  3. Maxine Machan says:

    In light of The Conference Board of Canada’s Employability Skills Toolkit, and students’ need to self-manage not only their education but also their transition into the workplace, evidence of their competencies through documented competency profiles actually begins in high school and is best continued into their post-secondary education. As a graduate student, I actively looked for opportunities to express my new knowledge through writing and to demonstrate my new skill set (in conflict analysis and management) in class. As a trainer now, my curriculum includes both theory and skill-set demonstration to ensure that my participants not only learn what they need but also see value in the time invested in increasing their skill set. The ‘well-rounded’ citizen of the 21st century operating in our global marketplace will need multiple tools in their ‘toolbox’ for sure!

  4. Wes Zaboschuk says:

    Thanks for the post. I have been involved in fruitless program mapping as an Associate Chair for the Marketing program at the Northern Alberta Institute of Technology. It is a waste of time and does not measure performance. When I was first hired 15 years ago, skills were measured. Now, in business education, a young PhD in any field can trump the skills of a successful mentor with a plethora of experience. I am not discrediting the PhD, but balance is needed to support the future success of graduates.

  5. Maxine Machan says:

    I’ve noticed that even under “Learning Outcomes” a lot of instructors use terminology calling for a ‘demonstration’ of both knowledge and skills. Perhaps wording such as “Demonstrated Competencies” could also be considered as more reflective of what both instructors and potential employers need to see.
