
Harvey P. Weingarten – Outcomes-Based Funding: Part 1. Successful models start with psychology 101

HEQCO released a report this week providing an extensive review of outcomes-based funding models used in postsecondary education and their effectiveness.  Outcomes-based funding is a practice where institutions receive their funding, partially or totally, on the basis of performance, specifically, the achievement of agreed-upon goals or outcomes (I use the terms “outcomes-based funding” and “performance-based funding” interchangeably).  Click here for a particularly good (and brief) summary of outcomes-based funding systems.

HEQCO’s report distills a set of conclusions consistent with a rapidly growing literature on outcomes-based funding, the most notable being that this funding model has met with varying degrees of success in different jurisdictions.  Perhaps this is not surprising: as the authors note, the newness of these schemes provides little data for evaluation, and funding schemes presented as “performance-based” can differ considerably in their design.  Nevertheless, the literature and our report point to some design features, such as the necessity of paying attention to institutional differentiation, that appear to increase the success of outcomes-based funding.

There are some who will take the absence of overwhelming and convincing evidence as justification for the position that performance-based funding does not work in higher education.  People who hold this view are dead wrong.

Given that I am supposed to be data driven, how can I be so sure that performance-based funding works?  It’s simple — because the effectiveness of performance funding is based on human nature and basic laws of behaviour.

You may recall the book All I Really Need to Know I Learned in Kindergarten.  Well, everything I really need to know about the behaviour of universities I learned in introductory psychology, which teaches students the basic principles and processes of behaviour change.  These principles and processes – like motivation, reinforcement, punishment and incentives – are as close as the social sciences come to the physicist’s “laws.”  More importantly, my experience in higher education has shown me that these principles and processes, when appropriately understood and applied, can reliably and effectively change the behaviour of postsecondary institutions, departments, administrators and faculty in predictable and desired ways.

What principles of behaviour change are relevant to designing effective outcomes-based funding formulas?

First, there is the principle of motivation.  An organism must be motivated if it is to learn to behave in new ways.  In the absence of motivation it is difficult, if not impossible, to change behaviour.   John Kotter, in his influential book Leading Change, identifies “establishing a sense of urgency” as the first stage of initiating change.   Food deprivation at the right levels provides sufficient motivation and urgency for a rat to learn a maze.  Money deprivation (something like the impending inability to meet payroll) can be a powerful motivator of postsecondary behaviour change.  Unless institutions feel a sense of urgency, they may not be influenced by any outcomes-based funding formula, no matter how cleverly it is designed.

Second, in any exercise of behaviour change, it is critical to know what behaviour you want to elicit (or shape).  No matter how motivated the rat, you can’t teach it to press a bar, run a maze or stand on its head if rewards are decoupled from the desired behaviour.  So, for an effective outcomes-based formula, designers have to be clear about the outcomes they desire (and the list must be short) and then disciplined enough to consistently reward only the desired behaviours or behaviours that move towards the desired outcome.  Outcomes can be things like growth in enrolments, more research output, greater economic impact or more world-class institutions.  It is problematic if governments have too long a list of desired outcomes (especially if some of the outcomes are inconsistent with others) or start reinforcing other behaviours when other matters (like different political considerations) arise.  Clarity about a short list of desired outcomes is a necessary precursor to the design of an effective performance-based funding formula.
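
To make this concrete, here is a minimal sketch of what such a formula might look like.  The outcome metrics, weights, targets and dollar figures are entirely hypothetical (mine, not HEQCO’s or any government’s); a real formula would need far more care:

```python
# A purely illustrative outcomes-based allocation. All metric names,
# weights, targets and dollar amounts below are hypothetical.

# A deliberately short list of desired outcomes, each assigned a share
# of the performance envelope. The shares sum to 1.0.
OUTCOME_WEIGHTS = {
    "graduation_rate": 0.5,
    "graduate_employment_rate": 0.3,
    "credit_transfer_rate": 0.2,
}

def performance_payout(envelope: float, achieved: dict, targets: dict) -> float:
    """Pay out each outcome's share of the envelope only in proportion
    to how much of the agreed-upon target was met (capped at 100%)."""
    payout = 0.0
    for outcome, weight in OUTCOME_WEIGHTS.items():
        attainment = min(achieved[outcome] / targets[outcome], 1.0)
        payout += envelope * weight * attainment
    return payout

# Example: an institution that meets its graduation target but falls
# short on employment and transfer receives only part of the envelope.
print(performance_payout(
    envelope=10_000_000,
    achieved={"graduation_rate": 0.72,
              "graduate_employment_rate": 0.80,
              "credit_transfer_rate": 0.05},
    targets={"graduation_rate": 0.70,
             "graduate_employment_rate": 0.90,
             "credit_transfer_rate": 0.10},
))  # ~ $8.67M of a $10M envelope
```

The point of the sketch is the discipline it forces: a short, explicit list of outcomes, and money that flows only when those outcomes, and nothing else, move.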

Third, reinforcers have to be of sufficient magnitude to drive and cement behaviour change.  An additional 1% or 2% of incremental funding in a stand-alone performance envelope may not be sufficient to guide and elicit change.  That’s why the most effective outcomes-based formulas are those that influence a significant amount of institutional funding.
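
Some rough arithmetic, using an invented grant figure, shows the difference in what is actually at stake under the two designs:

```python
# Hypothetical arithmetic on reinforcer magnitude; the grant figure is
# invented purely for illustration.
operating_grant = 500_000_000  # a hypothetical institution's annual operating grant

small_envelope = 0.02 * operating_grant  # a 2% stand-alone performance envelope
large_share = 0.25 * operating_grant     # a formula touching a quarter of the grant

print(f"At stake under a 2% envelope:  ${small_envelope:>13,.0f}")
print(f"At stake under a 25% formula:  ${large_share:>13,.0f}")
```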

Fourth, the reinforcer has to be applied in such a way that the behaviour change can lead to the desired outcome.  Consider the case of a government that wants a new funding formula to lead to improvement in the quality of the classroom experience.  The government might consider increasing institutional funding if student satisfaction scores increase (forget for the moment the issue of whether student satisfaction actually reflects the quality of the learning experience).  This might not work well because the reinforcer, additional funding to the institution, might not improve the lot (or salary, or research space, etc.) of individual professors, and it is their behaviour that has to change to improve the classroom experience; they, not the administrators, are the ones actually in the classroom.  In my opinion, too little attention is paid to this issue of behavioural engineering.  Most designers simply assume that a contingency imposed on the institution will be internalized by the people who work in it, and this is often not the case.

It is easy to design a bad performance-based funding scheme and not surprising when it fails to lead to desired outcomes.  But, it is an axiom of behaviour that a well-designed outcomes-based funding formula will work.

In Part 2 of this blog, I will discuss the readiness of Ontario for an outcomes-based funding formula.  Stay tuned.

Thanks for reading.

-Harvey P. Weingarten, Ph.D. (In psychology, if you haven’t figured that out by now.)

3 replies on “Harvey P. Weingarten – Outcomes-Based Funding: Part 1. Successful models start with psychology 101”

Let’s break this down.

We know that the current system of funding based strictly on student numbers has led to a “bums in seats” mentality. That needs to change.

So, let’s consider the outcome-based alternative as Harvey suggests.

How would it work?

We have the KPI survey and we know it is a disaster. The employer response rate can be as low as one or two responses per college program: useless. We know that colleges influence the student responses through advertising, preambles delivered in class prior to the administration of the survey and, I’ve even heard, free food. All to have programs judged on things such as the quality of food in the cafeteria.

If we look at the SREB summary, here are some different measures:

1. the numbers of students who complete courses, programs, degrees and certificates. Again, this becomes a “bums in seats” exercise, no better than what we have now.

2. reducing the number of credit-hours accrued by completers. This moves from “bums in seats” to “get ’em in, get ’em out.”

3. graduation rates. How do we know that institutions won’t simply lower quality to achieve this?

4. transfer. OK, that might be useful.

5. reducing achievement gaps for low-income students, those from underrepresented populations, and returning students. Sure, a laudable goal.

6. pass rates on licensure and certification examinations. How do we know this will not become “teach to the test”?

…and so on.

Particularly troublesome would be what seems to fall under the category of measuring student learning outcomes. These initiatives smell like standardized testing once you cut through the jargon. Are we missing something here?

Missing from the discussion are some measures that we think are important, such as:

– student-faculty ratio
– percentage of faculty that are full-time
– degree of academic freedom in the institution

…and more.

Any guesses as to why these aren’t in the mix?

Darryl Bedford
President, OPSEU Local 110

My own experience is that the sorts of performance data we really want to have at our disposal are often difficult to obtain. For instance, is any given post-secondary school able to find out, with any degree of precision, consistency, or reliability, what proportion of its graduates have ended up with work that makes use of what they learned? Conversely, do the graduates feel that what they learned in their programs has enabled them to find gainful employment that contributes to the public interest? Now THAT’S an outcome. But since it is something that isn’t measurable while the student is directly connected to the school, it gets set aside.

In the current zeitgeist of what I like to call “the cult of accountabilism” (the presumption that frenzied “measuring” of anything and everything, regardless of measurement validity or relevance, will yield, and is equal to, “accountability”), we have a tendency to turn to what is readily and easily measurable within existing systems. That is not to make excuses for “bums in seats” measures. Rather, perhaps the sorts of outcomes/behaviours that ought to merit the budgetary “food pellet” dropping down from the dispenser require an overhaul of the outcomes-gathering systems we currently have, so that we ARE targeting, and reinforcing, the sorts of post-secondary structures, services, and behaviours that we value and find most meaningful, and not simply the ones we are currently well-positioned to gather data about.

I might note that, in some 25 years of both completing (as student), and being subject to (as faculty), course evaluation forms, in some dozen post-secondary institutions across the country, not once did anyone ever ask whether the student found the program coherent (i.e., all prior courses interfaced well with the one being evaluated, and provided all necessary background knowledge), whether the course/program provided them with useful skills, or whether it assisted them in making career-related decisions. They were all essentially focussed on the instructor – for tenure and promotion purposes – and provided little information as to whether the department and program, or the faculty in general, was achieving what it set out to do, or indeed, whether it was even mindful of such. How’d THAT happen?

So, I think effective reinforcing and shaping of post-secondary education, in a manner that improves it and sustains those improvements, begins with devising means for gathering information that is most informative about what is going on there. Certainly, if the rat is moving around inside the Skinner box there will be *some* sort of correlation between registering “jiggles” of the box and bar-pressing. But we would wish reward to be contingent on bar-presses, specifically, and not merely box-jiggles resulting from running around or frantically scratching. Let’s figure out what we need to reward, and how we can obtain unambiguous data on it.
