Oregon cited in college grade paradox

By U.S. Chamber of Commerce [7]

The American Association of Community Colleges (AACC) put out a statement [8] calling the Institute for a Competitive Workforce’s (ICW) Leaders & Laggards report card [9] on state public higher education systems “flawed” and “poorly conceived.” We certainly didn’t go into writing the report expecting everyone to join hands and sing “Kumbaya” when it was released. You never threaten the status quo without expecting a backlash. What we do expect is fair, fact-based criticism, which is largely absent from this response.

The first problem AACC has is with our “Cost Per Completion [10]” metric; they assert that transfer students inflate the amount states appear to spend to produce a credential. This argument has some merit. Certainly, it’s not a failure of the system if a student goes on to another institution and succeeds. They cite Oregon as their prime example, and it’s a good one. Oregon earned an “F” for two-year efficiency but an “A” for its four-year system. If funneling two-year students into strong four-year schools is indeed the plan, then those results suggest the system works.

Any serious policymaker would look even slightly below the surface and immediately ask the question that is the whole point of the exercise: are policymakers looking critically at their investments in higher education, or are they just throwing money at the system and hoping for better results without actually measuring those results?

But what about states like Alaska, Hawaii, and Wyoming, which earned “F”s in both the two- and four-year categories? Or states like Alabama, Delaware, Idaho, New Mexico, and North Carolina, each of which earned no better than a “D” in either category? Even if students are transferring out of their two-year schools, it seems highly unlikely that they’re doing well once they get to their four-year schools; otherwise those states would be enjoying the same success as Oregon and Wisconsin (which earned an “F” for two-year institutions and a “B” for four-year institutions).

Which brings us to another issue: AACC seems to want to treat transferring students as a success. Transferring can at best be considered a neutral outcome until a student actually completes a credential. Studies suggest [11] that transfer students aren’t all that successful at four-year schools. As we all know, “almost” only counts in horseshoes and nuclear weapons; almost earning a degree is not the same as earning one. If a student transfers from a community college and never completes a credential, it may or may not be the community college’s fault, but it’s also pretty far from being a success. It’s hard to see why it should be treated as one in our report, or anywhere else.

This, ultimately, brings us to the biggest point of contention with AACC’s comments: they bluster about the impact of transferring students across various metrics, yet we don’t know what happens to those students once they transfer. Judging by AACC’s legislative agenda [12], they seem none too interested in finding out. Nor does their “Voluntary Framework of Accountability [13]” make tracking what actually happens to transfer students a priority. So, if we’re scoring at home: AACC complains that current data doesn’t tell a fair story about community colleges, argues that therefore no story should be told, and declines to gather the information necessary to tell that fair story. It’s stonewalling at its finest.

Another issue AACC has with the report is our use of the standard metric of students completing their degree programs within 150% of normal time. They say we should have used 200% of normal time instead. It’s a common retort [14] for them, but their rationale seems entirely arbitrary and goes against standard practice. Either way, it’s not as though community colleges look that much better: completion rates rise from a pathetic 22.1% (using AACC’s numbers) to a nearly equally pathetic 27.6% when students are given four years to complete a two-year program.
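For readers who want the arithmetic behind those windows made concrete, here is a minimal sketch. The cohort size is hypothetical; only the 22.1% and 27.6% rates come from the figures above.

```python
# Sketch of the 150%- vs. 200%-of-normal-time completion windows.
# The cohort size is hypothetical; only the 22.1% and 27.6% rates
# come from the report and AACC's own numbers.

NORMAL_TIME_YEARS = 2.0  # nominal length of an associate degree program

def completion_window(normal_time_years: float, multiplier: float) -> float:
    """Years a student may take and still be counted as a completer."""
    return normal_time_years * multiplier

print(completion_window(NORMAL_TIME_YEARS, 1.5))  # 3.0 years: the standard metric
print(completion_window(NORMAL_TIME_YEARS, 2.0))  # 4.0 years: AACC's preference

# Applying the cited rates to a hypothetical 1,000-student cohort:
cohort = 1_000
print(round(cohort * 0.221))  # 221 completers within 150% of normal time
print(round(cohort * 0.276))  # 276 completers within 200% of normal time
```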

The other three issues AACC takes up with the report make it pretty clear that they didn’t bother reading the report or its methodology at all. They claim that the cost per completion metric “assumes that all completers are equal, both in terms of the quality of education and the cost of providing it.” It most certainly does not. To quote our own report, “We divided the Delta Cost Project’s estimates of education and related expenses by a weighted sum of completions, where credentials are weighted to reflect different program lengths and costs of delivery.” (A sketch of that calculation follows below.)

They also take issue with our completions per FTE metric, saying that it doesn’t account for spikes in enrollment. However, we used [15] a three-year average when preparing this metric, accounting for exactly that problem.

Finally, they complain about the use of the percentage of a state’s students receiving Pell Grants, arguing that wealthier states will naturally have lower percentages because they have fewer poor students who could attend. That’s true, but the whole point of that metric is to guard against institutions slashing access to make themselves look better: “…focusing on completion rates in isolation can unleash perverse incentives, convincing institutions that the way to improve their standing is to become more selective and restrict access to those students most likely to succeed.”
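Here is a minimal sketch of the credential-weighted cost-per-completion calculation the quoted methodology describes. The spending figure, completion counts, and weights below are all hypothetical; the report itself draws its spending estimates from Delta Cost Project data.

```python
# A minimal sketch of a credential-weighted cost-per-completion metric,
# as described in the quoted methodology. All figures below are
# hypothetical; the report itself uses Delta Cost Project spending data.

def cost_per_completion(spending: float,
                        completions: dict[str, int],
                        weights: dict[str, float]) -> float:
    """Divide education-and-related spending by the weighted sum of completions."""
    weighted_sum = sum(weights[cred] * count for cred, count in completions.items())
    return spending / weighted_sum

# Certificates are weighted below associate degrees to reflect shorter
# program length and lower delivery cost (weights here are illustrative).
completions = {"certificate": 400, "associate": 600}
weights = {"certificate": 0.5, "associate": 1.0}

print(cost_per_completion(10_000_000, completions, weights))
# -> 12500.0: dollars per weighted completion
```

The point the weighting captures is simple: a state that produces mostly short certificates should not look artificially efficient next to one that produces mostly full degrees.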

At the end of the day, AACC’s response is one that every education reformer, in K-12 or higher education, will find familiar. They effectively argue that if the data we have can’t tell the entire, unvarnished story, then no story should be told whatsoever. That is patently flawed logic in itself, but it becomes completely farcical when you realize that they have no interest in providing the missing information to the public. They want to be judged on their terms, or not at all.

Frankly, that line of thinking cannot be allowed to stand. The public has a right to know what’s happening with its investment. Business leaders have a right to know whether our public system of education is capable of producing the talent they need to thrive. Most importantly, students have a right to know the quality of their education before they choose a school and spend their hard-earned time and money. If institutions are unwilling to provide this information (and overwhelmingly, they are), they should not be shocked when others fill the void, and they have no right to be outraged when the results don’t paint them in a flattering light.