Brian Rosenberg, president emeritus of Macalester College and a visiting professor at the Harvard Graduate School of Education, has written the best article that I’ve read on the recent decision by leading law schools to boycott the US News rankings. In an essay entitled “Higher Ed’s Prestige Paralysis,” he makes a highly convincing argument that, with or without the US News rankings, “college reputations are fixed, valuable, and based on almost no hard evidence.”
I regard his argument that the US News rankings are based on almost no real evidence of quality as absolutely correct, and I wholeheartedly agree with his main message: That the rankings simply serve to reinforce the existing structure of institutional wealth and prestige.
Indeed, it wouldn’t surprise me to learn that the US News rankings were reverse engineered to ensure that the "right" schools appeared at the top.
College ratings take various forms. Washington Monthly famously measures economic mobility. Georgetown’s Center on Education and the Workforce has released invaluable information on the return on investment of individual colleges and programs.
As for US News, it tries to measure quality largely in terms of inputs: resources, average class size, faculty qualifications, standardized test scores, and reputation.
Each approach has its limitations. Mobility and ROI measures tend to privilege schools located in high-income or fast-growing cities or regions. Even licensure pass rates in engineering and nursing (or bar passage rates for law schools) can be misleading, since schools can game the system by restricting admissions to those programs.
Indirectly, the US News rankings measure students’ qualifications. In my view, the rankings’ biggest effect has been to nationalize the higher ed marketplace by encouraging the most academically successful students to aspire to attend one of the leading national colleges and universities.
None of the college ratings that I’m familiar with truly tries to measure what I consider the single most important variable: the quality of the academic experience. That’s not easy to do, but I do think it’s possible – because we know it when we see it.
For example, how about having the opportunity to serve as a research assistant to a Nobel Prize winner? I know a recent Columbia graduate who did just that. Or how about working with a psychology department chair and a team of students on a game-like app, now used in many medical centers to draw out information from adolescents suffering from chronic illnesses, as my stepson did?
I myself had the opportunity during my senior year to write a biography of the Harlem Renaissance poet and essayist Jean Toomer, and, in the process, spend time in the Fisk University archives and interview the artists Aaron Douglas and Georgia O’Keeffe and the poet and biographer Arna Bontemps. That proved to be a real education.
If I had to measure quality, I’d try to assess the share of students who:
- Had the opportunity to work one-on-one with a faculty mentor.
- Participated in a learning community, an honors program, or a research or opportunity program.
- Partook regularly in small classes, seminars, or studio courses.
- Took part in an experiential learning opportunity, such as a supervised internship, mentored research, study abroad, or service learning, or created a project in a maker space.
- Produced a capstone project that was evaluated by faculty other than the student’s mentor.
I can think of still other measures of quality: The proportion of students who shared a meal or had coffee with a faculty member, visited a professor’s house, went on an off-campus excursion with an instructor, or took part in co-curricular and extracurricular activities.
I hear the objections. Won’t those indicators discriminate against schools that serve large numbers of part-time and commuter students? Not necessarily. I’m aware of many institutions, including a number of the City University of New York’s two- and four-year campuses, that make student engagement and enrichment activities defining features of their undergraduate experience.
One byproduct of the awful academic job market is that impressive teacher-scholars can be found on every campus. Every student at a four-year brick-and-mortar nonprofit (and at many two-year schools) has the opportunity to study with a genuine subject matter expert and research scholar. Sure, the average academic qualifications of the undergraduates differ, but talented, highly motivated students, too, are omnipresent.
The big difference among institutions, in my view, lies elsewhere: Partly in things that are hard to measure, like the amount and the quality of constructive feedback that students get. But mainly in matters that we can quantify, including access to mentoring, the amount of faculty-student interaction, participation in learning and research cohorts and more intimate and interactive learning experiences, and engagement in experiential learning opportunities.
Let’s not wait for for-profits to assess quality. Accreditors need to step up to the plate. Accrediting agencies are especially well positioned to collect the information that prospective students need to gauge academic quality, including data on student satisfaction, student assessments of teaching quality, and post-graduation employment and earnings.
Steven Mintz is professor of history at the University of Texas at Austin.
from Inside Higher Ed https://ift.tt/Bzl8mFM