Whatever else academic freedom means, at many colleges and universities, it is the right to teach however one wants without regard to learning outcomes. Indeed, at the more selective, better-resourced institutions, academic freedom and tenure also imply the right to teach whatever one wants, when one wants.
Pedagogy, assignments and activities, and assessment methods are all up to the instructor.
I hate to say this, but I think the following generalizations are largely true. All too many college instructors:
- Like to occupy the class's center stage, whether lecturing or leading discussions.
Only to a limited extent do they distribute responsibility to their students.
- Teach pretty much like they were taught.
Too many have no particular knowledge of or interest in the science of learning.
- Define active learning very narrowly.
Too many equate active learning with discussion and debate, as opposed to inquiry, problem solving, or activities involving annotation, concept or geo-mapping, role playing, text mining, visualization, and presentation.
- Think about teaching largely as a matter of what content to include and readings to assign.
Not, in contrast, as a matter of pedagogical strategies or activities, exercises, and assessments aligned with course goals.
- Assess student learning in standard or generic ways.
Quizzes, exams, reports, and term papers predominate, with little emphasis on formative assessment or on alternative approaches to evaluation. These alternatives might include peer assessment, developing a rubric, conducting interviews, producing a visual presentation, conducting a case study, drafting a how-to manual, presenting and analyzing opposing interpretations or points of view, writing a position paper or persuasive letter, creating a concept map, designing an exhibit, or applying a concept or theory to a real-world situation.
- Provide pretty limited constructive feedback.
The feedback that is provided too often focuses on the quality of students' performance rather than on actionable suggestions about how students might build their skills and improve their future work.
- Do not integrate career connections into their classes.
Windows and pathways into careers are viewed as outside their purview. Ditto for any discussion of the soft and digital skills that future jobs require.
In an effort to improve teaching quality, many institutions now mandate peer evaluation of teaching, but the results tend to be amateurish: evaluations are conducted without training or clearly defined standards, and the findings are almost always positive (reflecting evaluators' conflicts of interest).
There are attempts to do better. At my institution, the Psychology Department did draft peer observation guidelines that require:
1. The instructor to discuss the course goals.
2. The peer observer to evaluate whether the instructor:
- Aligns class time, learning activities, and assessments with the course goals.
- Provides clear, accessible teaching materials to support student learning.
- Shares responsibility for learning.
- Offers opportunities for students to practice important skills and to solve problems that encourage them to think like a practitioner in the discipline.
- Supports student success through early no-stakes or low-stakes formative assessments and feedback.
- Assesses student learning in a variety of ways.
- Gives students growth-oriented feedback and offers opportunities to reflect and revise.
3. The peer observer to offer the single piece of advice or adjustment to practice that would most improve the colleague's teaching.
4. The instructor to reflect on the peer observer's feedback.
This rubric is impressive, but what’s missing is information that students can accurately provide:
- Does the class begin on time?
- Is student work returned in a timely manner?
- Is the instructor approachable and encouraging?
- Is the instructor's feedback helpful?
When I directed Columbia's teaching center, I drafted a teaching evaluation rubric in an attempt to professionalize peer evaluation. The rubric was quite long, but essentially it evaluated teaching along four dimensions, beginning with the instructor's teaching philosophy: their core ideas about what constitutes effective teaching in their discipline.
Next, the rubric turned to course design and pedagogy, including the activities, assignments, and teaching strategies that the instructor uses to bring the instructional material to life, build essential skills and knowledge, and create an inclusive classroom that leaves "no student behind." I was also interested in the extent to which the instructor integrated active learning, multimedia, and educational technologies into the class.
Third, the rubric looked at class dynamics: whether the students are engaged; whether they participate actively, knowledgeably, and constructively; and whether the classroom atmosphere is friendly, respectful, and open to multiple viewpoints. I was particularly interested in whether the instructors were attentive when students seemed passive, withdrawn, confused, bored, or hostile, and how the instructor responded.
A fourth dimension involved assessment and feedback: How the instructors monitor and evaluate students’ progress and performance, the support provided to students who are off-track, and the comments and advice provided to help students improve in the future.
Not surprisingly, at a number of institutions I encountered resistance to the use of rubrics to evaluate teaching, partly out of a fear that such an instrument would encourage a "check-the-box" approach that would fail to capture instructors' individuality and craftsmanship, but also because it would intrude on faculty autonomy.
It's been a decade since I directed Columbia's teaching center, and I have begun to think somewhat differently about teaching evaluation. I now think evaluation needs to take place along two dimensions.
Dimension 1: Mechanics
Elements include clarity, organization, engagement, timeliness, and responsiveness.
Dimension 2: Substance
I don't mean command of content, but, rather, whether the teaching goes well beyond edutainment or even content transmission and is truly at a college level: involving, in my discipline, critical analysis, advanced frameworks of interpretation, and serious skill building.
I have come to believe that this second dimension gets underplayed in evaluations of teaching, partly because observers may lack expertise in a colleague’s field. But I think this dimension is essential.
Let me be frank: Some classes aren't sufficiently sophisticated. They're interesting, filled with memorable anecdotes, stylish and polished, and feature vibrant discussions. But they're not state of the art.
Let me step on some toes here. I don’t believe that a history course that doesn’t explicitly engage with the theoretical and conceptual frameworks of the past 40 years and the most recent historiographic and interpretive debates, even at the introductory level, is at an appropriate college level. I hate to use the term, but a class that doesn’t apply a feminist or an equity lens to the material or discuss comparative, Marxist, psychoanalytic, postcolonial, postmodernist, or world systems perspectives is high schoolish in the pejorative sense.
Nor are enough courses sufficiently skills-oriented. A content focus is not enough. If an instructor isn't really working to improve students' writing, close reading, and analytical skills, the course isn't being taught at a college level.
We aren't all natural-born lecturers or discussion leaders of talk-show caliber. Not all of us are charismatic, and many don't feel comfortable organizing classes around active learning or making extensive use of technology. But all of us who have a Ph.D. are sophisticated scholars undertaking cutting-edge research. Shouldn't every class, regardless of level, reflect that?
If not, then there's nothing wrong with ceding our lower-level classes to dual-degree and early-college programs. We will have no grounds for complaint.
Which brings me to the challenge we face. Ask teaching and learning specialists and they’ll tell you that we pretty much know what works pedagogically. They’ll also tell you that many of the methods that will enhance learning are relatively low lift.
The problem is straightforward: There are no incentives for faculty to introduce these techniques.
So what can be done? Let me suggest three solutions.
Solution 1. Require faculty members to regularly provide evidence of teaching improvement.
This might involve specific improvements based on feedback provided in student evaluations or peer observations, or it might describe new courses, activities, or assessments developed.
Sure, this requirement might lapse into a bureaucratic exercise with no practical consequences. But if it encourages some faculty to systematically reflect on their pedagogy and course designs, and to discuss their teaching with colleagues, a teaching and learning specialist, or an instructional designer, it has the potential to drive pedagogical improvement.
Solution 2. Identify bottleneck courses and take proactive steps to strengthen these classes.
Courses that impede student success needn't be a secret. Department chairs should identify classes with unusually high DFW rates, performance gaps, or particularly poor student evaluations and intervene. Get an instructional designer to work with the faculty member on course redesign. If necessary, institute supplemental instruction sections.
Solution 3. Create a departmental and disciplinary culture dedicated to teaching improvement.
Individual departments should incorporate serious conversations about pedagogy into their annual retreats. Consider discussing active learning strategies, assessment types, and constructive feedback. Share best practices. Create a departmental repository of teaching resources.
At the same time, scholarly societies should make professional development a bigger part of their institutional mission and pedagogy a major focus of their annual meeting.
Even though I’m proud of the awards I’ve received for teaching, I don’t consider teaching awards a particularly effective way to incentivize the kinds of pedagogical improvements that higher education needs. You know as well as I that these prizes are awarded largely for popularity or mentoring – not for curriculum or tool development or innovative teaching and assessment practices.
I myself have never seen a teaching award bestowed because an instructor integrated career identification and planning into a course, experimented with an inquiry-, project-, or competency-based approach, or designed a class with a community-outreach component. We honor what we value and, very unfortunately, we don't currently attach much value to the evidence-based practices that reduce achievement gaps or increase successful course completion in the most demanding gateway classes.
Yes, please recognize those rare teachers who transform their students' lives, who inspire their students to be the best that they can be, who take extra steps to demonstrate that they care for their students' well-being. But also recognize that teaching is first and foremost about student learning – and that is best achieved through practices that all of us – even the shyest, most reserved, and most introverted – can implement.
Steven Mintz is professor of history at the University of Texas at Austin.
from Inside Higher Ed https://ift.tt/FnyoOzU