What Is Learned in College History Classes?

This article by Professors Sam Wineburg, Mark Smith, and Joel Breakstone tells us, “The challenge of how to measure learning is not restricted to universities. For high school teachers the situation is not much better. The structure of the school day restricts collaboration to brief meetings taken up by administrative matters, leaving scant time for teachers to articulate goals for student learning. Moreover, few options exist for assessing student learning. Multiple-choice tests dominate at the high school level. Each of the twenty-four states that test students in history uses multiple-choice questions, and over half use only multiple-choice questions. Analytic essays rank a close second to multiple-choice questions as testing options. These essays provide students opportunities to practice skills central to the discipline, but as assessment tools they are blunt instruments: so many processes occur at once that it is hard to know what, exactly, these tasks measure. From the perspective of cognitive science, pinpointing the factors that go into an essay of the sort used in the College Board’s Advanced Placement program’s ‘document-based question’ (DBQ) is virtually impossible. Even after decades of developing and refining the DBQ, reliability (that is, the degree of consistency in test scores) remains disturbingly low.

“With support from the Library of Congress, we developed dozens of tasks for assessing historical thinking at the high school level. Our tasks ask students to answer questions about historical sources and to explain their reasoning in a few sentences. Each task assesses one or more historical thinking ‘constructs’—core notions of historical thinking, such as the relationship between claim and evidence, the nature of chronological thinking, or how time and place influence events. These aspects apply whether one is reasoning about why Constantine converted to Christianity in 312 or why World War I erupted in 1914.

“For example, one of our tasks presents students with excerpts from two documents about the Philippine-American War and asks how each provides evidence of opposition to the war. One source is sworn testimony by the U.S. Army corporal Richard O’Brien before the Senate Committee on the Philippines, chaired in 1902 by the Massachusetts Republican senator Henry Cabot Lodge. The other is from an 1899 letter published in the Kansas City Journal by Col. Frederick Funston, who defended American involvement by casting the Filipinos as ‘illiterate, semi-savage people’ who wage war ‘against Anglo-Saxon order.’ To succeed in the task, students needed to look beyond the content of the documents to consider the occasions that prompted their creation. Senate committees are not haphazardly convened. High-ranking officers do not write letters defending military campaigns without cause. At its most basic level, this task is about warrant. Students are provided with a claim and evidence, and must specify the relationship between the two. (See figure 1.)”


Figure 1 Opposition to Philippine-American War Assessment
Shown here is an example of an assessment task asking students to examine the content of two documents while considering the occasions that prompted their creation. SOURCES: For document A, Testimony of Richard T. O’Brien, U.S. Congress, Senate, Committee on the Philippines, Affairs in the Philippines: Hearings before the Committee on the Philippines of the United States Senate, 57 Cong., 1 sess., April 2, 1902, pp. 2549–51. For document B, “Interesting Letter from Funston,” Kansas City Journal, April 22, 1899, Library of Congress, http://chroniclingamerica.loc.gov/lccn/sn86063615/1899-04-22/ed-1/seq-4/.

The article continues, “Our initial work with high school teachers showed promise. Teachers were able to use assessments to gauge students’ grasp of key concepts and to inform department-wide discussions about instruction. We began to wonder whether our tasks might address the AHA History Tuning Project’s call for measures to assess the discipline’s core concepts at the college level. Of the thousands of high school students who completed our assessments, most struggled. Would college students exposed to more sophisticated content and a greater range of sources do better?

“To answer these questions, we administered our tasks to students enrolled in a required introductory U.S. history course at a state university on the West Coast. In addition to the Philippine-American War task, we gave students a 1936 playbill for Battle Hymn, a stage production celebrating John Brown’s 1859 raid on an arsenal at Harpers Ferry, Virginia. Students had to determine whether three facts, each true, might provide evidence for why the authors wrote the play. (See figure 2.) Just as Arthur Miller’s The Crucible, about the seventeenth-century Salem, Massachusetts, witch trials, reflected the McCarthyism of the 1950s, our task asked students how a play about events in the 1850s might reflect the 1930s. Students struggled with the task in early piloting, but we could not tell if it was because they overlooked the play’s date or thought that the date was irrelevant to understanding the authors’ motivations. We thus added the first question to make the playbill’s date impossible to miss. In subsequent administrations of the task, no student got the date wrong, but most continued to struggle when analyzing the document as a product of its time.”


Figure 2 John Brown Playbill Assessment
The assessment task shown here asked students to analyze a document as a historically conscious product of its time. SOURCE: George Goldschmidt, “‘Battle Hymn’ a New Play about John Brown of Harpers Ferry by Michael Blankfort and Michael Gold at the Experimental Theatre,” ca. 1936–1941, illustration, Library of Congress Prints and Photographs Online Catalog, http://www.loc.gov/pictures/item/98516478/.

We read, “Our third task focused on sourcing: Would students attend to a document’s bibliographic information when judging its evidentiary value? We used an early twentieth-century painting, The First Thanksgiving 1621, by Jean Leon Gerome Ferris to ask students if the work would be a useful source for historians who wanted to understand the relationship between the Wampanoag and Pilgrim settlers in 1621. (See figure 3.)”


Figure 3 Thanksgiving Assessment
The assessment task shown here measured students’ ability to evaluate a source’s evidentiary value from its bibliographic information. SOURCE: Jean Leon Gerome Ferris, The First Thanksgiving 1621, ca. 1912–1930, painting, Library of Congress Prints and Photographs Online Catalog, http://www.loc.gov/pictures/item/2001699850/.

The authors describe the first study: “We administered the three exercises at midsemester to seventy-eight freshmen and sophomores in a required U.S. history course. We used a three-point rubric to score responses: ‘Basic’ (zero points) if the answer was off base and bore no relation to the competency being measured; ‘Emergent’ (one point) if the answer showed inklings of proficiency; and ‘Proficient’ (two points) if the answer demonstrated understanding. Across the three assessments were six questions (one for the Ferris painting, two on the Philippine-American War documents, and three on the Battle Hymn playbill), resulting in a possible total score of twelve points.

“Results were alarming. Students averaged less than one-half of one point. The high mark across the entire sample was a mere three points (earned by three of seventy-eight students). On the painting evaluation task, the average score for students hovered slightly above zero. (See figure 4.) Among students assigned the task, 94 percent ignored the bibliographic information accompanying the picture and evaluated the painting based on whether it matched their preconceptions about Thanksgiving. As one student wrote, ‘I agree [it would help historians]. The painting does show the nature of the relationship. In the image, we see Pilgrims and Indians interacting peacefully and joyfully.’ Other students engaged in a similar matching process but reached the opposite conclusion, rejecting the painting because it conflicted with their prior understanding. As one student explained, ‘The painting shows a pretty picture of how the Wampanoag Indians and the Pilgrims were sharing a meal and getting along, when in reality the Pilgrims didn’t come and have a peaceful communication. In reality, they came hungry for land and killed or fought anything and anyone trying to stop them.’ In neither case did the temporal gap between the image and the event it purports to depict enter into students’ deliberations.”


Figure 4 Sample Student Response to the Thanksgiving Task
This handwritten paragraph is a typical student response to the task of evaluating a source’s evidentiary value from its bibliographic information. Although many aspects of the painting are worthy of analysis, our task focused on one aspect of sourcing: the date. We registered a variety of responses that represented progress toward proficiency. For example, if students did not mention the temporal gap but speculated on the motivations of the artist, we granted partial credit. Responses awarded full credit had to note the gap in time between the creation of the painting and the event it depicts.

“Only one student focused on this gap and provided a rationale for why it mattered: ‘It was painted in 1932 and the event occurred over 300 years ago. We don’t know if the painter used a credible source to paint the painting and we don’t know if the event even looked like that back then. It’s all speculation from the painter.’ This type of reasoning—which we would hope college students would learn to do in an introductory course—was rare. Based on our experience with high school students, we suspected some college students might struggle. But we woefully underestimated how much they would struggle. Our findings raised questions about the transition from high school to college and the capabilities we can assume that students bring to introductory classes. But what about students in upper-level history courses? Would they breeze through tasks designed for high school students?”

We learn this about the second study: “We administered the same three tasks to forty-nine juniors and seniors enrolled in upper-level history courses at a different state university with a similar student population. Each student had completed at least five university history courses, and twenty-seven of the forty-nine were history majors. Recall that the Philippine-American War task asked students to explain how testimony from a Senate hearing and a letter from a U.S. Army colonel provided evidence of opposition to the war. If students explained in basic contour how each of the two documents provided evidence of public opposition, they earned a total of four points. These juniors and seniors scored, on average, less than one of four possible points (.77).

“Eighty-six percent earned no credit on the question about the Senate testimony. Rather than consider what prompted a congressional investigation, students fixated on the atrocities described by Corporal O’Brien in his statement. One history major wrote, ‘Well, provided what occurred in Document A is true, then it makes sense Americans would oppose the war. Document A would be something someone would quote who opposed the war.’ Another wrote: ‘It appears that the lower end of the chain of command was against the war in the Philippines. Due to brutal means of handling the situation in the Philippines many Americans were appalled by such actions.’ Another wrote, ‘Many Americans would oppose a war in which the opposing forces did not shoot a single bullet and came out waving a white flag. Americans generally have a difficult time dealing with the murder of children.’ Students ignored the context of the testimony and focused solely on its content. Of these forty-nine juniors and seniors, only three provided explanations that considered the context of the testimony. One of them wrote, ‘[The testimony] provides evidence that many Americans opposed the war by there being a Senate investigation. If there hadn’t been such a huge opposition by Americans to this war, I don’t believe that the investigation would have occurred.’

“Students did only slightly better on the second question. Over four-fifths failed to note that Colonel Funston was likely responding to public opposition or that the letter’s appearance in a newspaper signaled a broader debate about the war. For some, Funston’s letter provided no evidence of public opposition. One student reasoned that the letter ‘does not provide evidence that many Americans opposed the war … it’s an opinion of a man who supported the war.’ Other students could not get past Funston’s racism. One major argued, ‘This [letter] does not show public opinion but one man’s rude, unethical, and racist opinion of people.’ Another wrote: ‘[Funston’s letter] also shows how Americans opposed the war in the Philippines because of the racist views supporters had. Colonel Frederick Funston dismisses opposition by saying that they are “educated, however, about the same way a parrot is” and that they deserve strict discipline to get them in order. Thus, this shows that Americans opposed a racist war.’ Only six students out of forty-nine were able to see how the publication of Funston’s letter might provide evidence of opposition to the war. (See figure 5.)”


Figure 5 Sample Responses from Students in Upper-Level History Courses
This table provides examples of the range of answers provided in response to each of the assessment tasks. We used a three-point rubric to score responses: “Basic” (zero points) if the answer was off base and bore no relation to the competency being measured; “Emergent” (one point) if the answer showed inklings of proficiency; and “Proficient” (two points) if the answer demonstrated understanding. Across the three assessments were six questions (one for the Thanksgiving painting, two on the Philippine-American War documents, and three on the John Brown playbill), resulting in a possible total score of twelve points. SOURCES: For task 1, Testimony of Richard T. O’Brien, U.S. Congress, Senate, Committee on the Philippines, Affairs in the Philippines: Hearings before the Committee on the Philippines of the United States Senate, 57 Cong., 1 sess., April 2, 1902, pp. 2549–51. For task 2, George Goldschmidt, “‘Battle Hymn’ a New Play about John Brown of Harpers Ferry by Michael Blankfort and Michael Gold at the Experimental Theatre,” ca. 1936–1941, illustration, Library of Congress Prints and Photographs Online Catalog, http://www.loc.gov/pictures/item/98516478/. For task 3, Jean Leon Gerome Ferris, The First Thanksgiving 1621, ca. 1912–1930, painting, Library of Congress Prints and Photographs Online Catalog, http://www.loc.gov/pictures/item/2001699850/.

In assessing the future, the authors tell us, “These results give us pause. If a required survey course is the only history that students are exposed to during college, what ways of thinking do we want them to master? How can we make sure that students develop such ways of thinking? These questions become sharper still when applied to majors. Unlike their peers in computer science or engineering, the vast majority of history majors will not pursue history as a profession but will go into law or finance or any one of a number of professions. Historians have long claimed that historical study teaches critical thinking. Our results suggest that this may not occur by osmosis. Might a more direct approach be necessary?

“To ensure that students develop the reasoning skills central to the discipline, we need new tools to gauge their learning. We do not labor under the assumption that our exercises have solved the problems of history assessment. Our tasks are open to numerous challenges, particularly in their failure to exhaust the wide range and richness of historical thinking. At the same time, we believe that for the field to progress, abstract goals must be given concrete form. We agree with the AHA History Tuning Project’s call for students to ‘contextualize information.’ But what does this look like, and how can we find out if students are learning to do it? Our tasks embody one possible form that brief assessments might take. They provide concrete points of reference that ground department-wide collaboration in ways that abstract goal statements do not.

“New assessments are a start, but they are insufficient by themselves. A collaborative effort to explore new directions in assessment practice must be organized. Our tasks are best understood as formative assessments rather than end-of-course tests. In the assessment literature, formative assessment is distinguished from end-of-course assessment by its purpose: to inform teaching, not to give students a grade. Formative assessment provides a window into student thinking. Moreover, it gives students feedback on whether they are on track to master course content. Rather than waiting to see what students have learned on a final exam, formative assessment allows us to gauge student learning more frequently and tailor instruction more precisely. Instructors can slow down and revisit concepts that students find challenging or pick up the pace on material that students master quickly.”

They continue, “Formative assessment is rare in the college history classroom. It does not have to be. On the first day of class, instructors could take five minutes and have students complete the task using The First Thanksgiving 1621. Rather than grade responses, instructors could use the task as an entry into a conversation about the evaluation of evidence. Alternatively, instructors could collect student responses and quickly scan them to get a better sense of the beliefs students bring to class. The next session could begin with a discussion of evaluating evidence based on representative student responses. Along with Harvard University’s Eric Mazur, the Nobel laureate Carl Wieman has pioneered the use of clickers (a type of audience response system) to assess student understanding in college science classes. Wieman has shown how instructors can obtain immediate feedback about student thinking by having students respond to prompts he projects from the podium. Nothing is stopping us from doing something similar. Instructors could display one of our tasks and show typical responses, asking students to select which one is best and explain why in small groups. These responses would provide instructors with instant feedback about student understanding instead of assuming that what is second nature to historians is second nature to students.

“Student responses also provide opportunities for departmental collaboration. We observed collaboration of this sort at the high school level when we worked with a department that met monthly to discuss student work. At each meeting, teachers reviewed student responses to our exercises and discussed how well students grasped aspects of historical thinking. Over the course of a year, teachers shared strategies for integrating assessments into their courses and developed a shared set of expectations for student learning.

“The study of history should be a mind-altering encounter that leaves one forever unable to consider the social world without asking questions about where a claim comes from, who is making it, and how time and place shape human behavior. If the major is to succeed in fulfilling this mind-altering mission, historians cannot be resigned ‘to suck at assessment.’ There may be disagreements about how to define the major, but we doubt that any readers of this article would celebrate the fact that most students ignored the date of a document or failed to consider the context in which it was created. As Anne Hyde noted, the assessment train is barreling ahead. If historians do not create assessments that capture the unique aspects of the discipline, others will come in with their one-size-fits-all tool kit and do the job for them.”

This article is excellent reading for anyone teaching history.
