An Approach to Marking Handwritten Concept Maps

By Abhirami Senthilkumaran, Honorary Research Associate, UBC Faculty of Forestry, and Fernanda Tomaselli, Assistant Professor of Teaching and Coordinator, Land One program, UBC

For our BCcampus Research Fellows project, we are examining the outcomes of using a co-operative board game in an undergraduate climate change class. One of the data collection materials we used to evaluate student understanding was concept maps. We sought to evaluate how students' understanding of the key concepts and connections related to climate change, across the categories of causes, mitigation, impact, and adaptation, changed after the instructional activity (i.e., playing a co-operative board game).

Students were divided into two groups: one group played a co-operative board game related to climate change, and the other group played a co-operative board game unrelated to climate change. All participants were given the following instructions and handout (Figure 1) before and after the instructional activity, with ten minutes to complete the concept map each time. All submissions were annotated with a five-digit unique code so that pre-test and post-test responses could be compared. Students submitted hard-copy responses and also photographed and uploaded an image of their responses to the online questionnaire.

Student Instructions

  • In this section we are interested in learning about your understanding of climate change. Please complete the steps listed below. (We are not looking for a specific correct answer.)
  • In about 2 minutes, for each of the following categories related to climate change, list as many keywords as you can think of: CAUSES, MITIGATION, IMPACT, ADAPTATION.
  • Take 5 minutes and draw a concept map of climate change that incorporates the keywords from your list.
  • In the next 3 minutes, label the arrows with verbs (e.g., “leads to”, “increases”) when possible and include any feedback loops or interactions you are aware of. Use icons or visuals to aid comprehension. You may use a digital drawing platform or the paper provided. If you used a digital drawing app, please save your image in .jpg or .png format. If you used the paper provided, please take a photo of your work with your phone.
Figure 1: Handout provided to students as a starting point for their concept-mapping activity.

Studies that use concept maps often involve concept-mapping software (Eggert et al., 2017), which requires 45–60 minutes of onboarding. Given the cognitive challenges of learning and playing a new board game, we decided to take the handwritten route for our concept maps. This allowed students to focus on the content of their concept maps without worrying about the tools and procedures for creating one.

We reviewed the literature on assessing concept maps and identified the following evaluation metrics (Watson et al., 2016):

  • Traditional: Each concept map is assigned a separate score for knowledge breadth, knowledge depth, and knowledge connectedness.
  • Holistic: Each concept map is assigned an overall score based on its comprehensiveness (knowledge breadth and depth), organization, and correctness.
  • Categorical: For each top-level category in the concept map, a category score and a complexity index are computed. The category score captures the number of concepts in that category, and the complexity index measures the degree of interconnectedness between concepts in that category and concepts in other categories. Categories may be predetermined (e.g., the sub-headings in our template handout) or generated from themes that emerge in the concept maps.

For our evaluation, we used a combination of traditional and categorical scoring. For each concept map, we recorded the number of concepts under each top-level category and the number of connections from a concept in that category to a concept in any other category. We felt that a mere count of concepts and connections did not capture the quality of the ideas and relationships in the concept map, so we also included a sophistication score between 0 and 3, in increments of 0.5, to indicate the level of systems thinking demonstrated by each concept map. Our rubric was adapted from Gray et al. (2019), with expectations adjusted to account for the fact that students had only ten minutes to complete the task.
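To make the bookkeeping concrete, here is a minimal sketch in Python of how one scored concept map might be recorded and the two categorical metrics computed. The data structure, field names, and example values are our illustration for this post, not part of the published scoring scheme.

```python
from dataclasses import dataclass, field

@dataclass
class ScoredMap:
    """Record for one concept map (illustrative only)."""
    student_code: str          # five-digit anonymous code (example value below is invented)
    phase: str                 # "pre" or "post"
    # category -> list of concept labels found under it
    concepts: dict[str, list[str]] = field(default_factory=dict)
    # (category_a, category_b) pair for each drawn connection
    connections: list[tuple[str, str]] = field(default_factory=list)
    sophistication: float = 0.0  # 0 to 3, in 0.5 increments

    def category_score(self, category: str) -> int:
        """Number of concepts listed under a category."""
        return len(self.concepts.get(category, []))

    def complexity_index(self, category: str) -> int:
        """Connections linking this category to any *other* category."""
        return sum(1 for a, b in self.connections
                   if (a == category) != (b == category))

m = ScoredMap("48213", "pre",
              concepts={"causes": ["fossil fuels", "deforestation"],
                        "impact": ["sea level rise"]},
              connections=[("causes", "impact")],
              sophistication=1.5)
print(m.category_score("causes"), m.complexity_index("causes"))  # 2 1
```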

Each concept map was scored by two of the three members of the research team. When our scores differed by more than one point on the category or complexity scores, or by more than half a point on the sophistication score, we discussed our rationale and arrived at a corrected score that both researchers agreed on. Scoring was done blind: we did not know which experimental group a student had been assigned to while marking their concept map. We were then able to compare changes from the pre-test to the post-test, as well as scores between the two groups in the experiment.
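Our reconciliation step was done by hand, but the flagging rule is simple enough to state in code. A sketch of how pairs of scores could be screened against the thresholds above; the metric names are hypothetical:

```python
def needs_discussion(rater_a: dict[str, float], rater_b: dict[str, float]) -> list[str]:
    """Return the metrics where two raters differ beyond our thresholds:
    more than 1 point for category/complexity scores, more than 0.5
    for the sophistication score. Metric names are illustrative."""
    flagged = []
    for metric in rater_a:
        tolerance = 0.5 if metric == "sophistication" else 1.0
        if abs(rater_a[metric] - rater_b[metric]) > tolerance:
            flagged.append(metric)
    return flagged

print(needs_discussion({"causes_count": 4, "sophistication": 2.0},
                       {"causes_count": 6, "sophistication": 1.5}))
# ['causes_count']  (sophistication differs by exactly 0.5, within tolerance)
```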

Lessons Learned

The choice not to use a concept-mapping software tool was prudent from the students' point of view. We were able to collect data in 10–15 minutes that would otherwise have required up to an hour of preparation and training time for each student. From the researchers' point of view, however, it increased our marking time significantly. Counting the number of concepts and connections under each category was fairly straightforward, but consistent scoring required many micro-decisions about punctuation and wording: Do concepts under the same hierarchy separated by commas or forward slashes count as a single concept or multiple ones? Do concepts joined by an “and” conjunction count as one concept or two? Should a two-sided arrow count as a connection under both categories it is drawn between, or as half a point in each? The sophistication score, meanwhile, had the least consistency across raters; we had to schedule multiple practice scoring sessions to align on a shared rubric and to record reference examples for each score level.
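For illustration, here is one way such a counting convention could be encoded once a team settles on it. This sketch adopts the "split everything" option, in which commas, slashes, and "and" all delimit separate concepts; that choice is an assumption for the example, not necessarily the rule we applied.

```python
import re

def count_concepts(node_label: str) -> int:
    """Count concepts in a single node label under one possible convention:
    comma-, slash-, and "and"-separated terms are all separate concepts.
    (An assumption for illustration, not necessarily the rule we used.)"""
    parts = re.split(r",|/|\band\b", node_label)
    return sum(1 for part in parts if part.strip())

print(count_concepts("droughts and floods"))      # 2
print(count_concepts("sea walls, dikes/levees"))  # 3
```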

We divided the scoring among the three of us: one researcher scored all of the maps, a second scored all of the pre-tests, and the third scored all of the post-tests. In retrospect, it may have helped to instead have the second and third researchers split the students, so that the same person marked both the pre-test and the post-test for a given student. Doing so would have eliminated the bias introduced if the pre-test or post-test scorer interpreted the rubric more leniently than the other, while still keeping the researchers blind to which experimental group's concept maps they were marking.
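A sketch of that retrospective fix: randomly split students (rather than individual maps) between the second and third raters, so that each of them marks both phases for their assigned students. The rater labels and seed are hypothetical.

```python
import random

def assign_second_rater(student_codes: list[str], seed: int = 42) -> dict[str, str]:
    """Rater A scores every map; raters B and C each take half of the
    students and score BOTH the pre- and post-test for those students,
    so no single rater's leniency skews the pre-to-post comparison."""
    rng = random.Random(seed)   # fixed seed for a reproducible split
    codes = sorted(student_codes)
    rng.shuffle(codes)
    half = len(codes) // 2
    return {code: ("B" if i < half else "C") for i, code in enumerate(codes)}

print(assign_second_rater(["48213", "91027", "33415", "60288"]))
```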

In addition to the concept map evaluation, we performed a text frequency comparison on the lists of keywords submitted. Word clouds from this analysis revealed interesting shifts in themes within and across categories. Stay tuned for a report on those findings.
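The frequency comparison itself is straightforward to reproduce. A minimal sketch, assuming each phase's keywords are collected into flat lists; the example keywords are invented:

```python
from collections import Counter

def frequency_shift(pre_keywords: list[str], post_keywords: list[str]) -> dict[str, int]:
    """Change in each keyword's frequency from the pre-test to the post-test."""
    pre = Counter(k.strip().lower() for k in pre_keywords)
    post = Counter(k.strip().lower() for k in post_keywords)
    return {k: post[k] - pre[k] for k in set(pre) | set(post)}

print(frequency_shift(["carbon tax", "flooding"],
                      ["carbon tax", "seawalls", "seawalls"]))
# {'carbon tax': 0, 'flooding': -1, 'seawalls': 2}  (key order may vary)
```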

We would like to thank Sadie Russell, Graduate Academic Assistant, UBC Faculty of Forestry, for her excellent work conducting the gameplay sessions, supervising data collection, and scoring all the concept maps.

References

Eggert, S., Nitsch, A., Boone, W. J., Nückles, M., & Bögeholz, S. (2017). Supporting students’ learning and socioscientific reasoning about climate change—The effect of computer-based concept mapping scaffolds. Research in Science Education, 47, 137–159.

Gray, S., Sterling, E. J., Aminpour, P., Goralnik, L., Singer, A., Wei, C., … & Norris, P. (2019). Assessing (social-ecological) systems thinking by evaluating cognitive maps. Sustainability, 11(20), 5753.

Watson, M. K., Pelkey, J., Noyes, C. R., & Rodgers, M. O. (2016). Assessing conceptual knowledge using three concept map scoring methods. Journal of Engineering Education, 105(1), 118–146.


Learn More

2024–2025 BCcampus Research Fellows

2024–2025 BCcampus Research Fellows: Fernanda Tomaselli and Abhirami Senthilkumaran