How to grade reading comprehension?

How do you grade reading comprehension?

This has been a hard question for me to respond to for quite a few years now. I have felt somewhat torn about what kind of reading skills I have wanted to assess in order to fit my vision for a proficiency oriented course. Finally, I have a plan, and I feel confident that it will support my students’ progress on the path to proficiency.


I unveiled my new reading assessment rubric during a Facebook LIVE on Friday. With many teachers who have no background in Comprehension Based teaching or Standards Based Assessment adopting the SOMOS Curriculum this year, I knew that I needed to bring some clarity to my vision for assessment. If you’d like to understand my complete vision, please search #newtosomos and #assessment in the SOMOS Curriculum Collaboration group.

Today, I want to share with you one small piece–and one that I have been wrestling with for a few months: assessing reading and listening comprehension (the Interpretive mode).


Soon after I made the switch to Standards Based Assessment (well, as much as one can in a traditional grading system), I began evaluating all of my students’ productive work (presentational writing and presentational speaking) using this Proficiency Targets rubric. The rubric, which is adapted from a rubric that Crystal Barragan created, evaluates student production in reference to the language used to describe proficiency sub-levels in the ACTFL Proficiency Guidelines. Because the Proficiency Guidelines describe real-world performance, this rubric is really a performance rubric (similar to ACTFL’s Performance Descriptors, but broken down into sub-levels of proficiency). To me, it was a really easy decision to evaluate production in this way because student performance in the presentational mode in the classroom setting very closely mimics their performance in the real world (their true proficiency).

Still following? Phew!


I don’t evaluate beginning students in the Interpersonal mode, so that rubric was easy because I didn’t make one.


The interpretive mode, though…well…that one was trickier. I saw two routes:

  1. Basing the rubric on the Performance Descriptors. This would be the best predictor of real world interpretive proficiency, and I like that. This would require all interpretive assessments to involve authentic resources, since the Proficiency Guidelines and Performance Descriptors all describe interpretive ability in relation to authentic materials.
  2. Basing the rubric on traditional reading comprehension rubrics. This would evaluate the same reading skills that L1 teachers are looking to develop, and I like that. This would allow interpretive assessments to involve teacher-created texts.

There are many interpretive assessments in SOMOS that involve teacher-created texts, so going with Option 1 would require a total overhaul of the assessments in SOMOS 1 and 2. And while it would be a labor…I would be totally game for that labor if I felt that it would best support my curricular goals. Having already gone the Performance Descriptors/Proficiency Guidelines route with my Productive rubric, I have felt very drawn toward Option 1. I would love for my students to see their progress toward the goal of being able to interpret authentic resources with ease, understanding even the nuances contained within. However, the stress that comes with student interaction with an authentic resource–especially in a testing situation–caused me to think twice before making the switch.

What is the goal of my program?


How do my students get there?


How can assessments best support the goal?

  1. Providing opportunities for students to demonstrate progress toward the goal
  2. Encouraging students to continue the journey
  3. Giving me the information I need to plan future instruction and interventions

I have decided that using authentic resources as the basis for all of my interpretive assessments does NOT best support the goal of proficiency. While #authres-based assessments can certainly allow students to demonstrate progress, I think that the stress that inevitably comes with being tested on their ability to figure out what a resource means–even a well-chosen resource–undermines what I know to be true about language acquisition. I have decided to keep #authres in their current role in the SOMOS curriculum, as sources of comprehensible input and intrigue–appearing frequently in low-stakes roles.

How can I assess learner comprehension of teacher-created texts?

Now that I have recommitted to my decision to use teacher-created texts as the basis for my interpretive assessments, I needed to bring clarity to how I recommend using them. In the past, I gave vague answers that involved some mix of depends-on-how-many-questions-they-missed and depends-on-which-questions-they-missed and depends-on-how-they-answered-the-questions. Phew! I did not have a duplicatable model for interpretive assessment…so I set out to create one!

Use this interpretive rubric to assess reading and listening comprehension in world language courses

As you can see, this rubric evaluates comprehension from four angles:

  1. Comprehension of individual words and phrases
  2. Comprehension of concepts (main ideas and details)
  3. Ability to cite textual evidence to support conclusions
  4. Ability to infer (interpret meaning of unfamiliar words based on context, extract information not explicitly stated in the text)

Click here to access an editable version of this rubric!

I feel confident providing this rubric to teachers using my curriculum because it evaluates progress within the framework of comprehension based instruction. In comprehension based courses, students develop what Terry Waltz calls ‘micro-fluency’ in her fantastic manual, TPRS with Chinese Characteristics. In the classroom context, our students seem to almost skip over the Novice proficiency level in the interpretive mode altogether. (Keep in mind that ACTFL’s Proficiency Guidelines describe performance not in the classroom setting, but in the real world! Students in comprehension based programs do NOT skip over the Novice level in the real world.) Because of this micro-fluency, I think that evaluating student interpretive comprehension using rubrics aligned with more traditional L1 reading comprehension rubrics better communicates student progress toward the goal of proficiency and better informs my instruction.

Soooo…how is your vision for a proficiency oriented language course realized? Do your assessments support that goal? How? What lingering questions are YOU wrestling with? How does my vision for interpretive assessment match yours, and what are the points of departure?

Let’s talk!!


16 thoughts on “How to grade reading comprehension?”

  1. Kim Lancaster says:

    I agree with much of what you say. I am wondering, though, what we can do to eliminate some of the stress students feel when faced with authentic resources. If that is our ultimate goal, being proficient enough to interact with authentic resources, shouldn’t one of our focuses be to use enough authentic resources (carefully chosen) in the classroom that students no longer feel stressed in that scenario?

    • Martina Bex says:

      Totally agree! I use authentic resources frequently–I just prefer to use them in low-stakes situations (as activities, not assessments).

  2. vtracy1 says:

    This rubric is absolutely fantastic. Assessing the Interpretive Mode in a consistent manner has always been a struggle for me. The language that distinguishes each level is not only clear but allows for i + 1. I appreciate that subtle aspect and I would feel very confident using this tool in my classroom. Thank you for sharing this!

  3. Iris Cortes says:

    Thanks for sharing, Martina. Another point in interpretive assessment that I struggle with, in addition to having a good rubric, is whether or not to have the students answer in English or in the TL (Spanish in my case). I haven’t read your Somos curriculum so I don’t know where you stand on use of English in interpretive tasks. Currently I do both, “depending”…on where we are in the year, what I want to assess, how much time we have…you know the drill. I would love to hear your thoughts.

  4. Melissa Mullins says:

    Fantastic resource! I truly appreciate the thoughtful way you express the process which brought you to this conclusion. You are a wonderful model for meta cognitive thinking, something which I am trying to improve upon.
    In my school we are having a big push to increase Lexile scores and therefore reading comprehension, system wide. Having a rubric that mimics the skills that L1 teachers are looking for helps me to prove the importance of a foreign language course; that it boosts L1 communicative proficiency.
    My only lingering question, at the moment, is how would you connect this to a ‘grade’? My students are not used to receiving just proficiency marks, but want a grade attached to it. I would rather only ‘grade’ summative assessments; but I’ve encountered push back from students and admin to have more grades for each standard.
    Any thoughts?

    • Martina Bex says:

      Most of my students’ grades fell into my 5% ‘Formative/work habits/citizenship’ category. Some teachers have the ability to weight a category like this at 0%, and that is a great option! In this way, students, parents, and admins are seeing lots of grades–which are communicating information about student progress to those parties. In the summative categories, there are few grades, but that SHOULD be the case, because those assessments come at the END of learning periods, or at benchmark moments in learning periods…so by nature there cannot be many grades in each of those categories. Otherwise, they wouldn’t be summative!

  5. Kristin Terry says:

    Are the questions and evidence that they provide in the target language? in English? or a mix of both? How do you know if they “interpret new/unfamiliar words?” What do those questions look like on an assessment? Thank you.

  6. Maria says:

    I think this rubric is useful: however, the A,B,C,D,F attached to the level need to be adjustable, according to the experience of the student. For example, I can’t give a freshman in French 1 a D or an F for the first quarter grade for being a beginner!
    I have been working on rubrics a lot. I’ve set a target level expectation for each semester. The levels are written on my rubrics. When I have multi-year classes (mix of French 2, 3, 4), I remind myself (by re-reading the descriptor) of the student’s experience level before I grade his/her paper/test/performance. Even when grading my French 1 work, it helps me to re-read the descriptor to keep my expectations “real.”
    I think assessing someone’s proficiency level, and grading their work may have to be separated. We have a hybrid proficiency-based and traditional reporting system going on at our school. (apples and oranges!) Trying to do both can be challenging.

    • Martina Bex says:

      I so agree! This is one of the reasons that I decided to NOT assess interpretive comprehension with reference to proficiency guidelines. With the micro-fluency that I see in my students, I feel confident setting the expectation at the ‘green’ arrow on this rubric!

  7. Celine Brabo says:

    Martina, I discovered your blog only a few weeks ago, after attending the MaFLA Proficiency Institute. Your thoughtful posts and links have been so helpful. I am just beginning to incorporate proficiency “methods” (units, rubrics, grading, etc.) in my French and Spanish classes (levels 3 and 4), and I was struggling with the thought of giving my students summative assessments based on authentic resources for the interpretive mode. This post gave me the answer. What a relief. You are brilliant. Thank you so much.
