My maternity sub is unavailable on one day during finals week, so I needed to develop a new final exam for Spanish II that did not involve the use of the computer. (Last year, I gave my students this exam based on the video La leyenda del espantapájaros.) Wanting to do as little work as possible, I turned to the Embedded Readings blog to see what pre-made readings I could find and turn into a test. I found a reading by Matt (don't know his last name) about La Llorona that I could work with. Two hours later (so much for saving time, right?), I had a new final exam with a past-tense reading. Since a Spanish speaker will not be administering the exam, it consists only of reading and writing. Download the pretty PDF of the exam here, or download the not-so-pretty Microsoft Word version here. (Since I do all my word processing with Pages, the files often get screwy when I convert them to Word. Sorry!) Both documents are FREE, my Teacher Appreciation gift to YOU! The final includes target structures from the storytelling units that I use to focus on the preterite tense (in addition to vocab that we've studied since Spanish 1), so check out those units if you are looking for how to adequately prepare your students for the final.
Julia developed an awesome extension for running dictations (click here to read how to do a basic running dictation)! Instead of having students record the events on a single piece of paper, she had the secretary write each event on a separate square of paper. After the secretary recorded it, he or she passed the paper to another student in the group (one that was not currently the runner) to illustrate. By doing so, she added another role to the activity and increased the level of engagement! To put the events in order, then, students only have to stack the papers (the first on top and the last on the bottom). They can staple them and hand them in very easily, instead of trying to re-write the list or number the events on the side.
The best part about this extension, however, is that you now have illustrations to use for any number of activities. Julia used them for a listening assessment: she showed two pictures on the Doc Cam and read one of the statements from the dictation. Students had to write “A” if she was describing the picture on the left or “B” if she was describing the picture on the right. It lowered affective filters that usually go up during assessments because the kids got to see the drawings of their classmates…sometimes quite interesting!
Can I please have an intern ALL the time!?
I continue to experience the same problem with each reading assessment that I assign, and I am wondering if anyone else experiences it as well and/or has input and suggestions.
When I administer a reading assessment, I make absolutely certain that it is comprehensible to my students. If there are any words that my students haven't learned and shouldn't be able to figure out with a few squints of concentration, I footnote them. The point is to assess whether or not students understand the structures that they are supposed to have acquired when they read them in context. So I get really frustrated when I grade an assessment and the scores average a Developing (C) or, worse, an Emerging (D)!
Occasionally, I can look back and see that I was trying to stretch them a bit too much. This was the case with my Spanish A kiddos last week. But those instances are few and far between because I am SO CAREFUL when designing reading assessments. The problem, I have discovered, is that my students are really, really, really bad at answering questions. I have discovered this pattern because I will often give papers back to students and have them write out translations of the readings, and they translate the entire thing without significant errors. Then, I ask them to go back and re-answer the questions. Most of the time, they say, "OH DUH!" and correct their mistakes. Sometimes, however, they still don't get it. About a month ago, this sentence appeared in a reading for Ladrones: "The robbers robbed the same store four times." The question was, "How many different stores were robbed?" The answer, obviously, was ONE. Even after students translated the sentence, however, many were unable to answer the question.
Do you think that my questions are just too hard??
Is this nothing more than a result of my students’ low ENGLISH reading comprehension?
Is it fair and accurate to accept a correct translation of the reading as proof of their Spanish reading comprehension, or is the fact that they can’t answer questions (in English) about a Spanish reading proof that their Spanish reading comprehension is low?
I need input, people!!!!!!! Help!!
This week is conference week in Anchorage. At the middle school level, we do Student Led Conferences, in which students prepare a portfolio that includes work samples and self-reflections and present it to their parents. Teachers check in during the conference to see if parents have any questions, but the student is responsible for explaining his or her experience in the class to his or her parents.
I have struggled to create self-reflections that elicit honest, accurate responses from students, and I think that I have finally found the magical combination (until next year, I’m sure)!! First, students complete this self-reflection about their experience in the class. I have used this in the past, and it has proven a successful conversation starter for students and parents. Kids get to talk about what they like and don’t like, and consequently the areas in which they experience successes and struggles. Download a free, editable version of this document here.
The second piece is one that I added this year, and I am VERY pleased with its success. It is a self-reflection on behavior, work habits, etc. (their Citizenship grade, essentially). I found that by simply re-phrasing the questions (asking students what they think I would say to their parents, rather than what they think about their own performance), students produced more honest, well-thought-out, and well-defended responses. Knowing that I would later circle whether I agreed or disagreed with their thoughts held them accountable, and I added comments to support my agreement or dissent. Download a free, editable version here (but beware: the font that I used in the original is most likely not on your computer, so you will probably need to do some re-formatting).
Students included this self-reflection in their portfolios with work samples and explanations of why they chose those samples. The final piece that I used was this "instruction sheet" for parents, so that they could speak with their child about the missing and incomplete components of the portfolio. It is helping parents feel equipped to address concerns about work habits with their child and to develop strategies and goals in response.
There are some benefits to Student Led Conferences and some challenges, but I am thankful for the opportunity they provide students to have meaningful conversations with their parents about their class performance! They are also great for the MANY families in our school that do not speak English! What do you do for conferences??
Someone started a new thread on the #flteach listserv about Standards-Based Assessment, so of course I had to jump in on the conversation. I'm posting my response here because I was filled with warm fuzzies while I was writing it. Standards-Based Assessment is the best!! If you've not made the switch, you're missing out!!
This is my third year using Standards-Based Assessment, and I'm hooked.
First of all, my students' grades reflect their performance in the language, not their performance as a student. In my old grading system (quizzes, tests, homework, classwork, projects, participation), students could earn high grades simply by being good, responsible students, and all the language ability in the world could not earn a 'slacker' student a high grade. The frustrated teacher in me liked this because I liked rewarding students that worked hard and "punishing" students that didn't. "See! You have a D because you never turn in your work!" But that is not just. My course is called "Spanish," not "Work Habits," and my students' grades should reflect their ability to interpret and produce the language. My categories are now Reading (25%), Writing (25%/20% depending on level), Listening (25%), Speaking (20%/15% depending on level), Culture (5%), and Citizenship (5%). More on Citizenship later. I have fewer Fs and Ds, but not fewer As. Most students have As and Bs because I am focused on their proficiency, and I know that if my students are getting Cs and Ds on assignments, it's not because they are lazy and not doing their work but because they don't understand the material. I spend more time working on it until the majority of my students are proficient. The other great thing about this system is that my students' grades typically go UP as the year goes on instead of down.
My students receive a score of "Advanced," "Proficient," "Developing," "Emerging," "Beginning," or "No Attempt" on each assignment. These correspond to A/95%, B/85%, C/75%, D/65%, F/55%, and F/50%, and to ACTFL proficiency levels. As an example, please see this document that I created by combining the ideas of about a million different brilliant minds (it's free!): http://www.teacherspayteachers.com/Product/Proficiency-Targets. Even though my students are well aware that an Advanced is the same as an A and an Emerging is the same as a D, I have found that the labels encourage them to work harder to move to the next level. An "F" means "FAIL," an ending, whereas a "Beginning" shows that there is hope and that the student is just at the beginning of the journey. Students WANT to jump to the next level. If they're Developing, they want to be Proficient. If they're Proficient, they want to be Advanced! I also like it because students know that they cannot improve their grades by doing extra credit or handing in missing assignments. They have to perform better in order to improve their grades, and so they actually STUDY and PRACTICE to bring them up. It's fantastic!! Best of all, I have students re-take my course BY CHOICE after they have failed to move on to the next level (D or F). I think that that is the most wonderful testimony to the hope that this grading system gives them. They do not think that they have failed, but instead that they need more work to become proficient. I love it.
As many of you have brought up, the great conundrum caused by Standards-Based Grading is how to hold students accountable for anything but summative assessments. Some schools issue two grades for their students: one for content performance and one for behavior/study habits. I wish this were the case for me! My solution is the Citizenship category. It's only 5% of my students' grades, and it includes formative assessments, participation, attendance, behavior, work completion, etc. It's a catch-all. It is such a low percentage that it has almost no effect on their final grade, but it's something that the frustrated teacher in me can hold onto and hold over their heads. It's a grade that I can point to at Student Led Conferences or mention in a phone call to show parents tangibly how "studious" their child has been. Now, my kiddos are middle schoolers and don't really have any concept of just how little of an effect a 5 percent category has on their overall grade. If you teach high school, where students actually understand percentages, you might bump it up to 10 percent. But I think it is very important that even an extremely low grade in Citizenship (namely, an F) not bring the student's overall grade down more than a single letter. In most situations, students' grades have already suffered because of their poor work habits, and so adding on additional punishment simply to stick it to them is really not necessary.
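If you want to check how little a 5% category can move an overall grade, the weighted-average math is quick to sketch. This is only an illustration: the category names and weights come from this post (using the 25% Writing / 20% Speaking mix), while the function name, the letter-grade percentages, and the all-Advanced example student are my own assumptions.

```python
def weighted_grade(scores, weights):
    """Overall percentage: weighted average of category scores,
    normalized by the total weight of the categories present."""
    total_weight = sum(weights[category] for category in scores)
    weighted_sum = sum(scores[category] * weights[category] for category in scores)
    return weighted_sum / total_weight

# Category weights as described in the post (one level's mix).
weights = {"Reading": 25, "Writing": 25, "Listening": 25,
           "Speaking": 20, "Culture": 5, "Citizenship": 5}

# A hypothetical student who is Advanced (A = 95%) in every category
# but earns an F (50%) in Citizenship:
scores = {category: 95 for category in weights}
scores["Citizenship"] = 50

overall = weighted_grade(scores, weights)
print(round(overall, 2))  # about 2 points below a straight-A 95
```

Even in the worst case, the Citizenship F can only pull this student down by roughly 5/105 of the 45-point gap between the A and the F floor, about 2 percentage points, nowhere near a full letter grade.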
I just found this file from the end of last year! I used this video to create a video-based assessment (a la Alma) for the final exam for my Spanish II students. The video is just wonderful, and although the vocabulary is challenging, the narrator speaks very slowly and clearly. Download the assessment here.
Open House is tonight, and I’m feeling the pressure to have grades for my students’ parents to see. My first year kids only have one grade in the books so far, so I gave them a listening assessment today so that they could have a second one. But as I’m sitting here looking at them, I realize that I can’t put them in the gradebook!! They are formative at best, but even that is questionable because they really aren’t an accurate measure of my students’ listening abilities.
I asked the story Camina y corre in class today, and it was a huge success!! In one class, a girl was walking with her friend Lady Gaga and saw Michael Jackson, who turned out to be a zombie that ran after the girls. In another class, one girl saw One Direction, but she doesn’t like them so a classmate who is completely obsessed ran after them, but they ran away. My sixth graders thought that it was the best thing ever!!
So after the story, I gave them the listening assessment–four questions in English about the story. But I realized that (1) I can’t count it as a summative assessment because we haven’t even gotten to the “read” stage of TPRS yet–they are still just learning the words, and (2) it isn’t even accurate as a listening assessment because the whole time I was pausing and pointing to words on the board so that the kiddos could read them, and the actors were acting everything out. I tried to think of questions that weren’t obvious from the acting, but even then I think I ended up with details that the kids couldn’t exactly remember, even if they understood them at the time. Sigh. I need to remember that if I want to put something in the gradebook, it has to be summative (the kids must have “finished” learning whatever I am assessing), and it must be accurate.
Maybe I’ll count it as a Citizenship grade…the kids had to be paying attention to know the answers, right?….
In the RTI (Response to Instruction/Intervention) framework, there are five categories of assessments: outcome measures, universal screeners, progress monitoring assessments, diagnostic assessments, and informal assessments. Each kind of assessment plays an important role in determining each student's abilities, strengths, and weaknesses in whatever content area it is meant to assess.
At our school, students take a universal screener three times per year in reading, writing, and math. A universal screener is meant to give the teacher a snapshot of all of his or her students and how they compare to the standard and to their classmates at the time of its administration. I have decided to administer universal screeners quarterly to my students, and I am beginning with a writing assessment today.
Yesterday, my students drew a picture of where they went this summer, and we talked about some of them. Today, I will pass out the drawings at random, and each student will need to create a story that explains the random picture that they receive. I will explain that they can make up anything, and that it is their job to build the story from things that they know how to say: if they get a picture that they don't know how to describe, they should focus on a specific detail or use their imagination so that they can talk about it!
I changed my mind at the last minute. Instead of giving each student different papers, I realized that all students should be working from the same picture so that I could truly compare them to each other. I used this great visual from the University of Pittsburgh because there are tons of things going on and many opportunities to create imaginary story lines.
Students will not receive a grade (that is entered in the gradebook) for this assignment. Rather, they will see how they compare to the proficiency targets that are set for their level. I made this form for them to use, and there is a different rubric on the back for each level I teach. I will not screen my Spanish A students until next quarter, since they cannot produce any Spanish at this point.
This is the final exam that my Spanish A students took today: Spanish A Final Exam 2012
I am not proud of it because it isn't particularly academic (there is no cultural/real-world aspect to it), but it was somewhat student generated, which I think is a good thing. Both reading sections were free writes written by students in my classes. It includes a good, but not great, range of vocabulary and structures that we have studied throughout the year. I did not include a speaking portion in the exam because our technology has been collected/cleaned/locked up, and because the speaking cards worked well and have already given me an accurate read on each student's speaking abilities.
I much preferred what I did with my Spanish 2A midterm exam, with the short film Alma, and hope to find a good film to use for this final exam in the future. Someday…
I was thinking this morning about how challenging it really is to design accurate assessments. There are so many variables that affect a student's performance on any given assessment that it is really quite difficult to control all but the one that you wish to assess. Because I use Standards-Based Grading (since it gives much more valuable feedback than traditional categories: my students know how well they speak, read, write, and listen in Spanish instead of how well they do homework or perform on quizzes), I must always isolate one skill area to assess at a time. This is not easy!!
For example, I gave my students in Spanish 1A an “Incorrect Dictation” as a listening comprehension assessment today. We read the Biblioburro article a few days ago, and have since watched the video and discussed it some more. For the dictation, I read aloud several false statements about the Biblioburro to my students and they were required to transcribe and then correct the information.
- First, I had to make sure that my students knew all of the information that they would be required to correct. Since we had previously read the article, I had to ensure that we had discussed in class all of the facts that appeared on the assessment. If students didn't already know the material from the reading, they should have learned it by the time we finished discussing it; if they still didn't know it, that was a direct reflection of their listening comprehension. If they already knew the information because they had read it, the task still assessed their listening comprehension because they had to understand the statements that I made.
- Next, I had to make sure that they didn't miss questions because of poor memory. This particular assessment required students to know chunks of information (like where Biblioburro is from); knowing those facts was necessary to be successful on the assessment but in no way affected their ability to comprehend spoken Spanish, so I had to control that variable. I did this by giving the assessment IMMEDIATELY after a discussion of those facts. Even though I agree with Laura Terrill that most things should be assessed once time has passed, to see if structures have been acquired, in this case I just wanted to know if students understood what was being spoken. It doesn't matter if they remember it tomorrow or not, as long as they get it NOW. If I were giving them a quiz about Biblioburro for a "Culture" grade, then I would want to give it time to make sure that they had truly learned the material and stored it in their long-term memory.
- Since the statements contained errors that needed to be corrected, I had to make sure that an incorrect response was not the result of an inability to PRODUCE (write/speak) in Spanish. Students had to be able to identify the error in what they heard, but if they could not say how to correct it in Spanish, that was okay. I let them say it in English. As long as they demonstrated that they understood what I said (both in the statement and in the discussion) by knowing how the statement should be corrected, even if they couldn't express the correction in Spanish, they received full credit for listening comprehension.
It was much easier to develop assessments when my students received homework, test, project, quiz, and classwork grades! It didn’t matter what the assignment was, just when it was done! But the extra effort is worth it for my students and me to have a clearer, more detailed understanding of their Spanish proficiency.