Do your assessments pass the ACID test?

It’s been well established in the literature around professional learning communities that team-developed common assessments can serve as powerful tools to monitor students’ level of proficiency in the essential standards (DuFour et al., 2016). Common assessments, particularly those that are formative in nature, provide actionable data to teams focused on learning. Using these assessments, teams take action by examining which of their students attained proficiency on the skills and concepts deemed most essential and providing additional time and support to those students. They take action when the data reveals the instructional strategies that appear to be most effective across the team. They take action by using the information to provide frequent and specific feedback to students, a key strategy found highly effective in strengthening learning and developing a growth mindset (Hattie, 2013; Dweck, 2008).

Few educators would dispute the benefits of common assessments for monitoring student learning and informing instructional practice. Yet many teams are reluctant to design these powerful tools for fear that they may not be of high quality or even “appropriate” in light of the more rigorous standards adopted by states and reflected in high-stakes assessments.

The reality is that teachers are best qualified to design assessments that monitor what they are teaching in their classrooms, and therefore they must be engaged in the design process. The other reality is that the targets for learning have changed and do require that teams reconsider the design of their formative and summative assessments. So how can teams build confidence that the assessments they design and use will “hit the mark”? Here’s a simple guide, called the “ACID” test, that your team can use as you examine your assessment items. Each letter in the word ACID relates to an element of quality assessments and includes a guiding question as well as suggestions for what teams can do related to that element. Using the ACID test can empower your team to design high-quality, appropriate assessments that lead to the actions that will increase student learning of what’s most important.

A – Alignment

Guiding question: Is the assessment aligned to the context, content, and rigor/complexity of the standards?

What teams can do:

  • Look at the language of the standard and the learning targets (from the unwrapped standard) in comparison to the task. Are the thinking types on the assessment aligned to those targets?
  • Do the various items target the various levels of rigor or application (e.g., DOK) represented in the learning targets? For example, is the difficulty of the task or questions at the same level as the targets?
  • Examine any exemplars related to your targeted level of complexity; is the level of scaffolding or cueing appropriate?
  • Is the designated level of mastery or proficiency appropriate and aligned?

C – Clarity

Guiding question: Are the items on the assessment clearly written?

What teams can do:

  • Read the prompt and any distractors provided. By completing this task as written, will students be demonstrating the skills and concepts you are targeting?
  • Will students understand what you want them to do?

I – Informative

Guiding question: Will this assessment be informative and meaningful about student learning?

What teams can do:

  • Will teams benefit from gathering data on these learning targets in this fashion?
  • Will specific information on learning targets steer teams toward meaningful interventions/support?
  • Will this assessment be an opportunity to provide student feedback?

D – Design

Guiding question: Is the assessment designed to reflect and support the demands of the state standards and assessments?

What teams can do:

  • Will the items ask students to show what they know in a way similar to high-stakes assessments?
  • Are students asked to provide reasoning for their answers?
  • Are they looking for evidence?
  • Are they digging into information in a variety of texts/sources?

 

References:

Bailey, K., & Jakicic, C. (2016). Simplifying common assessment: Practical strategies for professional learning communities at work (in press). Bloomington, IN: Solution Tree Press.

DuFour, R., DuFour, R., Eaker, R., Many, T., & Mattos, M. (2016). Learning by doing: A handbook for professional learning communities at work (3rd ed.). Bloomington, IN: Solution Tree Press.

Dweck, C. S. (2008). Mindset: The new psychology of success. New York, NY: Ballantine Books.

Hattie, J. (2013). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Taylor & Francis.

The Power of Going Vertical

 

A great deal of our work to improve student learning is accomplished through powerful conversations at the team level. Collaborative teams answer the guiding question, “What do we want students to know and do?” by identifying essential standards and examples of proficiency in those standards. As they teach, they monitor whether their students have learned those skills by collaboratively examining evidence and then identifying students who need additional assistance. Through ongoing reflection, they gain insights into effective instructional practices that impact their students’ learning. Generally, collaborative teams are horizontal in nature—in other words, the members of the team are working with students at the same grade level or within the same course. However, schools can periodically expand beyond the horizontal team level and engage the entire school in these powerful conversations. By periodically building in vertical conversations, schools can maximize their efficacy and ensure that they are building a staircase of proficiency in student learning.

Vertical conversations can be structured to address student learning at a variety of levels. Here are three types of vertical conversations and descriptions for their implementation:

 

Vertical Articulation of Essential Standards – After horizontal teams identify their draft essential standards, they create posters. Grade-level posters of draft essential standards are arranged around the room. Teams are equipped with sticky notes and pens and stationed at their poster. Given a signal, teams move in a clockwise fashion to the next grade level. Teams examine the next grade’s essential standards, looking for patterns, omissions, redundancies, or any questions that might need to be clarified, and leave messages on the poster using their sticky notes. (Teams can use prompts such as “We saw…” or “We wondered…”) Teams move from poster to poster, repeating the conversations until they reach their original grade-level poster. Together, they read the comments and feedback and discuss implications for any revisions or clarification.

Vertical Articulation of Expectations for Student Learning/Rigor – Horizontal teams bring exemplars or expectations of quality for their essential standards in a particular strand of literacy, math, or another content area. The exemplars can be glued onto a ledger-size piece of paper. Teams examine the level of rigor across the grades by “passing the papers” from one team’s table to the next and providing feedback about the level of rigor they see in each exemplar. As a school, the staff discusses the progression of expectations and rigor to ensure that they are building a coherent staircase of proficiency from grade to grade.

Vertical Articulation of Student Work Samples – At key times during the year, teams bring artifacts of the work that students have produced in the focus area. Using the “pass the paper” strategy, teams provide feedback about what they notice. Through the process, common patterns in student responses can be identified, and teams can discuss effective instructional strategies to improve learning in that area.

Vertical conversations can strengthen the school as a system and can be a powerful vehicle for building a staircase of proficiency as students move from grade to grade. Building upon the work of horizontal teams, vertical conversations promote a wide-angle view of student learning and further promote collective responsibility for the students within a school. Start looking for the opportunity to go vertical!

 

Bailey, K., & Jakicic, C. (2016). Simplifying common assessment: Practical strategies for professional learning communities at work. Bloomington, IN: Solution Tree Press.

DuFour, R., DuFour, R., Eaker, R., Many, T., & Mattos, M. (2016). Learning by doing: A handbook for professional learning communities at work (3rd ed.). Bloomington, IN: Solution Tree Press.

 

 

We may not need to reinvent the wheel, but we do need to kick the tires!

Much of my work is focused on supporting teams in the development of common formative assessments. In my experience, teams engaged in this process reach a new level of collaboration–they gain collective clarity about what students should know, and build consensus about the way they will gauge student learning. Yet, designing these assessments takes time—something that’s quite precious, especially as schools and districts transition to the Common Core State Standards. It’s understandable when teams feel overwhelmed. In fact, I am frequently asked, “Do we have to start from scratch? Can we use premade assessment items?”

Here’s my simple answer: We don’t necessarily need to reinvent the wheel, but we do need to “kick the tires.” Rather than blindly adopting premade assessments pulled from an item bank, published in a textbook, or found in a web search, teams need to first examine the items with a critical eye. Kicking the tires involves asking the following questions: “Does this item really align with the learning targets we are teaching in this time period? Is the level of thinking in this item compatible with our targets? Is the structure of the item representative of how we need our students to respond?”

There’s no other way to ensure the alignment, accuracy, and value of premade assessments than by taking the time to examine their content, structure, and quality. Rather than realizing “after the fact” that an assessment missed the mark, teams work smarter when they ensure alignment up front!

Take Time to Smell the Roses

Along with the change of seasons from winter to spring comes the anticipation of new growth—it may be seen in the budding of leaves or the blooming of early flowers. But for many educators, the arrival of spring signals a different type of anticipation. Instead of “Ahh, spring!” the reaction is “Agghh! Spring!” The final stretch of the race is here, what some educators call “crunch time.” We might hear worried statements such as “We still have so much to teach before the end of the year,” “My kids aren’t ready for testing,” or “I can’t get everything done.” In our “busy-ness” and haste, we might not notice the many forms of new growth taking place right before us. Maybe we aren’t taking the time to “smell the roses.”

Why is it important to pause and look at the fruits of our labor? The reality is that teaching is hard work. But when teams examine meaningful feedback and see the impact of that work on their students’ learning, it builds momentum. John Hattie (2015) recently amended the practices highlighted in Visible Learning (2012), a synthesis of over 800 meta-analyses examining the effectiveness of practices in schools and classrooms. A notable addition to his ranking of effective practices is collective teacher efficacy, which he defines as the ability of teachers working together to produce a desired or intended result. According to Hattie’s analysis, collective teacher efficacy ranks as the second-highest factor influencing student achievement, with an effect size of 1.57 (far exceeding the .40 effect size considered the threshold for an effective practice). In other words, when teacher teams recognize the powerful connection between their instructional and assessment practices and the evidence of their students’ learning (that new growth), their sense of effectiveness increases, and so does the ultimate impact on student learning.
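For readers who want the arithmetic behind those numbers, here is a minimal sketch of how an effect size of this kind is commonly computed, assuming the standard Cohen’s d form of a standardized mean difference. (Hattie’s syntheses aggregate effect sizes of this general shape across many studies, so this is illustrative rather than his exact procedure.)

```latex
% Effect size as a standardized mean difference (Cohen's d):
% the gap between two group means, expressed in units of the
% pooled standard deviation of the scores.
d = \frac{\bar{x}_{\text{group}} - \bar{x}_{\text{comparison}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

Read this way, an effect size of 1.57 says the average student in the high-collective-efficacy condition scores about 1.57 pooled standard deviations above the average comparison student; assuming roughly normal score distributions, that places the average such student above roughly 94 percent of the comparison group, far beyond the .40 “hinge point” Hattie uses as his benchmark.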

Do we pause just a moment to relish the feeling of efficacy—the knowledge that our hard work has paid off? Here are some guiding questions that both educators and students can consider to make sure they recognize and celebrate their collective efficacy:

At the school level (During schoolwide conversations):

  • How have teams grown in their use of effective assessment practices so that their students learn more?
  • What can we celebrate in our students’ growth so far based on our evidence?

At the team level (Reflecting during collaborative time):

  • How have we enhanced our students’ learning through our collaboration?
  • On what essential learning targets have our students grown as a result of our team’s focus?

At the classroom level (Individual teacher reflection):

  • How has the use of formative assessment practices impacted the learning of my students?
  • What specific instructional strategies strengthened the learning of my students?

At the student level (Student reflection):

  • How have I improved my learning?
  • What strategies did I use to make this happen?
  • How can I build upon those strengths?

Pausing to acknowledge the impact of our hard work doesn’t mean we stop moving forward. In fact, there’s evidence that making sure we “smell the roses” might actually help to energize our efforts and build that much-needed momentum that will propel us through that end of year “crunch time!”

Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. London: Routledge.

Hattie, J. (2015). The applicability of visible learning to higher education. Scholarship of Teaching and Learning in Psychology, 1(1), 79–91.

Hattie, J. Hattie ranking: 195 influences and effect sizes related to student achievement. Accessed March 1, 2017, at https://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/

Scaffolding is for Educators, too!

As we move into the Common Core State Standards, it’s apparent that we will need to scaffold student learning–in fact, you might have been in on a conversation focused on that very concept. To meet student needs and help them succeed in the more rigorous context found in the CCSS, we’ve discussed the importance of breaking things down into smaller chunks; using modeling, or an “I do, we do, you do” approach, to support their movement toward proficiency; and gradually reducing support over time. Doing so creates a safe environment by controlling the struggle and ensuring that the “muscles” are developed over time, building competence and confidence to ultimately tackle the challenge independently. No argument, right?

Today, I had the opportunity to work with a team of teachers participating in a lesson study.  Together, they were grappling with the design of text-dependent questions that they would be asking students as they engaged in a close read of an article.  Mind you, all of the teachers on the team had attended a well-designed workshop in which they were actively engaged in learning the newer skill of  text-dependent questioning.  What I expected to see was fairly smooth sailing.  What I observed was a fair amount of struggle.  The team struggled with how best to approach the design of their questions.  They struggled with how much support and background knowledge to build prior to engaging their students in the process.  They struggled with the need to be right–to do it perfectly the first time, even though as part of the lesson study process, they would be refining their lesson once they gathered evidence from the students the first time around.

As educators, we need to follow our own advice when expecting teachers to shift their practices in multiple ways, such as embedding rigorous problem-solving or ensuring that students are digging into complex text across content areas by asking text-dependent questions. How are we supporting teachers’ understanding and use of their new skills? Are we expecting them to “get it” after a single workshop? Are we expecting mastery of application without struggle? Simply put, we can’t. We’re asking teachers to change their practice, and it will be a journey, not an overnight happening. We need to help them be aware of and embrace the fact that they will struggle, and be there to support them when they do. One of the best things we can do when designing our professional development is to anticipate the struggle and create some supports to help teams “muddle through”: things such as quality models, videos of strategies in action, or, ideally, coaching. In addition, by intentionally “chunking” our professional development into smaller learning targets, we ensure that teachers aren’t overwhelmed with new information or strategies until they’ve had some time to struggle, practice, and integrate their learning of the previous strategies. In other words, we need to remember to give our teachers the same consideration that we want them to provide their students.

October 6, 2011 Post on All Things Assessment

Reaching the Tipping Point Through Common Formative Assessment

By Kim Bailey, coauthor of Common Formative Assessment: A Toolkit for PLC at Work™

When a school begins its journey to become a professional learning community, it’s fairly common for teacher teams to experience a lack of clarity about their purpose. Not only are they unclear about how they’re supposed to spend their time, they might even be questioning the premise that working collaboratively could lead to improved student learning.

In an effort to comply with administrative directives, many teams simply go through the motions of developing key products. While these are all valuable activities, teams may engage in them with little understanding about “why” each of the products was created, or the role they play relative to improvements in student learning. Upon completion of each product, teams might perceive that their task is indeed finished. In fact, you can almost hear the brushing of hands and the declaration that they are “done” with the work.

  1. Create team norms. Check!
  2. Define your essential standards. Check!
  3. Develop your SMART goal for the year. Check!

In our work to support the development of professional learning communities, we’ve seen a consistent pattern emerge. The tipping point at which members truly realize their potential impact as a team is when they design and implement common formative assessments. Each step inherent in the common formative assessment process provides focus to the team and a direct connection to student learning.

For example, through the unwrapping process, teachers become collectively clear on what students will know and do as a result of their teaching (Bailey & Jakicic, 2011). In the next step, designing assessments that are aligned to these learning targets, teachers become clear about the “end in mind” and as a result, instruction becomes far more intentional and aligned (Ferriter & Graham, 2008).

As student learning data is gathered and analyzed using these meaningful assessments, teams actually see the fruits of their labor and “own” the results. The connection between their efforts and improvement in student learning is clear. We call this efficacy. By going through this cycle of improvement, teams experience that what they do makes a difference. As a result, rather than merely complying and completing “activities” or holding random discussions around teaching, teams shift and begin focusing on how their teaching actually impacts learning. This creates the momentum necessary to keep moving forward in the improvement process with new focus and intention.

So here’s the bottom line. We need to make sure that teams don’t get stuck muddling through the completion of discrete tasks or checklists that are viewed as disconnected from the improvement process. Let’s empower teams with tools and support in the design and use of common formative assessment so that they can begin realizing the power that comes from meaningful work to improve student learning. Let’s help them experience efficacy.

Sources:

Bailey, K., & Jakicic, C. (2011). Common formative assessment: A toolkit for professional learning communities at work. Bloomington, IN: Solution Tree Press.

Ferriter, W., & Graham, P. (2008, Summer). One step at a time: Many professional learning teams pass through these seven stages. Journal of Staff Development, 29(3), 38–42. Oxford, OH: National Staff Development Council.