With gratitude for Natalie Wexler’s work and the learning we’re building together.
In her recent post, "What Do We Mean by 'Reading Comprehension Instruction'?", Natalie Wexler surfaces a critical tension in education today: how we talk about reading comprehension versus what we actually do to assess and teach it.
She writes:
“We talk about comprehension as though it’s a set of skills—finding the main idea, making inferences, identifying the author’s purpose—that students can deploy on any text, provided it’s at the right ‘level.’ But in reality, comprehension depends on how much you know about the topic.”
This distinction—that reading comprehension is less a generic skill and more a reflection of content knowledge and vocabulary—is central to the work we’ve been doing across our fourth- and fifth-grade learning teams this spring.
Why Assess Comprehension?
When we say we want to assess reading comprehension, what do we actually mean?
We want to know:
- Can students identify who’s who and what’s happening?
- Can they make sense of characters’ motivations, emotions, or decisions?
- Can they connect ideas across a chapter—or a whole book?
- Can they talk about and write about what they’ve read in ways that show they understand?
But here’s the challenge Wexler points to: if students lack the background knowledge or vocabulary to access the text, our “comprehension” assessments may be measuring something else entirely. Fluency. Decoding. Language exposure. Or simply familiarity with how school asks questions.
Our Goals for This Work
This spring, our Learning Team (Mary Jacob Harris, Laurel Martin, Sarah Mokotoff, and Samantha Steinberg), along with Marsha Harris (Director of Curriculum) and Jayna Cook (Literacy Instructional Specialist), committed to a learn-by-doing professional development series built around a shared inquiry into these essential goals:
- I can teach students to use their accommodation (Learning Ally) along with printed text to improve their comprehension.
- I can help students build habits and implement strategies to improve comprehension, annotation, and reading.
- I can prototype and innovate assessments to track growth over time and notice what works and what needs improvement.
We started with a prototype I designed using Bud, Not Buddy (Chapters 11–13) so that these motivated, dedicated educators would have a concrete example of the task ahead of us. All of us have read Bud, Not Buddy; none have read Starry River of the Sky.
Our goal is to support ten fourth-grade students who have learning support accommodations as they work with the base classroom materials from Bookworms paired with Starry River of the Sky. This text offers rich character development, layered themes, and opportunities for both literal and inferential understanding.
Prototype Design
We focused on three comprehension domains:
- Characters
- Literal knowledge
- Inferential understanding
Using a familiar multiple-choice structure (inspired by Jen Jones’ work), we drafted five questions in each category. Each assessment is:
- Machine-graded, with results visualized through Looker Studio (a rough sketch of the scoring step follows this list)
- Shared with students, the Learning Team, and other adults to support planning and response
- Used formatively, not just to measure, but to guide instruction and intervention
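For concreteness, here is a minimal Python sketch of what that machine-grading step could look like: it scores each student's multiple-choice responses against a per-domain answer key (five questions per domain, as described above) and writes one row per student per domain to a CSV, the kind of flat table a Google Sheet feeding a Looker Studio dashboard could consume. The answer key values, question IDs, and sample responses are all invented for illustration; our actual form and grading tool may differ.

```python
import csv

# Hypothetical answer key: five multiple-choice items per domain,
# mirroring the Characters / Literal / Inferential structure above.
ANSWER_KEY = {
    "characters":  {"q1": "B", "q2": "D", "q3": "A", "q4": "C", "q5": "B"},
    "literal":     {"q1": "A", "q2": "C", "q3": "C", "q4": "B", "q5": "D"},
    "inferential": {"q1": "D", "q2": "B", "q3": "A", "q4": "A", "q5": "C"},
}

def score_by_domain(responses):
    """Return percent correct per domain for one student's responses."""
    scores = {}
    for domain, key in ANSWER_KEY.items():
        correct = sum(
            1 for q, answer in key.items()
            if responses.get(domain, {}).get(q) == answer
        )
        scores[domain] = 100 * correct / len(key)
    return scores

def export_rows(students, path="scores.csv"):
    """Write one row per student per domain; a sheet built from this
    file can serve as a Looker Studio data source."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["student", "domain", "percent_correct"])
        for name, responses in students.items():
            for domain, pct in score_by_domain(responses).items():
                writer.writerow([name, domain, pct])

# Example with one student's (invented) responses:
students = {"student_01": {
    "characters":  {"q1": "B", "q2": "D", "q3": "A", "q4": "C", "q5": "A"},
    "literal":     {"q1": "A", "q2": "C", "q3": "C", "q4": "B", "q5": "D"},
    "inferential": {"q1": "D", "q2": "C", "q3": "A", "q4": "B", "q5": "C"},
}}
export_rows(students)
```

One row per student per domain (rather than one wide row per student) keeps the dashboard flexible: Looker Studio can then filter, group, and chart by domain without any reshaping.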
We are documenting the strategies and techniques students use, as well as how they access texts, with and without accommodations like Learning Ally and Mote.
What We’re Learning So Far
In our March 12 meeting, we:
- Reviewed the Bud, Not Buddy Ch. 11–13 prototype. (This is a copy so you can "take" it, although your responses will not be graphed.)
- Agreed that vocabulary is already supported in Bookworms through Super Sentences and other routines, and is therefore not needed in our assessment.
- Finalized our focus on the three domains above.
- Began setting up the Looker Studio dashboard for visualization and analysis.
Then one of our fifth graders, Brooks, piloted the prototype with Laurel and me. Because of the strong relationship between student and teacher, Brooks gave us important feedback on his experience. #Awesome

The pilot gave us our first real insight into how the assessment was functioning. We saw:
- Stronger performance on literal knowledge
- More variability in inferential understanding
This raised an important question: Should our metrics be different by category?
If we use the same “cut score” or mastery threshold across all domains, we may miss the nuance that Wexler urges us to pay attention to. Inferential reasoning is harder. It may require more support. Should we treat it the same as identifying literal facts from a paragraph? Probably not.
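To make the question concrete, here is a small, hypothetical sketch of what domain-specific cut scores could look like. The thresholds below are placeholders we would tune from pilot data, not settled benchmarks.

```python
# Hypothetical per-domain mastery thresholds (percent correct).
# With five questions per domain, 80 means 4 of 5 and 60 means 3 of 5.
CUT_SCORES = {
    "characters": 80,
    "literal": 80,
    "inferential": 60,  # a lower bar while we scaffold inference
}

def mastery_flags(scores):
    """Compare per-domain percent-correct scores against
    domain-specific cut scores instead of one shared threshold."""
    return {
        domain: scores.get(domain, 0) >= cut
        for domain, cut in CUT_SCORES.items()
    }

# A student strong on literal knowledge but more variable on inference:
# a uniform 80 cut would flag inferential (60) as non-mastery, while
# the per-domain cut above treats 3 of 5 as on track.
print(mastery_flags({"characters": 80, "literal": 100, "inferential": 60}))
# {'characters': True, 'literal': True, 'inferential': True}
```

The design choice here mirrors the question itself: keeping the cut scores in one visible table makes the mastery criteria an explicit, discussable decision rather than a hidden default.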
What’s Next
Before our next meeting, we’ll review sample data from Brooks and others in Looker Studio. We’ll ask:
- Do we have the right number and types of questions?
- Should the benchmarks for success be different for literal vs. inferential questions?
Also, before our next meeting, we will read the first two chapters of Grace Lin’s Starry River of the Sky.
Where We’re Headed
Wexler reminds us that comprehension isn’t a bag of tricks—it’s the product of knowledge, language, and engagement. By pairing well-chosen texts with thoughtful assessment prototypes, we’re designing a system that helps us see students’ strengths, name their strategies, and support their growth.
This isn’t just about scores. It’s about insight.
We want every student to understand what they read.
And we want every teacher to have the tools to know when that’s happening—and when it’s not.
We’re learning as we go.
We’re experimenting, learning by doing.
Wexler, Natalie. “What Do We Mean by ‘Reading Comprehension Instruction’?” Minding the Gap, 6 Apr. 2025, nataliewexler.substack.com/p/what-do-we-mean-by-reading-comprehension. Accessed 12 June 2025.