*The Math Minds Initiative began with the goal of improving mathematics instruction at the elementary level, with an eye to all learners continuously extending their mathematical understanding. The project has integrated findings from the cognitive sciences and six years of empirical data to develop both a teaching model (the RaPID model) and a framework for lesson observation and analysis. The model identifies essential elements of effective mathematics lessons, and the framework offers a lens through which both teachers and researchers can attend to and analyze the interplay of resource, teacher, and learner in the context of individual lessons, with consideration of how they are woven into a broader learning trajectory. When project teachers have used the model/protocol in collaboration with a resource that clearly identifies and weaves together key mathematical ideas, we have seen a powerful impact on student learning. The chapter elaborates on the development of this framework, with particular attention to contemporary literature from the cognitive sciences that has oriented the project; it also includes a critical review of hundreds of theories of learning, their assumptions about the nature of mathematical knowledge, and their advice for teaching. We conclude the chapter by elaborating on implications for teacher professional development.*

## Introduction

Since 2012, the Math Minds initiative has focused on improving mathematics instruction at the elementary level through the collaboration of different organizations, including the University of Calgary, three Alberta schools, and JUMP Math – a not-for-profit organization that develops teaching materials for mathematics instruction. During its first five years, the initiative included teacher professional development and weekly mathematics lesson observations for 31 participant teachers, along with the video-recording of more than 300 lessons. The study also included a longitudinal analysis of student performance in mathematics, as measured by the Canadian Tests of Basic Skills (CTBS; Nelson, 2019), 44 teacher interviews, and 228 student interviews.

Findings from this initiative include a consistent improvement in student performance on the CTBS mathematics components. Using a Linear Mixed Model (West, 2009), we observed a sustained longitudinal improvement in student performance (Preciado, Metz, & Davis, 2019). The improvement was particularly noticeable in the areas of conceptual understanding and problem solving. These results are consistent with classroom observations and students’ interview responses, which showed students engaged in mathematical tasks that extended their understanding during class; many interview responses also indicated a positive attitude towards mathematics.
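For readers unfamiliar with the statistical approach mentioned above, a linear mixed model handles repeated measures on the same students by combining a shared (fixed) trend with per-student (random) effects. The sketch below is purely illustrative, not the project’s actual analysis: it fits a random-intercept model to synthetic test scores using the `statsmodels` library, with all data and variable names hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: 60 hypothetical students tested in 4 consecutive years.
rng = np.random.default_rng(0)
n_students, n_years = 60, 4
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_years),
    "year": np.tile(np.arange(n_years), n_students),
})
# Each student gets a random baseline (mean 50) plus a shared gain of 2 points/year.
baseline = rng.normal(50, 5, n_students)
df["score"] = baseline[df["student"]] + 2.0 * df["year"] + rng.normal(0, 2, len(df))

# Random-intercept model: a fixed effect of year, with a per-student intercept.
model = smf.mixedlm("score ~ year", df, groups=df["student"])
result = model.fit()
print(result.params["year"])  # fixed-effect slope: the estimated yearly gain
```

The fixed-effect slope on `year` estimates the average yearly gain across students, while the random intercepts absorb stable between-student differences, which is what makes such models suited to longitudinal achievement data.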

In a second stage of the initiative, we used the results from the first stage to inform the development of the Raveling, Prompting, Interpreting, and Deciding (RaPID) model for teacher professional development. This model includes a Framework for Lesson Analysis (RaPID FLA) intended to provide feedback to teachers as a means of supporting their professional development. Notably, some of the challenges we faced in the development of the model are related to what we call “contemporary obsessions” in education. Such obsessions include general strategies such as group work, the promotion of personal strategies, or the use of multiple representations that, although important, are unlikely to support the development of a coherent and robust mathematical understanding unless they are accompanied by critical elements of teaching and learning. The Math Minds approach contrasts with both traditional and reform perspectives (Metz et al., 2016) in ways that may not be obvious at first sight but which have profound implications. For this reason, we decided to begin this chapter by situating our work theoretically through a critical review of hundreds of theories of learning, their assumptions about the nature of mathematical knowledge, and their advice for teaching, culminating in a unique, scientifically grounded blend of insights pertaining to mathematics pedagogy. We then describe the RaPID model and elaborate on the development of the RaPID FLA tool. We conclude the chapter by discussing implications for teacher professional development.

## Situating Ourselves Theoretically

We wanted our work to be theoretically defensible and to have theoretical breadth. Unfortunately, given the vast array of perspectives on learning and teaching that are at play in the field of education, these two qualities can sometimes work in opposition. Our strategy to deal with that issue was to survey and critically assess all the theories, principles, and metaphors about learning that we could find at play in the mathematics education literature.

Our current count of reviewed and analyzed perspectives has surpassed 500. With a view toward constructing a *map* of education’s epistemological landscape, our analyses comprise reading both original- and secondary-source material, identifying foci, crafting concise summaries, discerning grounding metaphors (for knowledge, knowing, learning, learners, and teaching), assembling prominent criticisms, and linking these to close relatives. We also attempted to assess the scientific status of each perspective, the principal criteria for which were:

- That assumptions and metaphors are made explicit (e.g., many theories rely on implicit and largely indefensible metaphors, such as ‘learning as acquiring’ or ‘brain as computer,’ rendering them formalizations of common sense, but offering limited useful insight into the complex dynamics of learning); and
- That there is a supporting body of evidence with no contradictory evidence (e.g., many theories of ‘learning styles’ and ‘cognitive styles’ are present in the literature, despite a lack of evidence; see Willingham, Hughes, & Dobolyi, 2015).

The map, we hoped, would reveal major fault lines and important seams in current understandings of learning. We were oriented in this aspect by some initial suspicions based on past work. For the most part, these suspicions were verified through the analysis and are manifested in the map’s structure. For instance, we anticipated that a useful contrast would be the bifurcation of ‘correspondence theories’ and ‘coherence theories.’

A *correspondence theory* is an epistemological perspective that assumes a radical separation of mental (or internal, or mind-based) and physical (or external, or body-based) worlds. That separation sets up the need for a *correspondence* between what is happening outside the knower in the real, objective world and what is happening with the knower’s inner, subjective world. Most correspondence theories are developed around object-based metaphors (e.g., knowledge is seen as a thing, a commodity, bits of information, a fluid, and/or a product/outcome). Typically, correspondence theories rely on linear/direct imagery, rigid binaries/dichotomies/dualisms, and cause–effect Newtonian mechanics. Many correspondence theories develop around and focus on taxonomies and concern themselves with separating and classifying.

In contrast, *coherence theories* are perspectives on learning that regard distinctions and descriptions as useful devices to make sense of the complex dynamics of learning – but they are oriented by the caution that such devices are mere heuristic conveniences. Truths are not cast in terms of correspondences (e.g., between theories and actuality, or between subjective models and objective reality), but as coherences: elements that contribute to and rely on a larger whole – interpretations that together constitute a consistent, extensive body.

In this context, a statement is true to the extent that it is a necessary element of a systematically coherent whole. In other words, coherence theories suggest that truths do not exist independently or outside of a system – a commentary on humans’ understanding of reality, not on reality itself. Most coherence theories employ biological and ecological metaphors, with dynamics framed in evolutionary terms and relationships framed as couplings, complementarities, and nestings.

The correspondence–coherence contrast serves as the horizontal axis of our mapping. As illustrated in Figure 13.1, the correspondence theory region was further subdivided into ‘mentalisms’ and ‘behaviorisms.’ Mentalisms encompass perspectives that assume a separation of mental from physical worlds (inner from outer, subjective from objective, etc.) and cast learning in terms of mental images, models, encodings, or other inner representations of the existing world. Some sort of barrier – typically the body, or fallible senses, or faulty subjective interpretations – is seen to prevent direct, first-hand knowledge of reality. For mentalisms, the measure of truth is the extent to which internal representations match with external reality.

Behaviorisms, their counterpoint, reject the notion that knowledge is some sort of external, stable, and context-free object that exists independently of knowers by redefining personal knowledge as established and stable repertoires of behavior that are triggered by events in the world. Seeking to establish a scientific basis for their claims, behaviorisms rejected attempts to explain learning in terms of unobservable mental processes, opting instead for phenomena that can be observed and measured. Originally oriented by the metaphor of a telephone switchboard (and, specifically, the activity of linking nodes), learning was imagined in terms of establishing a network of causal relations between stimuli and responses. That network is seen as conditioned by and reflective of the real world but not necessarily representative of it.

The coherence portion of the horizontal axis is organized into three nested regions: embodied theories, embedded theories, and eco-complex theories. The term ‘embodied theories’ serves as an umbrella reaching across perspectives that refuse a separation of internal and external. Mental and physical are understood as integrated and inseparable aspects of the body. Phrased differently, the body is not seen as something that a learner learns through, but as the learner. Correspondingly, behaviors are not seen as goals or indications of learning, but as integral elements of learning. The phrase ‘embedded theories’ reaches across perspectives that refuse separations of ‘self’ from ‘other’ and ‘individual’ from ‘collective.’ Perceived boundaries among persons and peoples are therefore understood as heuristic conveniences, as collective phenomena are recognized to unfold from and to be enfolded in individual phenomena. In other words, collective forms are understood as learning bodies.

Finally, the category of ‘eco-complex theories’ comprises perspectives on learning that refuse separations of human from nature, material from transcendent, and part from whole. Across eco-complex theories, learning is understood as synonymous with evolution.

Figure 13.1 presents a simplified graphic of our map’s *x*-axis. Its *y*-axis was more emergent, arising as a surprisingly sharp distinction between perspectives focused on the nature or dynamics of learning and those more concerned with the pragmatics of teaching. Our vertical axis is thus defined by the categories of ‘theories of learning’ and ‘theories of influencing learning.’ In Figure 13.2, we offer provisional terms to occupy the resultant cells.

The next step was to place each of the theories we reviewed on the evolving map, as we continued to review others. As more and more were placed, emergent clusters began to present significant subthemes (e.g., ‘extrinsic motivation theories,’ ‘identity theories,’ ‘activist theories’). Figure 13.3, a work in progress, represents the current state of affairs. That version also makes use of color coding to distinguish among folk theories (red), quasi-scientific or limited-scientific theories (amber), and scientific theories (green). The following are among our important realizations from this work:

- Most theories of learning in education are actually theories of *influencing* learning;
- Most theories that actually offer accounts or characterizations of the nature and dynamics of learning are rooted in popular (uncritical, typically implicit, usually indefensible) metaphors; and
- With regard to those theories that are principally concerned with influencing learning, recommendations are often piecemeal. In particular, many pieces of advice for educators (e.g., group work, personal strategies, learner explanations, growth mindset) are anchored to one or two defensible principles, but oftentimes in ignorance of a broader array of issues and insights, resulting in claims and advice that can lack an important level of rigor.

The evolving map is useful for locating our work amid the persistent North American “math wars,” in which traditional structures (teacher-centered, procedure-focused, outcomes-based) are set against a reform agenda (student-centered, understanding-focused, relevance-based). From our analysis, and roughly speaking, the traditional camp tends to be grounded in the discourses located on the left third of the map, and most reform arguments are grounded in the middle third. Distinct from both of these, we find our principal influences in the lower right quadrant of the map.

With regard to theories of learning, we anchor this work in cognitive science, the interdisciplinary study of cognition across all learning/thinking entities. In terms of more specific theories of learning that are consistent with (and, for the most part, subsumed by) cognitive science, the following are among our principal influences:

- Embodied cognition – that is, understanding knowers as culturally situated, biological beings (Varela, Thompson, & Rosch, 1991);
- Socio-cultural theory – that is, understanding learners as socially active and culturally situated beings (Vygotsky, 1986);
- Spatial reasoning – that is, translating bodily motions into abstract, conceptual tools (Davis and the Spatial Reasoning Research Group, 2015);
- Conceptual metaphor theory – that is, attentive to the role of metaphor in weaving and maintaining webs of meaning (Lakoff & Johnson, 1999);
- Conceptual blending theory – that is, positioning the combining of metaphors as a core process in learning and creativity (Fauconnier & Turner, 2003).

As for theories of *influencing* learning, the following have proven particularly useful:

- Affordance theory – that is, exploiting possibilities for action and sense-making that arise for an individual in an environment (Gibson, 1979);
- Variation theory – that is, channeling attentions and managing associations by exploiting habits of perception (Marton, 2014);
- Mastery learning – that is, structuring and pacing to positively affect achievement and understanding (Bloom, 1968);
- Meaningful learning – that is, strategies that emphasize building on established understandings, learner engagement, inquiry, and empowering learners (Novak, 2002); and
- Expert–novice research – that is, insights into teaching gleaned from attending to differences between the habits and abilities of experts and novices (Ericsson, Charness, Feltovich, & Hoffman, 2006).

In the next section, we offer some more fine-grained detail on how threads drawn from these perspectives are woven together in our model of teaching and research tools. For now, in somewhat coarser terms, and tying together the theories listed above, we link mathematics knowledge, mathematics learning, and mathematics teaching in the following way: we regard mathematics *knowledge* as comprising “principles” (i.e., stable and patterned aspects of existence) and “logics” (with analogical and deductive processes figuring most prominently). In parallel, mathematics *learning* involves noticing principles and integrating them through appropriate logics. For us, this means that mathematics *teaching* is about orienting learner attentions to key principles and juxtaposing those efforts in ways designed to support appropriate integration.

We might thus identify a major component of our work as “trying to change the language” around mathematics teaching – a focus that, not without irony, has on occasion placed us at odds with traditionalists and reformists alike. A consequence of our deliberate effort at unfamiliar characterizations is that both adamant traditionalists and inflexible reformists have argued that we are opposite to them. More descriptively, our model of “structured inquiry” is simultaneously dissonant for those who reject inquiry (typically, and erroneously, equating it with “discovery”) and those who reject an emphasis on structure (typically seeing it as an indication of standardization and teacher-centeredness).

## Description of the Model

The word RaPID is both an acronym and a descriptor. It is an acronym for the four key components: *ra*veling, *p*rompting, *i*nterpreting, and *d*eciding. When all of these are done effectively, a lesson seems to progress *rapidly*, as learners make and integrate key discernments. We first offer some important distinctions necessary for understanding the model; then, we frame these in a matrix that allows a holistic view of how they all fit together. We then elaborate on each of the elements of the matrix. Once we describe the model and each of the elements, we highlight ways that RaPID may be distinguished from (and interrupt) common classroom practices associated with both sides of the traditional–reform dichotomy. Finally, we offer a brief statement about the significance of what we have come to refer to as a ‘teacher-resource partnership’ and the impact this has had on the development and use of the model.

### Key Distinctions

The RaPID model is based on several key distinctions that we introduce here and elaborate on throughout the next section:

- The model prescribes ‘ribboned’ lessons, in which learners have an opportunity to engage with each new idea. These lessons are distinguished from blocked lessons, in which multiple ideas are offered at once before learners have a chance to engage with each. Importantly, blocked lessons may take the form of either long explanations or long chunks of relatively unstructured time during which learners engage with complex problems. Ribboned lessons are neither. For ideas to *co-evolve* (both individually and collectively), there needs to be continuous interaction between learners and the fine-grained ideas that they are weaving into coherence.
- ‘Critical discernments,’ or awarenesses of significant mathematical objects or relationships, are distinguished from both procedural steps and broad mathematical topics. Topics may point to significant mathematical ideas, but they are too broadly defined to support teachers in designing effective lessons. Procedural steps involve attention to pieces and how they are connected, but the pieces are typically rigidly defined actions rather than dynamic awarenesses, and they have value only in the context of a specific procedure (e.g., “Bring down the 5” during long division; “Put a zero in the second row” when performing double-digit multiplication). Critical discernments are fine-grained awarenesses that have mathematical significance beyond an immediate procedure. In working with teachers, we have found it important to clearly articulate these distinctions.
- ‘Raveling’ is distinguished from ‘scope and sequence’ through its fine-grained attention to the gradual integration of critical discernments into increasingly complex webs of association. It is based on the metaphor of a rope that is woven from strands that are themselves woven from smaller strands (and so on; see Figure 13.4). Raveling is significant on multiple time frames: the moment-by-moment associations that take place during a lesson are integrated into more complex associations within a single lesson as well as over the course of longer lesson sequences within and between grades. Key to effective raveling is ensuring that learners are invited to attend to associations between previously discerned ideas. In other words, they should not be expected to braid the strands at the same time that they are trying to weave the strands into a rope. Learners who are asked to form associations between ideas that are themselves poorly understood are sometimes described as having limited ability to reason, but their reasoning presents as strong when they are making associations among ideas that are well understood.
- ‘PID’ cycles of *p*rompting awareness of critical discernments, *i*nterpreting learner awareness of those discernments, and *d*eciding how the lesson might proceed are distinguished from other structures for teaching and assessing through their attention to *all* learners, *all* key discernments, and awareness of emergent possibilities. In that sense, the effectiveness of PID cycles is deeply tied to careful raveling and ribboning. Effective prompting is also deeply tied to awareness of how effective patterns of variation can be used to draw attention to key ideas and relationships. Prompting, interpreting, and deciding are further elaborated in the following section.

### The RaPID Matrix

We have found it helpful to consider these distinctions in terms of a matrix that separates the impact of PID cycles and raveling, as presented in Figure 13.5. This matrix is also referenced in the first section of the RaPID FLA in Appendix A.

Quadrant 1 of the matrix describes lessons in which clearly raveled content is clearly prompted, learner awareness is continuously interpreted, and the teacher’s awareness of learners’ sense-making informs the unfolding ravel, the nature of prompts, and even the manner of gathering information about learners’ sense-making. Here, there is a co-evolution of personal and shared mathematical knowledge, and lesson structure and emergent mathematical knowledge.

Quadrant 2 describes lessons where mathematical discernments have been clearly separated for attention, but effective learning is interrupted by ineffective prompts, lack of awareness of how learners make sense of those prompts, and/or ineffective decisions about how to proceed with the lesson. Such breakdowns may be associated with *incoherences* among personal and shared ways of knowing and/or between lesson structure and emergent mathematical knowledge.

In some instances, we have observed lessons that appeared to have high mathematical coherence from the point of view of an observer familiar with the content of the lesson, but learners were not making sense of those ideas. Rather than adjusting prompts or considering discernments not identified in the raveling of the lesson, the teacher offered strong hints that allowed one or more students to give a correct answer without clear understanding – an issue with *interpreting* that is likely to be at least partially rooted in correspondence theories of learning and influencing learning. Such lessons often progress according to a pre-defined plan, an issue with *deciding* that may be similarly rooted. After all, if ideas are seen as having been clearly *offered* and learners as having failed to *pick up* what was *out there*, it can be hard to see alternative paths forward. In RaPID terms, it is likely that the scale of the ravel was not well-matched to the learners, though it is possible that learners would have been able to make intended discernments had more effective prompts been offered.

Quadrant 3 describes lessons where both the raveling and PID cycles are weak – for instance, a lesson in which a teacher offers a full explanation of the steps in the long-division algorithm and then allows time for students to practice would fit in this space. Such a lesson attends to steps rather than critical discernments, and the steps are offered in an instructional block.

Quadrant 4 describes lessons where PID cycles are strong, but they are applied to poorly raveled content. In contrast to the lesson described above, this might take the form of a lesson on performing the long-division algorithm in which the algorithm is separated into steps and learners are required to engage with each step. The teacher interprets whether learners are able to perform each step, and proceeds when they show that they can do so. Extensions may be offered to quick finishers. The structure of the PID cycle is in place, but it is not used to draw attention to critical discernments.

It is important to note that Quadrants 3 and 4 can also describe lessons that at first glance seem more inquiry oriented. For example, a Quadrant 3 lesson might take the form of an open problem that requires learners to integrate ideas that are not yet well-developed. Even if the teacher attempts to support learners as they work, the need to simultaneously develop and link ideas limits what is available to be learned. A Quadrant 4 lesson might take a seemingly open problem and break it into steps to help guide learners to a solution but fail to adequately consider critical discernments necessary for making sense of those steps; it might also fail to consider whether or how awarenesses developed through engagement with the problem might be relevant to future learning. Sometimes such lessons intend to offer practice in general problem solving, but here again, the emphasis on raveling gets lost, and it is unclear whether more general competencies are developed.

When learners have discerned relevant problem features and have access to strategies for systematically varying and integrating those discernments, open problems may offer meaningful opportunities for learning that could be placed in Quadrant 1 or 2.

In summary, the RaPID model offers a set of distinctions that makes it easier to separate key factors that have an impact on the effectiveness of mathematics lessons. Having looked at the broad structure of the model, we will now consider the four elements of the RaPID model in greater detail.

### Raveling, Prompting, Interpreting, and Deciding

*Raveling*. Earlier, we noted that raveling takes place at multiple scales. In school, this may be seen in the moment-by-moment events of a lesson intended to bridge learners’ ways of knowing with shared ways of knowing, as well as in the weaving together of lessons, units, and grades over much longer spans of time. Still more broadly, raveling may be seen to include the emergence of new mathematical knowledge.

As students move from lesson to lesson or grade to grade, careful attention to how different strategies and representations might be *integrated* is an important aspect of raveling. For instance, a student who can balance a scale by adding or taking away blocks from one side of a balance scale, or who can draw pictures to demonstrate those actions, may see no connection between those representations and a numerical equation unless such connections are intentionally developed. Potential interactions between seemingly more disparate areas of mathematics (e.g., number and geometry) may be similarly lost when they are seen as categories of knowledge or “topics” rather than as integrated clusters of discernments that have the potential to interact with other discernments outside of those clusters.

On the scale of a lesson, a key aspect of a strongly raveled lesson is that it identifies relevant features and/or associations between well-understood features. Taking even a few minutes to clarify relevant features can do much to allow all learners to engage in making associations among them. Once this is done, there is less need to pause repeatedly to address the needs of those who are struggling.

On a broader scale, taking the time to ravel key discernments rather than rushing too quickly to topics that are inadequately developed can make a big difference to learners’ ability to make sense of more complex ideas. Recently, the Grade 1 classes in our project began using a revised version of a resource in which addition and subtraction are not formally introduced until much later than teachers were accustomed to. Although the teachers at first expressed reservations about this, they found that by the time they did start working with those ideas, the students’ conceptions of number were better developed, and they were able to make sense of addition and subtraction more easily.

Once significant features are clarified, they can be varied (singly or in combination) to offer opportunities to keep extending and integrating recent insights. They may also be identified as significant structural elements relevant to the conceptual mapping of diverse representations, solutions, or models. Challenging extensions may thus emerge as more complex and/or open considerations of familiar features. Effective raveling, then, includes identifying which elements in a complex web of mathematical ideas and associations might effectively be brought into conversation with a particular group of learners at a particular time. Prompting, interpreting, and deciding may be seen as ways of facilitating that conversation.

*Prompting*. Perhaps not surprisingly, considerations of how to effectively facilitate this conversation draw from theories of influencing learning. Effective prompting draws considerable insight from variation theory, as initially developed by Marton and colleagues in Europe (Marton, 2014; Runesson, 2005; Mason, 2017; Watson, 2017) and separately by Gu and colleagues in China (Gu, Huang, & Marton, 2004; Lai & Murray, 2016). Such work is consistent with findings regarding the limits of working memory but also offers a powerful alternative to theories that rely on a transmission-based view of learning (e.g., Sweller, 2016). Here, a key tension assumes significance: between *reducing the amount of new information* that a learner must attend to at any given moment and *maintaining sufficient information* to allow the contrasts necessary for prompting attention to particular ideas and relationships – and for bringing diverse ideas into conversation.

Marton’s (2014) “Principle of Difference” reminds us to vary what we want to draw attention to against a background that is held, as far as possible, constant. This can take the form of offering examples, non-examples, counter-examples, and non-standard examples to draw attention to defined mathematical objects (“conceptual variation” in the Chinese tradition; cf. Gu, Huang, & Marton, 2004). Variation may also be used to draw attention to the dynamic unfolding of relationships and associations (“procedural variation” in the Chinese tradition; cf. Lai & Murray, 2016). For example, by changing one feature (or variable) and observing corresponding change in another, patterns may be discerned and relationships inferred. By using carefully considered chains of logic, it is possible to support gradual transformations between what is known and unknown. Further, different strategies may be contrasted to draw attention to relative efficiency and/or transparency, and different solutions, representations, or models may be contrasted to draw attention to structural similarities and differences among them. In all of these cases, *contrast* is key; it matters what is set side by side, how the dynamics of sameness and difference are highlighted, and what insights might emerge in these interactive spaces. When careful variation is used in conjunction with clearly raveled mathematical content, powerful and focused sense-making is often made possible.

So far, discussion of effective variation has focused on the *offering* of effective contrasts. A second essential element of prompting is that learners must be required to *make the distinctions and associations* that those contrasts are intended to prompt attention to. This act of engagement is essential to learning. It is important to note that tasks that *prompt* are notably different from practice sets that merely ask learners to *do something related to the topic*. Again, effective prompts require *engagement with intended distinctions and associations*. Such prompts can also offer opportunities for teachers to effectively interpret students’ understanding.

*Interpreting*. Acts of interpreting require questions that allow teachers to clearly distinguish learners’ understanding from lack of understanding. If learners are required to make the distinctions and associations described as essential to prompting, teachers may use this opportunity to interpret their learning. A further criterion is now required, however, which is that learners must offer their responses in a manner that makes it easy for teachers to *perceive* them. This includes considerations of framing questions that allow for a quick check of all students, as well as response systems that allow all students to quickly indicate an answer (e.g., whiteboards, hand signals). Raveling is critical here as the identification of critical discernments helps teachers frame such questions. The RaPID model discourages reliance on responses from a small number of learners as well as strategies that allow learners to simply indicate whether or not they have understood something (as opposed to offering responses that more strongly indicate understanding).

*Deciding*. As mentioned in the discussion of raveling, clearly articulating key features and the relationships among them through clear raveling and prompting offers focus and clarity to both strong and struggling students. Once such features are identified, they can be modified and combined in many ways; in this way, ‘features’ may come to be experienced as ‘variables,’ which at one moment may exist in a particular state but have the potential to exist in many. At this point, learners may begin to “see the general in the particular” (Mason & Pimm, 1984). When task variations are rooted in a few well-articulated ideas, there is also a clearer path from ‘struggling’ to ‘strong.’ When students struggle, clarifying features and relationships can offer meaningful support that makes further challenges accessible. At the same time, even highly challenging extensions can typically be defined in terms familiar to all.

### Interrupting Current Obsessions

As we hinted at when describing the sorts of lessons that might fall within each quadrant of the RaPID matrix, the RaPID FLA clearly focuses attention on aspects of mathematics pedagogy that we have found to make a difference to student learning. At the same time, it offers important lenses for drawing fine-grained distinctions between RaPID and many ideas that are often held up as important by perspectives on one or the other side of a traditional/quasi-reform dichotomy (cf. Davis, Towers, Chapman, Drefs, & Friesen, 2019; Metz et al., 2016). Earlier, we noted that this work has at times placed us at odds with both traditionalists and reformists. We have noticed that unless we explicitly articulate particular points of contrast between RaPID and both of these groups, RaPID descriptors sometimes get interpreted through familiar lenses in ways that are not consistent with our intentions. More encouraging, however, is that clearly distinguishing our model from both traditional and reformist models offers a perspective that speaks to both sides.

On a broad level, even the use of the term ‘structured inquiry’ has at times been interpreted in ways other than what we intend. ‘Structured’ sometimes gets assigned to the traditional pole of prevailing dichotomies, and ‘inquiry’ gets assigned to what is perceived as a ‘reform’ pole; ‘structured inquiry’ becomes a problematic middle or “balance” zone. For example, some take “structured inquiry” to mean some ratio of *rote learning* and *discovery*. In the RaPID model, the proportion of ‘*rote learning*’ should be close to 0%, and the proportion of *sense-making* should be close to 100%. Some take ‘structured inquiry’ to mean guided opportunities for *conjecturing* and *reasoning*. While these are important, RaPID emphasizes the need for these to take place in the context of well-raveled content; they are not treated as generic problem-solving competencies. When ‘structured inquiry’ is taken to mean that students should have structured opportunities to make sense of important ideas, it may be consistent with the RaPID FLA framework. But *how* that content is raveled and *how* attention is prompted to selected ideas are key considerations.

Those with more traditionalist leanings tend to refer to ‘the basics,’ by which they mean a collection of memorized facts and procedures. In Math Minds, we emphasize the importance of critical discernments that will eventually be integrated into other ideas. For example, rather than referring to ‘long division’ as a basic, the RaPID model emphasizes *critical discernments* that are essential not only to long division but also to other mathematical ideas. These include flexible regrouping of numbers in Base 10 and an understanding of multiplicative relationships between numbers and their factors.

While worked examples play an important role in Math Minds, we emphasize the importance of *clear contrast* to highlight what is being exemplified. Rather than merely ‘practicing’ examples similar to worked examples, we focus on ensuring that learners are asked to make the distinctions or associations that were highlighted in the initial examples. For example, by contrasting “trade with no trade” (Figure 13.6) or “1 ten with more than 1 ten” (Figure 13.6, lower part), key critical discernments are highlighted rather than merely offered as steps in a procedure.

‘Practice’ would then involve requiring students to *make* those discernments. Given practice sets where “adding without re-grouping” is always separated from “adding with re-grouping,” such discernments require memory of the previous days’ work rather than direct engagement with *that* discernment. Further, when tasks are grouped in such a manner, it becomes possible to simply keep repeating the same action rather than attending to the desired distinction (i.e., the practice becomes rote). This is not to say that everything must be offered all at once; it is just that there needs to be contrast to highlight the piece that *is* being offered. When first learning to add two-digit numbers, a key discernment is that tens must be added to tens and ones must be added to ones. In *that* case, distinguishing place values should be the point of contrast; for example, contrasting 3 + 3, 30 + 3, and 30 + 30 (then 33 + 3, 30 + 33, 33 + 33) might offer deeper insight, prompt thinking about important distinctions, and offer a clearer opportunity for teachers to interpret learner understanding (see Pang, Marton, Bao, & Ki, 2016). If this distinction is clear, the idea of re-grouping extra tens is more likely to make sense to learners.

Even so, a teacher might first vary the number of leftover ones, then the number of tens being re-grouped, *then* both together with clear emphasis on the *transitions* between these: 13 + 16, 13 + 1**7**, 13 + 1**8**, 13 + 1**9**, 13 + **2**9, 13 + **3**9, **2**3 + 39, **34** + 39, etc. Although key ideas are looked at separately, there is always an intentional point of contrast, and there is attention to what happens when the point of contrast changes. In this way, practice also involves *continuous integration of known and unknown* in a manner that is consistent with the structure of mathematics. This further exemplifies our earlier distinction between merely reducing cognitive load and offering necessary contrasts while attending to the limits of working memory.

Shifting now to the ‘reform’ side of the dichotomy, we often hear the term ‘rich problem’ used variously to describe a problem that offers a ‘real-world’ context, brings together a variety of concepts, requires learners to generate their own examples or strategies, and/or requires them to make, test, and/or justify mathematical conjectures.

‘Real-world context’ can motivate learners and lend insight into mathematical ideas. But what one learner finds motivating may be less familiar and/or of little interest to another. Some contexts include distracting details and/or complex example spaces. In Math Minds, we emphasize the importance of examples that offer clear contrasts and meaningful sequences relevant to a particular *mathematical* discernment. These may or may not be the same contrasts necessary for understanding the scientific problem or social issue that motivated the problem. When the context highlights a relevant example space, it can support the learning of mathematics. In Figure 13.7, the context (riding a train) might be (loosely) considered real-world, but the problem is inherently mathematical and directly focused on the distinction between tens and ones.

In Math Minds, we do acknowledge the value of learners generating their own examples but within carefully defined constraints that ensure that the clear contrasts necessary for making distinctions and associations are not lost (again, raveling matters). We also emphasize the importance of problems that help constrain learner attention to associations among *familiar* ideas rather than expecting them to develop associations among ideas that are themselves poorly understood.

Asking learners to develop ‘personal strategies’ is intended to prompt sense-making and to honor individual ways of knowing. Unfortunately, personal strategies can also be distracting, limiting, and isolating. Here again, a focus on raveling shifts attention to how strategies are developed and linked, both in the moment and over time. This includes honoring students’ ways of knowing while helping them relate their understanding to shared and effective ways of knowing that will support their engagement with increasingly complex mathematics. This can be particularly important when offering intentional transitions from concrete to pictorial to symbolic representations. When symbolic representations are well understood, students often find them *easier* to manipulate than physical objects, and they allow consideration of example spaces much broader than those that are feasible in the physical world.

In a closely related vein, problems with multiple entry points are sometimes offered as opportunities for learners with diverse backgrounds to engage with mathematical ideas at their own level. In Math Minds, we emphasize the importance of clarifying key features such that all learners have access to the core ideas of the lesson. Those features may be varied and combined in many ways, allowing some to extend their understanding beyond what is required.

Finally, ‘rich problems’ are sometimes described in terms of whether they prompt students to make conjectures, reason, exemplify, and justify. While all of these are central to the RaPID model – and indeed are central to working with even the finest-grained contrasts and sequences – it matters *which* conjectures, examples, and arguments are prompted.

So far, most of the distinctions between the RaPID model and other currently popular ideas have focused on raveling, prompting, and interpreting. Given these, however, important differences emerge that distinguish RaPID’s approach to working with diverse learners in the classroom. In the RaPID framework, ‘deciding’ focuses on the decisions teachers make in response to their interpretations of learner understanding. Such decisions typically focus on various ways of ‘differentiating instruction’ to meet the needs of different learners. Sometimes, such attempts aim to remediate, and sometimes they aim to differentiate (see Hale et al., 2016). The RaPID model focuses on how associations might be prompted between learners’ perceptions and conventional ways of thinking about particular mathematical ideas. When done effectively, this can benefit all learners and may even prompt associations that had not been previously considered. Such creative associations in no way undermine the importance of shared understanding of conventional mathematics; the creativity is in the new associations. More clearly separating relevant problem features also makes it easier to extend and combine those variables in ways that may challenge even the most capable students.

The importance of making these distinctions explicit is relevant both for teachers seeking new ways to frame thinking about their practice and for researchers observing those classrooms. The RaPID FLA attends to both of these goals: It has evolved (and continues to evolve) as a tool to support teachers as well as a tool to guide research. Before we elaborate on the evolution of the framework, we articulate an expanded conception of teacher, as we have come to see the vital role of both teacher and resource in a teacher-resource partnership.

### Teacher-Resource Partnership

As we described in the introduction to this chapter, Math Minds began as a partnership between our home university, local school districts, and a (non-profit) resource-development organization: JUMP Math. Extensive interactions between teachers, researchers, and JUMP Math representatives have been essential to the evolution of the RaPID framework and continue to inform the evolution of the resource.

The RaPID model acknowledges the incredible complexity of even seemingly simple elementary mathematical concepts. For this reason, a strong teacher-resource partnership is an essential element of the model. At the elementary level, many teachers are not mathematics specialists; we have observed that a strong resource not only supports teachers in developing coherently raveled mathematics lessons within the grades for which they are responsible, but also increases the coherence between grades. This is further enhanced when there are opportunities for teachers at different grade levels to interact. The nature and quality of such interactions is supported not only by consistency of terminology and approach but also by a clearly discernible relationship between what happens in different grades.

Having described the history of Math Minds, the theoretical underpinnings of the RaPID FLA, and the key elements of the model, we now offer a description of the evolution of the framework as both a model for teaching and a tool for research.

## Development of the Framework for Lesson Analysis

This section includes an overview of the RaPID FLA framework and then elaborates on how it has been developed and how it can be used as a tool for teacher professional development.

### Overview of the RaPID Framework for Lesson Analysis

The RaPID FLA may be viewed in terms of the broad principle of offering contrasts intended to *prompt attention* to and *invite engagement* with carefully selected, sequenced, and integrated (i.e., raveled) mathematical features and relationships (i.e., critical discernments).

*Raveling(a) (Ra-a)* is about identifying critical mathematical discernments and sequencing them in a manner that allows them to be gradually integrated into a coherent whole. (Note: This is difficult to observe in a single lesson and is therefore part of the RaPID model but not of the most recent RaPID FLA.)

*Raveling(b) (Ra-b)* considers whether those discernments are sufficiently decomposed so as to be discernible by a particular group of learners. It is possible for mathematical content to be well raveled for one group of learners but not for the target group. In terms of the rope metaphor introduced in Figure 13.4, ideas are sufficiently decomposed if the pieces being woven together are themselves well understood.

*Prompting(a) (P-a)* is about *offering* meaningful contrasts and *highlighting* key mathematical relationships.

*Prompting(b) (P-b)* is about inviting learners to *make* distinctions and associations among mathematical discernments.

*Interpreting(a) (I-a)* is about *asking questions* that allow the *teacher* to discern understanding from lack of understanding, that is, whether learners have made the intended mathematical discernments. This requires clear prompts and perceptible means of response; it also requires direct evidence of understanding as opposed to students’ reports of whether or not they understand.

*Interpreting(b) (I-b)* is about the teacher *checking* all responses.

*Deciding(a) (D-a)* is about using information gathered from all learners to inform the unfolding lesson.

*Deciding(b) (D-b)* is about adjusting raveling, prompting, and/or interpreting in ways that clarify, combine, and/or extend key variables and/or that more clearly bridge students’ mathematical understanding and conventional (i.e., shared) mathematical understanding.

*Student Engagement (SE)* refers to whether students are engaged in mathematical activities that allow them to extend their understanding. This component of the framework, in contrast to the previous components, does not provide suggestions for teaching. However, we decided that it was important to keep a record of how students engage with the mathematical content of the lesson.

A more elaborated version of the RaPID FLA is included in Appendix A.

### Evolution of the RaPID Framework for Lesson Analysis

As it evolved, the RaPID FLA informed the development of the RaPID model and became a tool for teacher professional development. We now offer a review of classroom observation protocols in mathematics education. In situating the RaPID FLA relative to other models, we stress two important aspects of our model: (1) its purpose in supporting professional development rather than teacher evaluation; and (2) its theoretical orientation. We then elaborate on how the descriptors and components of the framework have evolved.

*Classroom observation tools*. Classroom observation instruments, also called observation protocols, have been used for research, teacher evaluation, and teacher professional development. These tools are often developed from the literature and reflect theoretical orientations. For example, the Reform Teaching Observation Protocol (RTOP), described by Sawada et al. (2002), is based on literature supporting the reform perspective in the United States at that time, and it is that literature which is used to validate the instrument. Similarly, Thompson and Davis (2014) developed a literature-based observation protocol for teacher professional development. They found that a relational-feedback intervention model in which teachers receive feedback from observations is a productive tool for improving teaching and learning at the elementary level.

Other instruments have been developed from empirical observations. For instance, the Learning Mathematics for Teaching Project (2011) developed a framework for measuring quality of mathematical instruction that combined an iterative process of video-tape analysis, researchers’ “own histories and lenses for looking at instruction” (p. 32), and key insights from the literature. The protocol was validated with the Mathematics Knowledge for Teaching survey for teachers, also developed by the Learning Mathematics for Teaching Project. The constructs in this framework include richness and development of mathematics (e.g., multiple representations); responding to students; connecting classroom practice with mathematics; language; equity; and presence of mathematical errors. The authors, however, indicated that they were not able to account for how the teacher scaffolds students in particular cases.

Another observation tool intended for teacher professional development and grounded in empirical data is the Teaching for Robust Understanding of Mathematics (TRU Math) framework (Schoenfeld, 2013), which was influenced by a focus on problem solving. Interestingly, when the research team tried to use the existing protocols at that time, they found that “important elements of quality of mathematics teaching were missing” (p. 610). As a result, the team created its own framework, which started with a large set of categories. Schoenfeld described the process of creating the observation tool, including early attempts and some challenges. The final tool includes a focus on teachers’ decisions-in-the-moment. The framework consists of the following dimensions:

- Mathematical Focus, Coherence and Accuracy: To what extent is the mathematics discussed clear, correct, and well justified (tied to conceptual underpinnings)?
- Cognitive Demand: To what extent do classroom interactions create and maintain an environment of intellectual challenge?
- Access: To what extent do classroom activity structures invite and support active engagement from the diverse range of students in the classroom?
- Agency, Authority and Accountability: To what extent do students have the opportunities to make mathematical conjectures, explanations and arguments, developing ‘voice’ (agency and authority) while adhering to mathematical norms (accountability)?
- Uses of assessment: To what extent is student reasoning elicited, challenged, and refined? (Schoenfeld, 2013, p. 616).

Notably, the development of the TRU Math observation protocols is underpinned by specific orientations to reform education and problem solving. The RaPID FLA tool is consistent with the theories of learning discussed in this chapter. While the RaPID model shares some elements with TRU Math, there are fundamental differences. For instance, we neither pay attention to authority and accountability nor rely mainly on problem solving (e.g., productive struggle), student presentations, or group work.

Although classroom observations provide a window into what is going on in class, these tools have inherent issues of reliability that must be considered when making inferences from this type of data. For instance, Walkington and Marder (2015) developed the UTeach Observation Protocol using value-added models and found surprising results: some classroom attributes correlated with high test scores at one grade level were related to low test scores at another. They concluded that classroom observation and value-added models offer complementary but separate information about what happens in classrooms. Schlesinger and Jentsch (2016) analyzed several instruments for mathematical quality of instruction, identifying theoretical and methodological challenges in measuring instructional quality in mathematics classrooms. Campbell and Ronfeldt (2018) more recently found that ratings on classroom observation protocols depend, in part, on factors beyond a teacher’s performance or control, such as the teacher’s gender and racial group, the student population, and lower levels of student performance at the beginning of the year. White (2018) raised the issue of raters, concluding that there is a need for monitoring and re-training. For these reasons, we insist that the framework for lesson analysis should not be used as an evaluation tool.

*The development of the RaPID FLA*. The development of the RaPID FLA can be summarized in five iterations. Initially, there was no protocol for classroom observation in the Math Minds project. This was an exploratory period, and both teachers and researchers were familiarizing themselves with JUMP Math resources. Researchers visited classrooms every week, and the teachers video-recorded their own classes at their own discretion.

In 2014, the research team started to formalize Iteration 1 of the classroom observation framework based on insights gleaned through observing classrooms, conversations with teachers following classroom visits, and the JUMP Math strategies described in the Teacher Guides. At that time, the emerging framework used a scale from 0 to 4 for various categories that were significantly different than they are now (instruction, step up/back, assist, bonus, practice). The resource emphasized the importance of frequent assessment, so each entry was time-stamped to record the duration of what we considered distinct units; such entries lasted from a few seconds to a few minutes.

JUMP Math also emphasizes the power of ‘stepping back’ to fill gaps rather than trying to remediate work that was too hard; there is a strong emphasis on building confidence through independent mastery. Complementing this, JUMP Math emphasizes the use of ‘bonus questions,’ or mathematical challenges for students who completed assigned work. This early version of the framework already included elements of the model explained in this chapter, such as ribboning, interpreting (frequent assessment), and deciding (stepping back and bonusing). On a pragmatic level, we found that overlapping categories made this framework difficult to work with. On a conceptual level, we began to articulate strategies that would support teachers in designing step-backs and bonus questions. This led us deeper into the world of variation theory, or more accurately, variation pedagogy. We also began to note limitations in the language of ‘steps,’ which partially motivated the sharp distinction we now draw between critical discernments and steps.

The following year, we started to analyze the longitudinal data of student performance and identified two classrooms with contrasting results that surprised us: both teachers had received the same level of support, and it seemed to us that both had used the resource with fidelity. But one class showed a prominent, statistically significant gain in performance in mathematics while the other showed no significant change. In an attempt to distinguish factors that may have contributed to this surprising result, we analyzed videos from each classroom (Preciado-Babb, Metz, Sabbaghan, & Davis, 2016). This analysis contributed to the development of Iteration 2 of the classroom observation framework, with particular attention to effective patterns of variation, evidence of mastery learning, and the nature of teachers’ responses. We also started to record the level of students’ engagement in the mathematical tasks. The components of this iteration were:

- Engagement: Were students engaged in work that challenged them at an appropriate level?
- Learning Visible: Were responses visible to the teacher?
- Responsive Teaching: Was teaching responsive?
- Variation (Resource + Adaptations): Was variation sufficient for students to discern and combine the necessary features of the object of learning?

The framework included descriptors for a scale from 1 to 4 for each component used to score lessons; subsequent iterations followed the same scale. We tested for reliability using a two-way Intraclass Correlation Coefficient (ICC) (McGraw & Wong, 1996). A mixed model was used when all the raters were involved, and a random model was used when a subset of the raters was involved. We tested for both consistency and absolute agreement, and we reported both single and average measures. Following Koo and Li (2016), we interpreted intraclass correlation coefficient values as poor, moderate, good, and excellent for values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90, respectively.
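The two-way ICC computation and the Koo and Li interpretation bands can be sketched in code. The following is an illustrative reconstruction in plain Python, not the project’s actual analysis script; the function names (`icc_two_way`, `koo_li_label`) are our own, and the sketch computes only the single-measure forms ICC(C,1) and ICC(A,1) from McGraw and Wong (1996).

```python
# Illustrative sketch (assumed names, not the project's code): single-measure
# two-way ICCs (McGraw & Wong, 1996) and Koo & Li (2016) interpretation bands.

def icc_two_way(ratings):
    """Single-measure ICCs for an n-lessons x k-raters table of scores.

    Returns (ICC(C,1) consistency, ICC(A,1) absolute agreement).
    """
    n = len(ratings)     # number of rated lessons (targets)
    k = len(ratings[0])  # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between lessons
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols                    # residual

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    icc_c1 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
    icc_a1 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    return icc_c1, icc_a1

def koo_li_label(icc):
    """Interpretation bands following Koo and Li (2016)."""
    if icc < 0.5:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.9:
        return "good"
    return "excellent"

# Two raters scoring three lessons; rater 2 is consistently one point higher,
# so consistency is perfect while absolute agreement is lower.
scores = [[1, 2], [2, 3], [3, 4]]
c1, a1 = icc_two_way(scores)
print(koo_li_label(c1), koo_li_label(a1))  # → excellent moderate
```

The toy example illustrates why the distinction between consistency and absolute agreement matters for observation data: a rater who systematically scores one point higher than a colleague is perfectly consistent with them yet agrees with them poorly in absolute terms.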

Three members of the research team independently observed and rated 20 video-recorded mathematics lessons using this second iteration of the framework. Large discrepancies were discussed, the descriptors were refined, and the lessons were scored again. The intraclass correlation coefficient analysis resulted in moderate to good average scores for both consistency and absolute measures for each component of the framework. Single measures, however, were mostly poor, and we decided to continue refining the framework.

For Iteration 3 of the framework, we further refined the categories in light of new observations and literature that helped to explain some of the differences in our perceptions. For instance, we consistently noticed that when too much information was provided at once, students were not able to complete the corresponding tasks. This was not just an issue with frequent assessment but one of learners needing more frequent opportunities to engage with each new idea before attempting to integrate it. This insight helped us refine our notion of prompting and differentiate it from what we were then still calling ‘assessment.’ Prompting involved offering an idea and requiring each learner to engage with that idea. Assessment also mattered, but it played a different role in teaching and learning.

We also noticed that students were excited about some of the extensions or bonus questions provided in class. In some classes, students started creating their own extensions either for themselves or for their peers. While this was successful at times, their self-generated questions were sometimes too difficult and engagement waned. Teachers, too, identified generating effective bonus questions as difficult (Preciado-Babb et al., 2016). Here again, helping teachers attend to the patterns of variation in resource materials helped them become more aware of how particular variables might be further extended and/or combined.

Finally, by this time we had also started to notice that there were instances when lessons were effectively parsed but not effectively ‘re-connected.’ We began to talk about ‘connecting’ as a separate category. Eventually, connecting also proved inadequate; the language of parsing and connecting still hearkened to a notion of learning as taking apart and putting back together, which was limiting. Connecting eventually became ‘raveling,’ which considers connections in terms of the gradual integration of critical discernments into broader webs of coherence, a notion that is vital to the current version of the RaPID FLA. At this time, we also started to more systematically use the framework as a tool for providing teachers with feedback from classroom observation.

Iteration 4 of the framework was quite similar to the RaPID FLA, as described in this chapter (see Appendix A). However, at that time we used the terms *monitoring* and *adapting* instead of *interpreting* and *deciding*. While monitoring was intended to highlight the importance of consistent awareness of learners’ sense-making, it carries strong connotations of a correspondence theory of learning, which was increasingly at odds with the work we were doing with teachers: we weren’t interested merely in whether answers were right or wrong, but in how learners were making sense of each critical discernment and in how selected prompts influenced what learners attended to. Further feedback from JUMP Math reinforced our decision to change the term. At this time, we also removed *Raveling-a* from the FLA (but not the model), as it requires information about mathematical content that is not readily observable in a single lesson.

In 2018, Iteration 4 of the framework was used in a different project involving 17 teachers from a rural, Aboriginal community using the same resource (JUMP Math) and participating in both JUMP Math and Math Minds professional development sessions. The purpose of using the RaPID FLA was to provide teachers with feedback on how they incorporated the teaching approach into their classrooms. A team of observers received one training session on using the framework to score each of its components. Two observers were assigned to each lesson. The scores, along with a record of each lesson including photographs, comments, and a timeline, were provided as feedback to teachers. Teachers in this new project consistently reported that the feedback was useful in helping them incorporate the Math Minds model into their teaching.

The Intraclass Correlation Coefficient average scores ranged from poor to good, while the single measures were mostly poor. These results suggest the need to keep at least two observers for each lesson and to provide observers with additional training.

Iteration 5 corresponds to the RaPID FLA as described in this chapter (see Appendix A). We continued testing the framework for reliability and selected eight videos for maximal variability to be rated by a team of two researchers and eight observers. Initially, four videos were rated, and scores were discussed by the team. We noticed that strong differences in the ratings were based on raters’ personal perceptions of good teaching rather than what we intended in the framework. For instance, one observer rated a lesson as high in raveling because it included multiple representations. However, that lesson focused more on presenting a set of steps than it did on conceptual understanding. Additionally, while the lesson included different representations for subtracting on a number line, it did not weave these representations into a coherent explanation of the intended strategy.

In another case, observers gave high scores for prompting when the steps of an algorithm were clearly explained and elements were varied from one task to the next, even though there was little emphasis on the meaning underlying those steps. When it came to interpreting, observers tended to give high scores to lessons in which the teachers asked students to show their work. However, a closer look indicated that the teacher did not consistently check students’ responses, and sometimes the questions themselves did not elicit sufficient information about whether students were ready to move to the next part of the lesson.

Following this discussion, the team re-scored these videos, then observed and rated the other four videos. Again, there were some differences. After discussing episodes from selected videos, the team rated each lesson again. The Intraclass Correlation Coefficient average measures in this case were excellent; however, it is not feasible to use ten observers for classroom observation. The single measures ranged from poor to moderate, suggesting the need for more than one observer for each lesson and for additional training.

### Implications for Teacher Professional Development

The RaPID model evolved from the identification of essential elements of effective mathematics lessons. The RaPID FLA tool is an attempt to support teachers as they implement this model in their practice. We have used this framework to provide feedback to teachers so they can reflect on aspects to improve in their practice in the context of the RaPID model. The elements of the framework have also been used by teachers in an online course as a tool to plan and analyse mathematics lessons. In both cases, teachers have reported positive results, and we have noted a shift in the way they focus their lessons and in the language they use to talk about them. We have also observed increasing enthusiasm for using the model, and teachers have consistently reported integrating some elements of the model in other subjects. The framework can also be used in specific modalities of teacher professional development involving lesson planning and analysis, such as lesson study (Preciado-Babb, Metz, & Davis, 2019).

The RaPID FLA has become a valuable tool in our work with teachers in a variety of contexts. It continues to inform our interactions with the Math Minds teachers whose classrooms we observe, both in terms of what we (as observers) attend to and in terms of how we guide the debriefing sessions afterward. Because all of us have developed a shared understanding of the terms and how they are distinct from what we have called contemporary obsessions, the potential misconceptions identified earlier have been reduced.

Further, this year we started piloting a 15-week online course in which we engage teachers with the elements of the framework. In this course, teachers are introduced to each of the RaPID components, analyse existing lessons using the framework, and are invited to design and reflect on their own lessons using RaPID as a guide. The course emphasises key distinctions between the model and contemporary obsessions. We intend to develop a version of this course for math coaches/consultants and administrators that would allow them to more clearly recognize key features of the framework, and to distinguish them from contemporary obsessions, so as to better support the teachers with whom they work.

Finally, each of us has found important ways that the RaPID framework has helpfully influenced our own teaching. Preliminary observations suggest that *learning* mathematics in ways that embody the RaPID FLA may be a powerful way for teachers to both appreciate and understand its elements, particularly if learners have the opportunity to reflect on those experiences through the lens of the RaPID framework. Such reflection may be further enhanced when experiences are carefully contrasted with other approaches to teaching and learning.

## Conclusion

The development of the RaPID model and the corresponding framework for lesson analysis has been grounded in the empirical evidence gathered and interpreted through the Math Minds initiative over more than six years. Although we have refined the model to highlight elements that have had an impact on students’ performance in mathematics, it has been difficult to share the model with other teachers and with classroom observers: biases rooted in contemporary obsessions loosely associated with either traditional or reform perspectives on the teaching and learning of mathematics tend to shape the manner in which the Framework for Lesson Analysis has been interpreted.

We concluded, through the review of hundreds of theories of learning, that most of the theories underpinning these perspectives are either theories for influencing teaching rooted in popular metaphors that lack formal evidence, or theories of learning rooted in defensible principles that nonetheless ignore other research insights. Such obsessions, which often confuse means with goals, are reflected not only in teachers’ rationales for their actions but also in school district policy; this has posed a significant challenge for our efforts to broadly share the findings of this study. Our current efforts are centered on more clearly articulating the ways in which the RaPID model differs from both traditional and reform approaches to mathematics teaching and learning, while offering meaningful insights that address the concerns of those who hold either of those views.

We continue to refine the RaPID model, including the framework for lesson analysis. In this process, we face challenges similar to those of other scholars designing such instruments. We conclude with a reminder that while the RaPID model should not be used for evaluating teaching, it is a valuable tool for teacher professional development, one that supports efforts to replicate and even surpass the results we have observed through the Math Minds initiative.

## Acknowledgement

We gratefully acknowledge Suncor Energy Foundation’s generous sponsorship of the Math Minds Initiative.

## References

Bloom, B. S. (1968). Learning for mastery (UCLA-CSEIP). *The Evaluation Comment*, 1(2). In *All our children learning*. London: McGraw-Hill.

Campbell, S. L., & Ronfeldt, M. (2018). Observational evaluation of teachers: Measuring more than we bargained for? *American Educational Research Journal*, 55(6), 1233–1267.

Davis, B., & the Spatial Reasoning Research Group. (2015). *Spatial reasoning in the early years: Principles, assertions and speculations*. New York, NY: Routledge.

Davis, B., Towers, J., Chapman, O., Drefs, M., & Friesen, S. (2019). Exploring the relationship between mathematics teachers’ implicit associations and their enacted practices. *Journal of Mathematics Teacher Education*. https://doi.org/10.1007/s10857-019-09430-7

Ericsson, K. A., Charness, N., Feltovich, P., & Hoffman, R. R. (2006). *Cambridge handbook on expertise and expert performance*. Cambridge: Cambridge University Press.

Fauconnier, G., & Turner, M. (2003). *The way we think: Conceptual blending and the mind’s hidden complexities*. New York, NY: Basic Books.

Gibson, J. J. (1979). *The ecological approach to visual perception*. New York, NY: Houghton Mifflin.

Gu, L., Huang, R., & Marton, F. (2004). Teaching with variation: A Chinese way of promoting effective mathematics learning. In F. Lianghuo, W. Ngai-Ying, C. Jinfa, & L. Shiqi (Eds.), *How Chinese learn mathematics: Perspectives from insiders* (Vol. 1, pp. 309–347). Singapore: World Scientific Publishing Co.

Hale, J. B., Chen, S. A., Tan, S. C., Poon, K., Fitzer, K., & Boyd, L. A. (2016). Reconciling individual differences with collective needs: The juxtaposition of sociopolitical and neuroscience perspectives on remediation and compensation of student skill deficits. *Trends in Neuroscience and Education*, 5, 41–51.

Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. *Journal of Chiropractic Medicine*, 15(2), 155–163.

Lai, M. Y., & Murray, S. (2014). Teaching with procedural variation: A Chinese way of promoting understanding of mathematics. *International Journal for Mathematics Teaching and Learning*. Retrieved from http://www.cimt.org.uk/journal/lai.pdf

Lakoff, G., & Johnson, M. (1999). *Philosophy in the flesh: The embodied mind and its challenge to Western thought*. New York, NY: Basic Books.

Learning Mathematics for Teaching Project. (2011). Measuring the mathematical quality of instruction. *Journal of Mathematics Teacher Education*, 14(1), 25–47.

Marton, F. (2014). *Necessary conditions of learning*. New York, NY: Routledge.

Mason, J. (2017). Issues in variation theory and how it could inform pedagogic choices. In R. Huang & Y. Li (Eds.), *Teaching and learning mathematics through variation: Confucian heritage meets Western theories* (pp. 407–438). Rotterdam, The Netherlands: Sense Publishers.

Mason, J., & Pimm, D. (1984). Generic examples: Seeing the general in the particular. *Educational Studies in Mathematics*, 15, 277–289.

McGraw, K., & Wong, S. (1996). Forming inferences about some intraclass correlation coefficients. *Psychological Methods*, 1(1), 30–46.

Metz, M., Preciado-Babb, P., Sabbaghan, S., Davis, B., Pinchbeck, G., & Aljarrah, A. (2016). Transcending traditional/reform dichotomies in mathematics education. In M. B. Wood, E. E. Turner, M. Civil, & J. A. Eli (Eds.), *Proceedings of the 38th Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education* (pp. 1252–1258). Tucson, AZ: University of Arizona.

Nelson. (2019). *Canadian Test of Basic Skills (CTBS)*. Retrieved from http://www.nelson.com/assessment/classroom-CTBS.html

Novak, J. (2002). Meaningful learning: The essential factor for conceptual change in limited or inappropriate propositional hierarchies leading to empowerment of learners. *Science Education*, 86(4), 548–571.

Pang, M. F., Marton, F., Bao, J., & Ki, W. W. (2016). Teaching to add three-digit numbers in Hong Kong and Shanghai: Illustration of differences in the systematic use of variation and invariance. *ZDM Mathematics Education*, 48, 455–470.

Preciado-Babb, A. P., Aljarrah, A., Sabbaghan, S., Metz, M., Pinchbeck, G. G., & Davis, B. (2016). Teachers’ perceived difficulties for creating mathematical extension at the boundary of students’ discernments (short oral presentation). In M. B. Wood, E. E. Turner, M. Civil, & J. A. Eli (Eds.), *Proceedings of the 38th Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education* (pp. 514–517). Tucson, AZ: University of Arizona.

Preciado-Babb, A. P., Metz, M., & Davis, B. (2019). How variance and invariance can inform teachers’ enactment of mathematics lessons. In R. Huang, A. Takahashi, & J. P. da Ponte (Eds.), *Theory and practice of lesson study in mathematics: An international perspective* (pp. 343–367). Springer International Publishing.

Preciado-Babb, A. P., Metz, M., Sabbaghan, S., & Davis, B. (2016). Fine-grained, continuous assessment for the diverse classroom: A key factor to increase performance in mathematics. In *Proceedings of the American Education Research Association Annual Meeting 2016* (pp. 1–20). AERA.

Runesson, U. (2005). Beyond discourse and interaction. Variation: A critical aspect for teaching and learning mathematics. *Cambridge Journal of Education*, 35(1), 69–87.

Sawada, D., Piburn, M. D., Judson, E., Turley, J., Falconer, K., Benford, R., & Bloom, I. (2002). Measuring reform practices in science and mathematics classrooms: The reformed teaching observation protocol. *School Science and Mathematics*, 102(6), 245–253.

Schlesinger, L., & Jentsch, A. (2016). Theoretical and methodological challenges in measuring instructional quality in mathematics education using classroom observations. *ZDM Mathematics Education*, 48(1–2), 29–40.

Schoenfeld, A. H. (2013). Classroom observations in theory and practice. *ZDM Mathematics Education*, 45(4), 607–621.

Sweller, J. (2016). Story of a research program. In S. Tobias, J. D. Fletcher, & D. C. Berliner (Series Eds.), Acquired Wisdom Series. *Education Review*, 23, 1–19.

Thompson, C. J., & Davis, S. B. (2014). Classroom observation data and instruction in primary mathematics education: Improving design and rigour. *Mathematics Education Research Journal*, 26(2), 301–232.

Varela, F., Thompson, E., & Rosch, E. (1991). *The embodied mind: Cognitive science and human experience*. Cambridge, MA: MIT Press.

Vygotsky, L. S. (1986). *Thought and language*. Cambridge, MA: MIT Press.

Walkington, C., & Marder, M. (2015). Classroom observation and value-added models give complementary information about quality of mathematics teaching. In T. J. Kane, K. A. Kerr, & R. C. Pianta (Eds.), *Designing teacher evaluation systems* (pp. 234–277). San Francisco, CA: Jossey-Bass.

Watson, A. (2017). Pedagogy of variations: Synthesis of various notions of variation pedagogy. In R. Huang & Y. Li (Eds.), *Teaching and learning mathematics through variation: Confucian heritage meets Western theories* (pp. 85–103). Rotterdam, The Netherlands: Sense Publishers.

West, B. T. (2009). Analyzing longitudinal data with the linear mixed models procedure in SPSS. *Evaluation & The Health Professions*, 32(3), 207–228.

White, M. C. (2018). Rater performance standards for classroom observation instruments. *Educational Researcher*, 47(8), 492–501.

Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. *Teaching of Psychology*, 42(3), 266–271.

*Paulino Preciado-Babb*

*Werklund School of Education*

*University of Calgary*

*Martina Metz*

*Werklund School of Education*

*University of Calgary*

*Brent Davis*

*Werklund School of Education*

*University of Calgary*

*Soroush Sabbaghan*

*Werklund School of Education*

*University of Calgary*

## Appendix A: RaPID Model

**Overall Lesson**

| Student Engagement | | | |
| --- | --- | --- | --- |
| All engage together in continuously extending understanding; extensions integrated throughout lesson. | Most engage together in extending understanding; some students separated for extra help/extensions. | Most engage together in extending understanding; some waiting for help or extension. | Many are disengaged. |

| Ra / PID Matrix (Q1, Q2, Q3, Q4 refer to quadrants in the matrix) | | | |
| --- | --- | --- | --- |
| Q1: Clearly raveled content, strong PID cycles. | Q2: Clearly raveled content, weak PID cycles (potentially coherent mathematical ideas offered as a block lesson). | Q4: Weakly raveled content; strong PID cycles may be focused on procedural steps or general problem-solving competencies. | Q3: Weakly raveled content, weak PID cycles (block lesson with poorly raveled content or open problem for which students have inadequate background). |

**Raveling: Decomposing and recomposing concepts to identify critical discernments (distinctions, associations, and relations)**

| R-a happens prior to teaching and involves long-range planning (by the teacher and embedded in the resource). | | | |
| --- | --- | --- | --- |
| The teacher/resource decomposes relevant concepts into fine-grained discernments and re-integrates these into meaningful wholes. | The teacher/resource decomposes relevant concepts into fine-grained discernments and sometimes re-integrates them. | The teacher/resource identifies relevant concepts, but these are not adequately decomposed or re-integrated. | The teacher/resource identifies procedural steps rather than mathematical discernments. |

| R-b describes the nature and scale of key ideas relevant to the lesson. | | | |
| --- | --- | --- | --- |
| Mathematical concepts are sufficiently unpacked for all learners, and the lesson remains focused on the central idea. | Mathematical concepts are sufficiently unpacked for most learners; brief detours from the central idea are quickly refocused. | Mathematical concepts are sufficiently unpacked for some learners; the lesson may be loosely defined around a broad topic and/or require learners to connect ideas that have not been well developed. | Mathematical concepts are not clearly focused; the lesson may meander from idea to idea or be centered around general competencies or procedural steps. |

**Prompting learners to engage with key distinctions, associations, and relationships**

| P-a refers to how the teacher/resource highlights important distinctions and associations. | | | |
| --- | --- | --- | --- |
| The teacher/resource effectively draws attention to each critical distinction, association, and relationship (clear and appropriate contrasts against a constant background; careful bridging of known and new). | The teacher/resource effectively draws attention to most critical distinctions, associations, and relationships. | The teacher/resource points to/explains critical distinctions, associations, and relationships but does not effectively draw attention to them (too many or wrong things change, extraneous information distracts, too much space between relevant contrasts). | The teacher/resource points to/explains some critical distinctions and associations. Multiple new ideas may be introduced simultaneously. |

| P-b refers to how students are invited to engage with those distinctions and associations. | | | |
| --- | --- | --- | --- |
| Clear prompts require all learners to make all critical distinctions and associations. | Clear prompts require all learners to engage in tasks/questions related to most critical distinctions and associations. | Prompts invite learners to engage in tasks/questions at various points during the lesson, but not with each critical distinction and association. | Many ideas are presented before students are invited to engage with them; those who attempt to engage on their own may fall behind. |

**Interpreting the ways that learners interpret new noticings**

| I-a is about the nature of information gathered from students during the lesson. | | | |
| --- | --- | --- | --- |
| Most responses quickly and clearly indicate which students are able to make critical distinctions and associations. | Most responses indicate whether students can answer questions related to most critical distinctions and associations. | Most responses offer limited information re: students’ understanding of critical distinctions and associations. | Most responses do not demonstrate understanding (e.g., students indicate whether they understand). |

| I-b is about how the teacher attends to student responses. | | | |
| --- | --- | --- | --- |
| All responses checked at key points during the lesson. | More than half of responses are checked at key points; those most in need of help or extension are consistently checked. | More than half of responses checked at most key points; those requiring assistance sometimes overlooked. | Few or no students checked during the lesson. |

**Deciding what to do in the next moment in order to best bridge and/or extend learner interpretations of critical discernments**

| D-a is about whether the lesson moves in response to all learners. | | | |
| --- | --- | --- | --- |
| Throughout the lesson, the teacher attends to all learners. | For most or all of the lesson, the teacher attends to most learners. | Many learners are waiting for help or extension. | The teacher makes few or no adjustments to the lesson. |

| D-b focuses on the nature of the teacher’s response. | | | |
| --- | --- | --- | --- |
| The teacher adjusts raveling, prompting, and/or interpreting to support all learners (by identifying, clarifying, extending, and/or combining critical features). | The teacher clarifies prompts to support struggling learners; extensions allow quick finishers to extend their understanding. | The teacher re-explains ideas when some students struggle; extensions (if present) have little impact (too easy or too hard). | The teacher may indicate when students need to try again, but offers little or no assistance. |