How can you tell if a student really understands something?
They learn early on to play the game—tell the teacher and/or the test what they ‘want to know,’ and even the best assessment leaves something on the table. (In truth, a big portion of the time students simply don’t know what they don’t know.)
The idea of understanding is, of course, at the heart of all learning, and the puzzle of how to gauge it runs through the three pillars of formal learning environments and education:
1. What do they need to understand (standards)?
2. What (and how) do they currently understand (assessment)?
3. How can they best come to understand what they currently do not (planning learning experiences and instruction)?
But how do we know if they know it? And what is ‘it’?
Understanding As ‘It’
On the surface, there is trouble with the word ‘it.’ Sounds vague. Troublesome. Uncertain. But everyone somehow knows what it is.
‘It’ is essentially what is to be learned, and it can be a scary thing to both teachers and students. ‘It’ is everything, described with intimidating terms like objective, target, proficiency, test, exam, grade, fail, and succeed.
And in terms of content, ‘it’ could be almost anything: a fact, a discovery, a habit, skill, or general concept, from mathematical theory to a scientific process, the importance of a historical figure to an author’s purpose in a text.
So if a student 'gets it,' what might they be able to do beyond pure academic performance? There are many existing taxonomies and sets of characteristics, from Bloom's to Understanding by Design's 6 Facets of Understanding.
The following actions are set up as a linear taxonomy, from most basic to the most complex. The best part about it is its simplicity: Most of these actions can be performed simply in the classroom in minutes, and don’t require complex planning or an extended exam period.
By using a quick diagram, concept map, t-chart, conversation, picture, short journal response, quick face-to-face collaboration, exit slip, or digital/social media, understanding can be evaluated in minutes, helping to replace testing and consternation with a climate of assessment. It can even be displayed on a class website or hung in the classroom to help guide self-directed learning, with students checking themselves for understanding.
How This Understanding Taxonomy Works
I’ll write more about this soon and put it into a more graphic form; both are critical to using it. (Update: I’m also creating a course for teachers to help them use it.) For now, I’ll say that it can be used to guide planning, assessment, curriculum design, and self-directed learning, or to develop critical thinking questions for any content area.
The ‘Heick’ learning taxonomy is meant to be simple, arranged as (mostly) isolated tasks that range in complexity from less to more. That said, students needn’t demonstrate the ‘highest’ levels of understanding–that misses the point. Any ability to complete these tasks is a demonstration of understanding. The greater number of tasks the student can complete the better, but all ‘boxes checked’ are evidence that the student ‘gets it.’
36 Thinking Strategies To Help Students Wrestle With Complexity
The Heick Learning Taxonomy
Domain 1: The Parts
Explain or describe it simply
Label its major and minor parts
Evaluate its most and least important characteristics
Deconstruct or ‘unbuild’ it efficiently
Give examples and non-examples
Separate it into categories, or place it as an item within broader categories
Example Topic
The Revolutionary War
Sample Prompts
Explain the Revolutionary War in simple terms (e.g., an inevitable rebellion that created a new nation).
Identify the major and minor ‘parts’ of the Revolutionary War (e.g., economics and propaganda, soldiers and tariffs).
Evaluate the Revolutionary War and identify its least and most important characteristics (e.g., causes and effects vs. city names and minor skirmishes).
Domain 2: The Whole
Create a diagram that embeds it in a self-selected context
Explain how it is and is not useful both practically and intellectually
Play with it casually
Leverage it both in parts and in whole
Revise it expertly, and explain the impact of any revisions
Domain 3: The Interdependence
Explain how it relates to similar and non-similar ideas
Direct others in using it
Explain it differently–and precisely–to both a novice and an expert
Explain exactly how and where others might misunderstand it
Compare it to other similar and non-similar ideas
Identify analogous but distinct ideas, concepts, or situations
Domain 4: The Function
Apply it in unfamiliar situations
Create accurate analogies to convey its function or meaning
Analyze the sweet spot of its utility
Repurpose it with creativity
Know when to use it
Plausibly theorize its origins
Domain 5: The Abstraction
Insightfully or artfully demonstrate its nuance
Criticize it in terms of what it might ‘miss’ or where it’s ‘dishonest’ or incomplete
Debate its ‘truths’ as a supporter or devil’s advocate
Explain its elegance or crudeness
Analyze its objectivity and subjectivity, and how the two relate
Design a sequel, extension, follow-up, or evolution of it
Domain 6: The Self
Self-direct future learning about the topic
Ask specific, insightful questions about it
Recall or narrate their own learning sequence or chronology (metacognition) in coming to know it
Use it comfortably across diverse contexts and circumstances
Identify what they still don’t understand about it
Analyze changes in self-knowledge as a result of understanding
Advanced Understanding
Understanding by Design’s 6 Facets of Understanding, Bloom’s Taxonomy, and Marzano’s New Taxonomy were also referenced in the creation of this learning taxonomy for understanding.
Grading problems are one of the most urgent bugaboos of good teaching.
Grading can take an extraordinary amount of time. It can also demoralize students, get them in trouble at home, or keep them from getting into a certain college.
It can demoralize teachers, too. If half the class is failing, any teacher worth their salt will take a long, hard look at themselves and their craft.
So over the years as a teacher, I cobbled together a kind of system that was, most crucially, student-centered. It was student-centered in the sense that it was designed for them to promote understanding, grow confidence, take ownership, and protect themselves from themselves when they needed it.
Some of this approach was covered in Why Did That Student Fail? A Diagnostic Approach To Teaching. See below for the system–really, just a few rules I created that, while not perfect, went a long way towards eliminating the grading problems in my classroom.
This meant students weren’t paralyzed with fear when I asked them to complete increasingly complex tasks they worried were beyond their reach. It also meant that parents weren’t breathing down my neck ‘about that C-‘ they saw on Infinite Campus, and if both students and parents are happy, the teacher can be happy, too.
How I Eliminated (Almost) All Grading Problems In My Classroom
1. I chose what to grade carefully.
When I first started teaching, I thought in terms of ‘assignments’ and ‘tests.’ Quizzes were also a thing.
But eventually I started thinking instead in terms of ‘practice’ and ‘measurement.’ All assessment should be formative, and the idea of ‘summative assessment’ makes as much sense as ‘one last teeth cleaning.’
The big idea is what I often call a ‘climate of assessment,’ where snapshots of student understanding and progress are taken in organic, seamless, and non-threatening ways. Assessment is ubiquitous and always-on.
A ‘measurement’ is only one kind of assessment, and even the word implies ‘checking in on your growth’ in the same way you measure a child’s vertical growth (height) by marking the threshold in the kitchen. This type of assessment provides both the student and teacher a marker–data, if you insist–of where the student ‘is’ at that moment with the clear understanding that another such measurement will be taken soon, and dozens and dozens of opportunities to practice in-between.
Be very careful with what you grade, because it takes time and mental energy–both finite resources crucial to the success of any teacher. If you don’t have a plan for the data before you give the assessment, don’t give it, and certainly don’t call it a quiz or a test.
2. I designed work to be ‘published’
I tried to make student products–writing, graphic organizers, podcasts, videos, projects, and more–at the very least visible to the parents of students. Ideally, this work would also be published to peers for feedback and collaboration, and then to the public at large to provide some authentic function in a community the student cares about.
By making student work public (insofar as it promoted student learning while protecting any privacy concerns), the assessment is done in large part by the people the work is intended for. It’s authentic, which makes the feedback loop quicker and more diverse than one teacher could ever hope to make it.
What this system loses in the expert feedback the teacher might be able to give (though nothing says work can’t both be made public and benefit from teacher feedback), it makes up for in giving students substantive reasons to do their best work, correct themselves, and set higher standards for quality than your rubric outlined.
3. I made a rule: No Fs and no zeroes. A, B, C, or ‘Incomplete’
First, I created a kind of no-zero policy. Easier said than done depending on who you are and what you teach and what the school ‘policy’ is and so on. The idea here, though, is to keep zeroes from mathematically ruining a student’s ‘final grade.’
I tried to explain to students that a grade should reflect understanding, not their ability to successfully navigate the rules and bits of gamification stuffed into most courses and classrooms. If a student receives a D, it should be because they have demonstrated an almost universal inability to master the content, not because they got As and Bs on the work they cared about, Cs or lower on the work they didn’t, and, with a handful of zeroes thrown in for work they didn’t complete, ended up with a D or an F.
Another factor at work here is marking work with an A, B, C, or ‘Incomplete.’ Put another way, if the student didn’t at least achieve the average mark of C, which should reflect average understanding of a given standard or topic, I would mark it ‘Incomplete,’ give them clear feedback on how it could be improved, and then require them to do so.
4. I went over missing assignments frequently.
Simple enough. I kept a Twitter feed of all ‘measurements’ (work they knew counted towards their grade), so they didn’t have to ask what they were missing (though they did anyway). I also wrote it on the board (I had a huge whiteboard that stretched across the front of the classroom).
5. I created alternative assessments.
Early on in teaching, I noticed students saying, in different ways, that they ‘got it but don’t all the way get it.’ Or that they believed that they did, in fact, ‘get it,’ but not in the way the assessment required (reminder: English Lit/ELA is a highly conceptual content area aside from the skills of literacy itself).
So I’d create an alternative assessment to check and see. Was the assessment getting in the way, obscuring more than it revealed? Why beat my head against the wall explaining the logistics of an assignment or the intricacies of a question when the assignment and the question weren’t the point at all? These were just ‘things’ I used the way a carpenter uses tools.
Sometimes it’s easier to just grab a different tool.
I’d also ask students to create their own assessments at times. Show me you understand. It didn’t always work the way you’d expect, but I got some of the most insightful and creative expression I’ve ever seen from students using this approach. As with most things, it just depended on the student.
6. I taught through micro-assignments.
Exit slips were one of the greatest things that ever happened to my teaching. I rarely used them as ‘exit tickets’ required to leave the classroom, but I did use them almost daily. Why?
They gave me a constant stream of data for said ‘climate of assessment,’ and it was daily and fresh and disarming to students because they knew it was quick and if they failed, another one would be coming soon.
It was a ‘student-centered’ practice because it protected them. They had so many opportunities and, mathematically, so many scores that unless they failed everything every day, they wouldn’t ‘fail’ at all. And if they were failing, I could approach a single standard or topic from a variety of angles, complexities, Bloom’s levels, and so on, which often showed that the student who ‘didn’t get it’ last week more likely just ‘didn’t get’ my question.
In other words, they hadn’t failed my assessment; my assessment had failed them because it had failed to uncover what they, in fact, knew.
7. I used diagnostic teaching
You can read more about diagnostic teaching, but the general idea is that I had a clear sequence I used and communicated very clearly to students and their families. It usually took the first month or two for everyone to become comfortable with it all, but once they were, grading problems were *almost* completely eliminated. Problems still surfaced, but with a system in place it was much easier to identify exactly what went wrong and why, and to communicate it all to the stakeholders involved in supporting children.
At the end of the day, teaching is about learning and learning is about understanding.
And as technology evolves to empower more diverse and flexible assessment forms, constantly improving our sense of what understanding looks like–during mobile learning, during project-based learning, and in a flipped classroom–can not only improve learning outcomes but just might be the secret to providing personalized learning for every learner.
This raises the question: why does one need alternatives to the established and entrenched Bloom’s? Because Bloom’s isn’t meant to be the alpha and the omega of framing instruction, learning, and assessment. Benjamin Bloom’s taxonomy does a brilliant job of offering ‘verbs’ in categories that impose a helpful cognitive framework for planning learning experiences, but it neglects important ideas, such as the self-knowledge that UbD places at the pinnacle of understanding, or the movement from incompetence to competence that the SOLO taxonomy offers.
So with apologies to Bloom (whose work we love), we have gathered six alternatives to his legendary, world-beating taxonomy, from the TeachThought Learning Taxonomy, to work from Marzano to Fink, to Understanding by Design.
6 Alternatives To Bloom’s Taxonomy For Teachers
1. The TeachThought Learning Taxonomy
The TeachThought Learning Taxonomy orders isolated tasks, ranging from less to more complex, into six domains:
The Parts (i.e., explain or describe a concept in simple terms)
The Whole (i.e., explain a concept in micro-detail and macro-context)
The Interdependence (i.e., explain how a concept relates to similar and non-similar concepts)
The Function (i.e., apply a concept in unfamiliar situations)
The Abstraction (i.e., demonstrate a concept’s nuance with artfulness or insight)
The Self (i.e., self-direct future learning about the concept)
2. UbD’s Six Facets Of Understanding
Created by Grant Wiggins and Jay McTighe to work with and through their Understanding by Design model, the 6 Facets of Understanding is a non-hierarchical framework designed to help teachers evaluate and assess student understanding.
3. Marzano & Kendall’s New Taxonomy
image attribution: Matt Drewette-Card
Marzano and Kendall’s taxonomy arranges a score of processes into six categories, from lowest to highest level of difficulty. Accompanying each category are verbs and phrases that may prove useful for teachers in designing assessments and evaluating mastery:
Self System Thinking (i.e., examining emotions, examining efficacy, examining importance)
4. The Taxonomy Of Significant Learning
Dr. Dee Fink’s Taxonomy of Significant Learning describes attributes of ‘significant’ learning as opposed to ‘less significant’ learning (the former having greater endurance, resonance, and potential to improve student learning, and the latter being more classroom-centered and less relevant or applicable outside of the classroom). The center of the taxonomy is the ‘sweet spot’ of learning design.
5. Webb’s Depth Of Knowledge Framework
image attribution: Edmentum
Webb’s Depth of Knowledge framework is designed to promote rigor, and organizes specific strategies and higher order thinking skills into four domains, moving from lower to higher complexity:
Recall
Skill/Concept
Strategic Thinking
Extended Thinking
6. The SOLO Taxonomy
image attribution: Structural Learning
SOLO stands for the “structure of observed learning outcomes.” Created by John Biggs and Kevin Collis, the SOLO taxonomy is made up of five levels of understanding, as illustrated above. According to Biggs, “At first, we pick up only one or few aspects of the task (unistructural), then several aspects that are unrelated (multi-structural), then we learn how to integrate them into a whole (relational), and finally, we are able to generalize that whole to as yet untaught applications (extended abstract).”
Admit it–you only read the list of the six levels of Bloom’s Taxonomy, not the whole book that explains each level and the rationale behind the Taxonomy. Not to worry, you are not alone: this is true for most educators.
But that efficiency comes with a price. Many educators have a mistaken view of the Taxonomy and its levels, as the following errors suggest. And arguably the greatest weakness of the Common Core Standards is their failure to be extra-careful in their use of cognitive-focused verbs, along the lines of the rationale for the Taxonomy.
1. The first two or three levels of the Taxonomy involve ‘lower-order’ and the last three or four levels involve ‘higher-order’ thinking.
This is false. The only lower-order goal is ‘Knowledge’ since it uniquely requires mere recall in testing. Furthermore, it makes no sense to think that ‘Comprehension’ – the 2nd level – requires only lower-order thought:
The essential behavior in interpretation is that when given a communication the student can identify and comprehend the major ideas which are included in it as well as understand their interrelationships. This requires a nice sense of judgment and caution in reading into the document one’s own ideas and interpretations. It also requires some ability to go beyond mere rephrasing of parts of the document to determine the larger and more general ideas in it. The interpreter must also recognize the limits within which interpretations can be drawn.
Not only is this higher-order thinking – summary, main idea, conditional and cautious reasoning, etc.–it is a level not reached by half of our students in reading. And by the way: the phrases ‘lower-order’ and ‘higher-order’ appear nowhere in the Taxonomy.
2. “Application” requires hands-on learning.
This is not true; it’s a misreading of the word “apply”, as the text makes clear. We apply ideas to situations, e.g., you may comprehend Newton’s 3 Laws or the Writing Process, but can you solve novel problems related to them–without prompting? That’s application:
The whole cognitive domain of the taxonomy is arranged in a hierarchy, that is, each classification within it demands the skills and abilities which are lower in the classification order. The Application category follows this rule in that to apply something requires “comprehension” of the method, theory, principle or abstraction applied. Teachers frequently say, “If a student really comprehends something then he can apply it.”
A problem in the comprehension category requires the student to know an abstraction well enough that he can correctly demonstrate its use when specifically asked to do so. “Application,” however, requires a step beyond this. Given a problem new to the student, he will apply the appropriate abstraction without having to be prompted as to which abstraction is correct or without having to be shown how to do it in this situation.
Note the key phrases: Given a problem new to the student, he will apply the appropriate abstraction without having to be prompted. Thus, “application” is really a synonym for “transfer”.
In fact, the authors strongly assert the primacy of application/transfer of learning:
The fact that most of what we learn is intended for application to problem situations in real life is indicative of the importance of application objectives in the general curriculum. The effectiveness of a large part of the school program is therefore dependent upon how well the students carry over into situations applications which the students never faced in the learning process. Those of you familiar with educational psychology will recognize this as the age-old problem of transfer of training. Research studies have shown that comprehending an abstraction does not certify that the individual will be able to apply it correctly. Students apparently also need practice in restructuring and classifying situations so that the correct abstraction applies.
This is why UbD is what it is. In Application, problems must be new; students must judge which prior learning applies, without prompting or hints from scaffolded worksheets; and students must get training and practice in how to handle non-routine problems. We designed UbD, in part, backward from Bloom’s definition of Application.
As for instruction in support of the aim of transfer (and different types of transfer), the authors soberingly note this:
“We have also attempted to organize some of the literature on growth, retention, and transfer of the different types of educational outcomes or behaviors. Here we find very little relevant research. … Many claims have been made for different educational procedures…but seldom have these been buttressed by research findings.”
3. All the verbs listed under each level of the Taxonomy are more or less equal; they are synonyms for the level.
No, there are distinct sub-levels of the Taxonomy, in which the cognitive difficulty of each sub-level increases.
For example, under Knowledge, the lowest-level form is Knowledge of Terminology, where a more demanding form of recall is Knowledge of the Major Ideas, Schemes and Patterns in a field of study, and where the highest level of Knowledge is Knowledge of Theories and Structures (for example, knowing the structure and organization of Congress.)
Under Comprehension, the three sub-levels in order of difficulty are Translation, Interpretation, and Extrapolation. Main Idea in literacy, for example, falls under Interpretation since it demands more than “translating” the text into one’s own words, as noted above.
4. The Taxonomy recommends against the goal of “understanding” in education.
Only in the sense of the term “understand” being too broad. Rather, the Taxonomy helps us to more clearly delineate the different levels of understanding we seek:
To return to the illustration of the term “understanding” a teacher might use the Taxonomy to decide which of several meanings he intended. If it meant that the student was…aware of a situation…to describe it in terms slightly different from those originally used in describing it, this would correspond to the taxonomy category of “translation” [which is a sub-level under Comprehension]. Deeper understanding would be reflected in the next-higher level of the Taxonomy, “interpretation,” where the student would be expected to summarize and explain… And there are other levels of the Taxonomy which the teacher could use to indicate still deeper “understanding.”
5. The writers of the Taxonomy were confident that the Taxonomy was a valid and complete Taxonomy
No they weren’t. They note that:
“Our attempt to arrange educational behaviors from simple to complex was based on the idea that a particular simple behavior may become integrated with other equally simple behaviors to form a more complex behavior… Our evidence on this is not entirely satisfactory, but there is an unmistakable trend pointing toward a hierarchy of behaviors.”
They were concerned especially that no single theory of learning and achievement–
“accounted for the varieties of behaviors represented in the educational objectives we attempted to classify. We were reluctantly forced to agree with Hilgard that each theory of learning accounts for some phenomena very well but is less adequate in accounting for others. What is needed is a larger synthetic theory of learning than at present seems available.”
Later schemas – such as Webb’s Depth of Knowledge and the revised Taxonomy – do nothing to solve this basic problem, with implications for all modern Standards documents.
Why This All Matters
The greatest failure of the Common Core Standards is arguably to have overlooked these issues by being arbitrary/careless in the use of verbs in the Standards.
There appears to have been no attempt to be precise and consistent in the use of the verbs in the Standards, thus making it almost impossible for users to understand the level of rigor prescribed by a standard, and hence the levels of rigor required in local assessments. (Nothing is said in any documents about how deliberate those verb choices were, but I know from prior experience in New Jersey and Delaware that verbs are used haphazardly–in fact, writing teams start to vary the verbs just to avoid repetition!)
The problem is already on view: in many schools, the assessments are less rigorous than the Standards and practice tests clearly demand. No wonder the scores are low. I’ll have more to say on this problem in a later post, but my prior posts on Standards provide further background on the problem we face.
Update: Already people are arguing with me on Twitter as if I agree with everything said here. I nowhere say here that Bloom was right about the Taxonomy. (His doubts about his own work suggest my real views, don’t they?) I am merely reporting what he said and what is commonly misunderstood. In fact, I am re-reading Bloom as part of a critique of the Taxonomy in support of the revised 3rd edition of UbD in which we call for a more sophisticated view of the idea of depth and rigor in learning and assessment than currently exists.
Grading problems are one of the most urgent bugaboos of good teaching.
Grading can take an extraordinary amount of time. It can also demoralize students, get them in trouble at home, or keep them from getting into a certain college.
It can demoralize teachers, too. If half the class is failing, any teacher worth their salt will take a long, hard look at themselves and their craft.
So over the years as a teacher, I cobbled together a kind of system that was, most crucially, student-centered. It was student-centered in the sense that it was designed for them to promote understanding, grow confidence, take ownership, and protect themselves from themselves when they needed it.
Some of this approach was covered in Why Did That Student Fail? A Diagnostic Approach To Teaching. See below for the system–really, just a few rules I created that, while not perfect, went a long way towards eliminating the grading problems in my classroom.
Which meant students weren’t paralyzed with fear when I asked them to complete increasingly complex tasks they were worried were beyond their reach. It also meant that parents weren’t breathing down my neck ‘about that C-‘ they saw on Infinite Campus, and if both students and parents are happy, the teacher can be happy, too.
How I Eliminated (Almost) All Grading Problems In My Classroom
1. I chose what to grade carefully.
When I first started teaching, I thought in terms of ‘assignments’ and ‘tests.’ Quizzes were also a thing.
But eventually I started thinking instead in terms of ‘practice’ and ‘measurement.’ All assessment should be formative, and the idea of ‘summative assessment’ makes as much sense as ‘one last teeth cleaning.’
The big idea is what I often call a ‘climate of assessment,’ where snapshots of student understanding and progress are taken in organic, seamless, and non-threatening ways. Assessment is ubiquitous and always-on.
A ‘measurement’ is only one kind of assessment, and even the word implies ‘checking in on your growth’ in the same way you measure a child’s vertical growth (height) by marking the threshold in the kitchen. This type of assessment provides both the student and teacher a marker–data, if you insist–of where the student ‘is’ at that moment with the clear understanding that another such measurement will be taken soon, and dozens and dozens of opportunities to practice in-between.
Be very careful with what you grade, because it takes time and mental energy–both finite resources crucial to the success of any teacher. If you don’t have a plan for the data before you give the assessment, don’t give it, and certainly don’t call it a quiz or a test.
2. I designed work to be ‘published’
I tried to make student products–writing, graphic organizers, podcasts, videos, projects, and more–at the very least visible to the parents of students. Ideally, this work would also be published to peers for feedback and collaboration, and then to the public at large to provide some authentic function in a community the student cares about.
By making student work public (insofar as it promoted student learning while protecting any privacy concerns), the assessment is done in large part by the people the work is intended for. It’s authentic, which makes the feedback loop quicker and more diverse than one teacher could ever hope to make it.
What this system loses in expert feedback that teacher might be able to give (though nothing says it can’t both be made public and benefit from teacher feedback), it makes up for in giving students substantive reasons to do their best work, correct themselves, and create higher stands for quality than your rubric outlined.
3. I made a rule: No Fs and no zeroes. A, B, C, or ‘Incomplete’
First, I created a kind of no-zero policy. Easier said than done depending on who you are and what you teach and what the school ‘policy’ is and so on. The idea here, though, is to keep zeroes from mathematically ruining a student’s ‘final grade.’
I try to explain to students that a grade should reflect understanding, not their ability to successfully navigate the rules and bits of gamification stuffed into most courses and classrooms. If a student receives a D letter grade, it should be because they have demonstrated an almost universal inability to master any content, not because they got As and Bs on most work they cared about but Cs or lower on the work they didn’t, and with a handful of zeroes thrown in for work they didn’t complete ended up with a D or an F.
Another factor at work here is marking work with an A, B, C, or ‘Incomplete.’ Put another way, if the student didn’t at least achieve the average mark of C, which should reflect average understanding of a given standard or topic, I would mark it ‘Incomplete,’ give them clear feedback on how it could be improved, and then require them to do so.
4. I went over missing assignments frequently.
Simple enough. I had a twitter feed of all ‘measurements’ (work they knew that counted towards their grade), so they didn’t have to ask ‘what they were missing’ (though they did anyway). I also wrote it on the board (I had a huge whiteboard that stretched across the front of the classroom).
5. I created alternative assessments.
Early on in teaching, I noticed students saying, in different ways, that they ‘got it but don’t all the way get it.’ Or that they believed that they did, in fact, ‘get it’ but not the way the assessment required (reminder: English Lit/ELA is a highly conceptual content area aside of the skills of literacy itself).
So I'd create an alternative assessment to check and see. Was the assessment getting in the way–obscuring more than it revealed? Why beat my head against the wall explaining the logistics of an assignment or the intricacies of a question when the assignment and the question weren't the point at all? These were just 'things' I used the way a carpenter uses tools.
Sometimes it’s easier to just grab a different tool.
I’d also ask students to create their own assessments at times. Show me you understand. It didn’t always work the way you’d expect, but I got some of the most insightful and creative expression I’ve ever seen from students using this approach. As with most things, it just depended on the student.
6. I taught through micro-assignments.
Exit slips were one of the greatest things that ever happened to my teaching. I rarely used them as 'exit tickets' required to leave the classroom, but I did use them almost daily. Why?
They gave me a constant stream of data for said ‘climate of assessment,’ and it was daily and fresh and disarming to students because they knew it was quick and if they failed, another one would be coming soon.
It was a 'student-centered' practice because it protected them. They had so many opportunities and, mathematically, so many scores that unless they failed everything every day, they wouldn't 'fail' at all. And if they were failing, I could approach a single standard or topic from a variety of angles, complexities, Bloom's levels, and so on, which often showed that the student who 'didn't get it' last week more likely just 'didn't get' my question.
In other words, they hadn’t failed my assessment; my assessment had failed them because it had failed to uncover what they, in fact, knew.
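The arithmetic behind that protection is worth making explicit. With hypothetical numbers (these scores are invented for illustration, not taken from the article), compare one bad day among a few large grades to the same bad day among twenty micro-assignments:

```python
# Hypothetical illustration: why many small scores protect students.
# The same bad day (a zero) is devastating among three big grades
# but barely moves the average across twenty micro-assignments.

few_large = [85, 90, 0]                  # three big assignments, one zero
many_small = [85, 90, 0] + [80] * 17     # twenty micro-assignments, same zero

avg_few = sum(few_large) / len(few_large)       # about 58.3 -- a failing grade
avg_many = sum(many_small) / len(many_small)    # 76.75 -- a solid C

print(round(avg_few, 1), round(avg_many, 2))
```

One zero drops the three-grade average to failing, while the same zero among twenty daily scores still leaves the student in the C range.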
7. I used diagnostic teaching.
You can read more about diagnostic teaching, but the general idea is that I had a clear sequence I used and communicated very clearly to the students and their families. It usually took the first month or two for everyone to become comfortable with it all, but once they did, grading problems were *almost* completely eliminated. Problems still surfaced, but with a system in place, it was much easier to identify exactly what went wrong and why, and to communicate it all to the stakeholders involved in helping support children.
If the ultimate goal of education is for students to be able to answer questions effectively, then focusing on content and response strategies makes sense. If the ultimate goal of education is to teach students to think, then focusing on how we can help students ask better questions themselves might make sense, no?
Why Questions Are More Important Than Answers
The ability to ask the right question at the right time is a powerful indicator of authentic understanding. Asking a question that pierces the veil in any given situation is itself an artifact of the critical thinking teachers so desperately seek in students, if for no other reason than it shows what the student knows, and then implies the desire to know more.
Asking a question (using strategies to help students ask better questions, for example) is a sign of understanding, not ignorance; it requires both knowledge and then–critically–the ability to see what else you’re missing.
Questions are more important than answers because they reflect understanding and curiosity in equal portions. To ask a question is to see both backward and forward–to make sense of a thing and what you know about it, and then extend outward in space and time to imagine what else can be known, or what others might know. To ask a great question is to see the conceptual ecology of the thing.
In a classroom, a student can see a drop of water, a literary device, a historical figure, or a math theorem, but on their own, these are worthless fragments. A student in biology studying a drop of water must see the water as infinitely plural–as something that holds life and something that gives life.
As a marker of life, and an icon of health.
It is a tool, a miracle, a symbol, and a matter of science.
They must know what’s potentially inside a drop of water and how to find out what’s actually inside that drop of water.
They must know what others have found studying water and what that drop of water means within and beyond the field of science.
They must know that water is never really just water.
Teacher Questions vs Student Questions
When teachers try to untangle this cognitive mess, they sacrifice personalization for efficiency. There are too many students, and too much content to cover, so they cut to the chase.
Which means they tend toward the universal over the individual–broad, sweeping questions intermingling with sharper, more concise questions that hopefully shed some light and cause some curiosity. In a class of 30 with an aggressively-paced curriculum map and the expectation that every student master the content regardless of background knowledge, literacy level, or interest in the material, this is the best most teachers can do.
This is only a bottleneck, though, when the teacher asks the questions. When the student asks the question, the pattern is reversed. The individual student has little regard for the class's welfare, especially when forming questions. They're on the clock to say something, anything. Which is great, because questions–when they're authentic–are automatically personal because the student came up with them. They're not tricks or guess-what-the-teacher's-thinking.
A student couldn't possibly capture the scale of confusion or curiosity of 30 other people; instead, they survey their own thinking, spot both gaps and fascinations, and form a question. This is the spring-loading of a Venus flytrap. The topic crawls around in the student's mind innocently enough, and when the time is right–and the student is confident–the trap snaps shut. Once a student starts asking questions, the magic of learning can begin.
And the best part for a teacher? Questions reveal far more than answers ever might.
The Purpose of Questions
Thought of roughly as a kind of spectrum, four purposes of questions might stand out, from more “traditional” to more “progressive.”
(More Traditional) Academic View
In a traditional academic setting, the purpose of a question is to elicit a response that can be assessed (i.e., answer this question so I can see what you know).
(Less Traditional) Curriculum-Centered View
Here, a ‘good question’ matters more than a good answer, as it demonstrates the complexity of student understanding of a given curriculum.
(More Progressive) Inquiry View
Here, questions serve as confusion or curiosity markers that suggest a path forward for inquiry, and are then iterated and improved based on learning. (Also known as question-based learning.)
(More Progressive Still) Self-Directed View
In a student-centered circumstance, a question illuminates possible learning pathways forward irrespective of curriculum demands. The student’s own knowledge demands–and their uncovering–center and catalyze the learning experience.
To be a little more abstract, a good question causes thinking–more questions. Better questions. It clarifies and reveals. It causes hope. A bad question stops thinking. It confuses and obscures. It causes doubt.
The Relative Strengths of Questions
Good questions can reveal subtle shades of understanding–what this student knows about this topic in this context
Questions promote inquiry and learning how to learn over proving what you know
Questions fit in well with the modern “Google” mindset
Used well, questions can promote personalized learning as teachers can change questions on the fly to meet student needs
The Relative Weaknesses of Questions
Questions depend on language, which means literacy, jargon, confusing syntax, academic diction, and more can all obscure the learning process
Accuracy of answers can be overvalued, which makes the confidence of the answerer impact the quality of the response significantly
“Bad questions” are easy to write and deeply confusing, which can accumulate to harm a student’s sense of self-efficacy, as well as their tendency to ask them on their own
7 Common Written Assessment Question Forms
Questions as written assessment (as opposed to questions as inquiry, questions to guide self-directed learning, or questions to demonstrate understanding) most commonly take the following forms in writing:
Matching
True/False
Multiple Choice
Short Answer
Diagramming
Essay
Open-Ended
Questioning In The Classroom & Self-Directed Learning
For years, questions have guided teachers in the design of units and lessons in classrooms, often through the development of essential questions that all students should be able to reasonably respond to and that can guide their learning of existing and pre-mapped content.
In the TeachThought Self-Directed Learning Model, learners are required to create their own curriculum through a series of questions that emphasize self-knowledge, citizenship, and communal and human interdependence. In this model, existing questions act as a template to uncover potential learning pathways.
Cognitive Dissonance is the cognitively-uncomfortable act of holding two seemingly competing beliefs simultaneously. If a student believes that Freedom of Speech is the foundation of democracy but is then presented with a perspective that challenges that belief (through Socratic-style questioning from the teacher, for example), they arrive at a crossroads where they have to adjust something–either their belief or their judgment about the validity of the question itself.
In this way, questions can promote Cognitive Dissonance, meaning a good question can change a student's mind, beliefs, or tendency to examine their own beliefs. Questions, cognition, and self-reflection go hand-in-hand.
The Role of ‘Lower-Level’ Questions in the Classroom
Lower-level questions inquire at ‘lower levels’ of various learning taxonomies.
These are often ‘recall’ questions that are based in fact—definitions, dates, names, biographical details, etc. Education is thought to have focused (without having been there, who knows for sure?) on these lower levels, and ‘low’ is bad in academics, right? ‘Lower-level’ thinking implies a lack of ‘higher-level’ thinking, so instead of analyzing, interpreting, evaluating, and creating, students are defining, recalling, and memorizing, the former of which make for artists and designers and innovators, and the latter of which make for factory workers.
And that part, at least, is (mostly) true. Recall and memorization aren’t the stuff of understanding, much less creativity and wisdom, except that they are. Bloom’s Taxonomy was not created to segregate ‘good thinking’ from ‘bad thinking.’ In their words, “Our attempt to arrange educational behaviors from simple to complex was based on the idea that a particular simple behavior may become integrated with other equally simple behaviors to form a more complex behavior.” In this way, the taxonomy is simply one way of separating the strands of thinking like different colored yarn–a kind of visual scheme to see the pattern, contrasts, and even sequence of cognitive actions.
Nowhere does it say that definitions, names, labels, and categories are bad–and if it did, we'd have to wonder about the taxonomy rather than assuming that they were. It doesn't take much imagination to see that if a student doesn't know there was a war, that it was fought in the United States in the 1800s, that it was purportedly over states' rights, and that culture, industry, and agriculture all impacted the hows, whens, and whys of the war, then 'higher-level thinking strategies' aren't going to be very useful.
In short, lower-level questions can illuminate and establish foundational knowledge to build a more complex and nuanced understanding of content. They provide a foothold for thinking. To further the point, in 5 Common Misconceptions About Bloom’s Taxonomy, Grant Wiggins explains that the phrases ‘higher-order’ and ‘lower-order’ don’t appear anywhere in the taxonomy.
Essential Questions in the Classroom
Grant Wiggins defined an essential question as “broad in scope and timeless by nature. They are perpetually arguable.”
Examples of Essential Questions
What is justice?
Is art a matter of taste or principles?
How far should we tamper with our biology and chemistry?
Is science compatible with religion?
Is an author’s view privileged in determining the meaning of a text?
A question is essential when it:
causes genuine and relevant inquiry into the big ideas and core content;
provokes deep thought, lively discussion, sustained inquiry, and new understanding as well as more questions;
requires students to consider alternatives, weigh evidence, support their ideas, and justify their answers;
stimulates vital, ongoing rethinking of big ideas, assumptions, and prior lessons;
sparks meaningful connections with prior learning and personal experiences;
naturally recurs, creating opportunities for transfer to other situations and subjects.
9. Think-Pair-Share
Description: Think-Pair-Share is a collaborative learning strategy that promotes discussion and allows students to share their thoughts and questions with a partner before sharing with the larger group.
Process
Think: Pose a thought-provoking question or problem related to the lesson. Give students a few minutes to think about their responses individually.
Pair: Have students pair with a partner to discuss their thoughts and questions. Encourage them to come up with additional questions during their discussion.
Share: Pairs share their questions and ideas with the class. This can be done by having each pair present their most interesting question or facilitating a larger group discussion where pairs contribute to a growing list of questions.
Follow-Up: Use the questions generated from the Think-Pair-Share activity to guide further inquiry, research projects, or class discussions.
10. Wonder Wall
Description: A Wonder Wall is a dedicated space in the classroom where students can post questions that come to mind during lessons, discussions, or independent activities. It is a visual and interactive tool to foster a culture of inquiry.
Process
Create the Space: Designate a section of a wall or a bulletin board as the Wonder Wall. Provide sticky notes, markers, and a way for students to add questions easily.
Introduce the Concept: Explain to students that the Wonder Wall is a place for them to post any questions about the topics being studied or other related curiosities. Encourage them to write their questions on sticky notes and place them on the wall.
Regularly Review and Address Questions: Set aside time each week to review the questions on the Wonder Wall. Select a few questions to investigate further as a class or to incorporate into future lessons and activities.
Encourage Peer Interaction: Allow students to read and respond to their peers’ questions on the Wonder Wall. They can add comments, suggestions, or additional questions, creating a collaborative and dynamic learning environment.
Integrate into Curriculum: Use the questions from the Wonder Wall to guide inquiry-based projects, research assignments, or class discussions. This ensures that student curiosity directly influences learning and keeps students engaged.
A Guide To Questioning In The Classroom; image attribution flickr user flickeringbrad
Bloom's Taxonomy was a remarkable attempt to create a system of learning that focuses on how people learn and organizes content around those natural aptitudes.
Created by Benjamin Bloom in 1956, Bloom’s Taxonomy offered a method and structure to think about thinking. Below, we’ve collected a list of blog posts, apps, tools, videos, and strategies to help educators become more proficient with the system.
What about the most popular trends in education heading into 2024 specifically? Well, that’s a tricky question.
Deciding what's 'trending' is an important part of digital publishing and social media interaction. Facebook articles, Google News, Apple News, trending hashtags on Twitter, and even our own TeachThought website all depend heavily on statistics.
It’s easy to have a problem with this concept philosophically–namely, the most popular isn’t always the most effective or the ‘best.’ So this post isn’t about the most innovative trends, most exciting trends, or most effective trends, but rather the most popular trends in innovative education insofar as we can see from our necessarily limited data and individual perspective.
The requirements?
1. It must be ‘popular.’
2. And it must involve some kind of ‘innovation’ or growth in education.
How We Measured
So how did we measure the ‘most popular trends’ in education?
We basically took four quantifiable data points and combined them with fallible but hopefully useful good old-fashioned human awareness and recognition. The result is four objective measures and one subjective 'sense of things.' We then combined them to create a 'score' on a scale of 1 through 10, where 10 is the highest.
5 Data Points To Identify The Most Popular Trends In Education
1. Popular search engine data (e.g., Google, Bing, etc.) [objective]
2. TeachThought search data [objective]
3. Traffic and search trends within and across popular education websites [objective]
4. Social media metrics [objective]
5. TeachThought editorial impression [subjective]
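The combination of the five data points above can be sketched in code. The article doesn't specify how the measures were weighted, so this sketch simply averages them with equal weight; both that weighting and the assumption that each input is already on a 1-10 scale are mine, for illustration only:

```python
def trend_score(objective, subjective):
    """Combine four objective measures and one subjective impression
    into a single score on a 1-10 scale.

    Assumes each input is already normalized to 1-10; the equal
    weighting here is an illustrative assumption, not the article's
    actual method.
    """
    assert len(objective) == 4, "expects exactly four objective measures"
    values = list(objective) + [subjective]
    return sum(values) / len(values)
```

For example, a trend scoring 8, 9, 7, and 8 on the objective measures and 8 on editorial impression would average out to 8.0.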
Based on our search database trends, analytics data on content, and a decidedly unscientific but daily skimming of industry chatter, press releases, peer content, internal dialogue, and social media usage, here are–in light of the above–the 12 most popular trends in innovative education for 2024.
For each, we’ve also shared one of our top bits of content to get you started reading. Also, feel free to use search to find and share additional resources as well.
This clearly has plenty of inherent bias built-in, is decidedly non-scientific, and is nowhere close to exhaustive. Take it with a grain of salt as one sampling of modern trends in ‘western’ K-12 public education.
30 Of The Most Popular Trends In Education For 2024
Related Topics: Personalized Learning, Game-Based Learning, Adaptive Learning Algorithms
Other popular trends in education: self-directed learning, alternatives to letter grades, artificial intelligence, micro-education, modular education, sociocultural/socioeconomic equity, flipped classroom (see also blended learning), scenario-based learning, adaptive learning algorithms, BYOD/BYOT, social media in the classroom, digital portfolios