Some reordering and the question of what to automate in education
In engineering, a project that grows in scope as it progresses is said to suffer from "scope creep". This is how the Big Dig in Boston came in 5x over the original estimates, and why the high-speed rail between San Francisco and Los Angeles will likely never come to fruition. These are spectacular examples of how good we've gotten at scope creep as a society, and they underscore why it's so hard to build anything these days.
Our scope creep isn’t just contained to building things though.
Over the last few decades, the job of teaching has had its own version of scope creep. Evolving standards. More and more testing. Mounting grading and assessment demands. Growing class sizes. Emergent behavioral challenges driven by societal and technological change. COVID. All of this has increased the scope of being a teacher - and not by a little. It was always a hard job, but now it's a reeaaallly hard job.
Some say that it is now an unsustainable job.
And so it is logical, even desirable, to look at AI as a way to automate some aspects of teaching. More than anything, teachers need to save time - especially on what you could consider busy work: paperwork, grading, and so on. AI is arriving exactly when they need the help most.
This is why we're advocating for teacher adaptability as this opportunity arrives. It's why we advocate for tools that do this. And it is why we are developing our own tools to help motivate change through community and reflection. For teaching to be the thriving profession we need, we need to start adapting the job with AI, and with some urgency.
But this isn’t the only reason.
AI, unfortunately, brings its own version of scope creep to the job of teaching. How it does this is only starting to emerge. In one respect, AI seems to compensate for differences in cognitive ability, and its impact on jobs has been minimal so far. That would almost imply the bar could be lowered in education. But this is likely only because the agentic aspects of AI have yet to come to fruition - and that probably isn't too far away. AI already demonstrates some of these capabilities, so it's just a matter of tuning or packaging before its impacts are felt at some level.
So for future generations to have agency, job prospects, and to participate in society constructively, educators need to aim higher than ever.
We’ve been breaking down what this means in our last few posts in the context of writing, creativity, and curiosity. We’ll do it again today in the context of some of the more familiar models educators know and leverage most.
Bloom's Taxonomy is a good place to start. It's the OG of higher order thinking - or as educators have come to call it, HOTS. While it has flaws worthy of critique, as all models do, it's still been a highly useful one - especially in recent years - as a way to respond to the threat of job automation. It provided a clear path for where to aim skill development, on the assumption that automation could only do repetitive, knowledge-based things. But of course, we now know that assumption was wrong: everything Bloom's Taxonomy outlines can be automated. AI can already create, analyze, and evaluate well beyond the average human.
Some additions to Bloom’s model would help explain this scope creep.
In the first addition, the level of creation, analysis, and evaluation needs to be extended to a new level. The goal can no longer be to create, analyze, and evaluate at the level your class or community values, but to create, analyze, and evaluate at the level of all of society, across all of history. This sounds insane at first when you say it, of course. But once you realize this is about leveraging AI to its fullest and pushing its boundaries - it can start sounding less absurd. These skills are likely very different from HOTS. There is radically more tinkering, observation and failure required, and it surely will require new methods of teaching in support.
The second addition is the metacognitive layer. Because pretty much all of cognition is already automated, and shortcuts to learning and doing hard things will abound, we'll need radically more self-awareness about personal growth to have the motivation to learn. And we still need to learn. We need to know things and be able to do things without AI to be fully human. And we need to know and do things to be able to leverage AI itself fully, especially given its flaws. This is how we can detect errors, hallucinations, and bias. Even more critically, this metacognitive layer helps us tackle challenges that AI doesn't simplify down to what is stimulating and instantly motivating - which, to my way of thinking, is the key to keeping humans involved in forward progress.
To provide some name distinction with existing HOTS, these two layers could be called EHOTS (Even Higher Order Thinking Skills). This has the added benefit of making them sound cool and vaguely cutting edge too.
There's another aspect of scope creep that isn't obvious at first glance. Once you integrate metacognition further into all levels, it includes self-regulation of thinking and doing. It's not just about having strong narratives for doing what we do. A lot of other models get rolled up into that metacognitive peak: Maslow's, Bloom's Affective and Psychomotor, SEL, etc. To some degree, I think we're finding this change in how we think about things is already necessary - it is why attention and behavior are becoming such a huge challenge, and why prioritizing SEL and self-regulation has become so critical to operating schools today.
One way to consider this needed change is that it is a re-integration of these disparate models, or aspects of these models, into human development. We can likely no longer look at them as separate things.
Bloom's isn't the only model for higher order thinking, either. The 21st Century Skills movement is also worthy of some inclusion; it was itself developed to respond to the threat of automation. Yet at a basic level, it too emphasizes skills that have been automated and offer limited comparative advantage for humans over AI. AI is already more creative, collaborative, and communicative than the average human. The last of these is particularly concerning with respect to jobs, as preferences are emerging for AI to replace humans precisely because of these abilities. Worse, it is far more productive and flexible than pretty much all of us. For this model to remain relevant, each skill needs to be adjusted to an even higher level, and wrapped up in metacognition to help learners understand why they are so important.
Perhaps renaming can help again - something like Mid-Century Skills might do the trick here, and again, it is kinda fun.
What Should Be Automated
So what do all these modified orders tell us about what to automate in education?
One simplistic logic you could apply is that if AI can perform a skill, it can likely teach and assess it quite well too. It's also well established that AI can personalize instruction - at least in the way personalization is commonly understood - across large classes better than a human ever could. Given that this covers pretty much all of (unmodified) Bloom's, and all the traditional subjects of math, science, English, etc. within it, this strategy would really free teachers to focus on EHOTS. And in some ways, this would be a good thing - as we mentioned, there is real urgency here.
But this is obviously not possible - nor, surely, even desirable. We need teachers' human powers of observation at all levels of education. We need their help to personalize in ways that reflect the whole human and the whole context, not just provide an adaptive learning experience. So some lower order things should be taught by and with humans, especially when teachers are deeply passionate about the subject. The interest and choice of teachers should be reflected in these decisions too.
But maybe what we can do is gain back enough time to focus teachers on these elevated levels of cognition. We need to figure out how to teach these new skills. It will require new methods of instruction. And it will very much require a knowledgeable human to help students find the limits of AI.
An interesting opportunity opens up when you look at the metacognitive layer, though. As noted before, AI can aid metacognition when prompted properly. This presents the opportunity to automate the development of many aspects of metacognition and self-regulation - a promising insight, especially given the magnitude of the task. To produce radically more self-awareness, we need radically more reflection, and a coach that helps you be more aware than ever before. We need a coach that helps you understand how you feel and helps you generate more and more motivating goals. Practically speaking, this also might be one of the best ways to prove learning occurred (i.e., that no shortcuts were taken). The scale of this would likely be impossible for teachers to address. And AI can do this, with appropriate shaping of its goal. So automating it is really our only hope, and this has us thinking about how to do it in our technology development plans.
There are many answers to the question of what to automate, and very likely, many good answers. The key for us as humans, though, is adaptability. As long as we can find a way to be adaptable, we can keep scope creep under control and meet the needs of our very near future.
The Optima List
The best possible list of opportunities
📝 AI Adaptability Webinar Series
The arrival of creative AI is putting immediate pressure on individuals and organizations to become more adaptable than ever. We can no longer try to avoid new technology and trends — especially when it comes to professional learning.
As part of Swivl’s Adaptability Initiative, we’re featuring leading AI thought leaders and innovative thinkers to support educators who are creating learning communities within their organizations to foster continuous adaptability.
Our exclusive webinars began last week and will continue through the end of the year and beyond. Upcoming this week are Laura Ebersole and Adeel Khan, featured below. Visit our webinar series information page to see the full line-up as sessions are added, and to register for all of them.
🗣️ Doing Better this School Year, According to Our Podcast Guests
Last week, for our 20th episode, we celebrated the voices of our past guests and our collective vision for the future by answering the question:
How can we help young people reach the even higher order capabilities that are now needed to thrive in the world they will be entering? Or rather: now that we are permanently living and working alongside AI, how are we going to do better?
Through responses from past guests, we delve into the topic of building an optimal future with AI and how we can assist young people in developing the skills needed to thrive in such a world. We emphasize the importance of self-reflection, intentionality, and living in alignment with our core values. We also stress the value of providing clear, non-judgmental feedback, and acknowledge the evolving nature of technology and the need for educators to embrace AI. Overall, this journey requires that we recognize the importance of constant reflection, adaptation, and fostering curiosity in students. Connecting with communities and developing partnerships with industry and business are also vital for preparing students for this now unpredictable world. Our goal is to support youth in finding their passion and curiosity, while also harnessing the potential of digital technology and AI.
You can listen at the Spotify player below, right here in our Substack, or anywhere else you like to listen to podcasts.