
I spent this semester writing a paper for my graduate program on something that’s been quietly nagging at me: the gap between what we know about how learning works and how we design the digital environments meant to support it.
The more I read, the harder it became to ignore an uncomfortable conclusion. Many of the features we celebrate in learning management systems, like their efficiency, their analytics, and their tidy automation, aren’t pedagogically neutral. They quietly shape what learning becomes. And often, not for the better.
Here’s the core idea, as briefly as I can put it.
Learning requires risk. Our systems are built to remove it.
James Zull, a neuroscientist who’s written extensively about how brains change through learning, makes a deceptively simple point: real learning is the literal physical reorganization of neural pathways. That kind of change doesn’t happen in comfort. It happens when learners encounter ideas that don’t fit their current understanding, when they sit with confusion, and, crucially, when they test their ideas in conditions where they might be wrong.
Roger Schank, a cognitive scientist, makes the case even sharper: people learn by doing, failing, and figuring out why. Failure isn’t a bug in the learning process. It’s the engine.
Now look at what most LMS-based environments are designed to do.
1. Automated grading rewards correctness, not thinking.

Auto-graded assessments are particularly effective for vocabulary or procedural fluency. But they’ve quietly become the default for almost everything. The problem? To automate grading, you need predetermined right answers. That requirement narrows what kinds of questions get asked, and over time, what kinds of thinking are practiced. Students learn to optimize for the answer the system expects, not to construct and defend their ideas.
Carol Dweck’s research on growth mindset suggests the issue matters more than we realize: when systems consistently punish wrong answers, they cultivate exactly the kind of fixed-mindset orientation that suppresses real learning.
2. Public visibility raises the social cost of being wrong.

Discussion boards sound like a fantastic idea: make students post publicly, require engagement, and expose them to their peers' thinking. But the rational strategy in a graded, archived, instructor-visible forum isn't to share an underdeveloped thought and see how it holds up. It's to wait, watch what gets approved, and produce something safe.
Immordino-Yang and Damasio’s research on emotion and learning is relevant here: the brain doesn’t engage in deep, exploratory thinking when it perceives social threats. Public discussion forums often produce careful performances rather than honest intellectual risk.
3. Behavioral surveillance trades intrinsic motivation for compliance.

This one is the most insidious. Modern LMS platforms track everything: time on page, login frequency, click patterns, and scroll behavior. It’s framed as a tool to identify struggling students. And occasionally it is.
But it’s also a panopticon. Foucault’s old observation still applies: what makes surveillance powerful isn’t being constantly watched; it’s knowing you might be. Students start to perform engagement rather than engage in it. They log in on schedule, click through at a pace that registers as “active,” and post within the prescribed window. Schank’s point about intrinsic motivation gets buried: the system has trained them to chase external signals rather than follow their curiosity.
What gets lost is agency.
The cumulative effect of these design choices isn't that students learn nothing. It's that they learn to navigate the system instead of wrestling with the subject. Strategic compliance quietly squeezes out the exploratory, risk-tolerant engagement that deep understanding requires. And the habits of mind being displaced (comfort with uncertainty, willingness to fail, the ability to direct one's own inquiry) are exactly the ones we most need students to carry with them.
This isn’t a call to abandon digital learning. It’s a call to design it better.
A few things would help:
- Build low-stakes spaces where students can think provisionally without being graded for it.
- Redesign discussion structures to lower, rather than raise, the social cost of being wrong.
- Rethink what data we collect. Time on page measures the performance of learning, not learning itself.
Most fundamentally, we need to stop designing environments that manage learning and start designing ones that support it. Managing implies control and standardization. Supporting implies responsiveness, flexibility, and a willingness to let the process be somewhat messy, because that messiness is, as Zull and Schank would both insist, where the learning happens.
Education that consistently rewards strategic compliance over genuine engagement doesn’t just produce shallow learning. It shapes the kind of thinkers we become. That feels like a higher standard than we usually hold our edtech to.
It’s also the right one.

If you want to go deeper: James Zull, The Art of Changing the Brain; Roger Schank, Teaching Minds; Carol Dweck, Mindset; and Immordino-Yang & Damasio, “We Feel, Therefore We Learn.” And if you’re feeling ambitious, Foucault’s Discipline and Punish.
Curious to hear from other educators, instructional designers, and edtech folks: are you seeing this in the systems you work with? What’s working to push back?