A Dynamical Systems Interpretation of How Productive Failure Promotes Novelty in Problem-Solving (Speculative)

Productive Failure (Kapur, 2012) is an instructional strategy where learners engage in problem solving before receiving formal instruction. The failure refers to the likely unsuccessful outcomes of these initial attempts. The productive quality arises because this phase of struggle has been empirically shown to improve later learning and the ability to transfer knowledge to new problems (Güreş et al., 2025; Sinha & Kapur, 2021). Beyond better recall, evidence suggests this approach may specifically enhance creativity and novel problem solving (Schwartz & Martin, 2004; Loibl et al., 2017). This raises a central question: why would a period of difficulty and error cultivate novel thinking?

We offer a speculative interpretation using concepts from dynamical systems theory. A dynamical system model describes how a state changes over time according to fixed rules. We can model a learner’s understanding as a state vector $ x(t) $ within a high-dimensional cognitive space. The evolution of this state is governed by a differential equation: $ \dot{x} = f(x, t, u, P) $. Here, the function $ f $ defines the dynamics. The parameters $ P $ characterize the system itself; in this setting they might include the learner’s cognitive architecture and physical features, among other traits. These parameters shape the system’s inherent dynamical repertoire. The input $ u(t) $ represents external influences, such as instructional guidance or feedback.
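To make the state-evolution equation concrete, here is a minimal numerical sketch of $ \dot{x} = f(x, t, u, P) $ using forward-Euler integration. The specific form of $ f $ (a linear pull toward a target state), the parameter names, and all constants are illustrative assumptions, not claims about real cognition.

```python
import numpy as np

def f(x, t, u, P):
    """Toy cognitive dynamics: a gradient-like pull toward a target state
    (strength set by the parameter dict P) plus an external input u(t).
    Every name and functional form here is a hypothetical placeholder."""
    return -P["gain"] * (x - P["target"]) + u(t)

def simulate(x0, u, P, dt=0.01, steps=1000):
    """Forward-Euler integration of dx/dt = f(x, t, u, P)."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for k in range(steps):
        x += dt * f(x, k * dt, u, P)
        trajectory.append(x.copy())
    return np.array(trajectory)

P = {"target": np.array([1.0, 1.0]), "gain": 2.0}
no_input = lambda t: 0.0  # no instructional input u in this run
traj = simulate([0.0, 0.0], no_input, P)
print(traj[-1])  # the state has moved close to the target [1, 1]
```

Any standard ODE integrator would serve equally well; Euler stepping is used only to keep the sketch self-contained.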

Within such a system, an attractor is a region of the state space toward which nearby trajectories tend to converge over time. A solution concept or a well-practiced procedure can be thought of as a cognitive attractor. Asymptotic stability is the property of such an attractor where trajectories starting sufficiently close to it will approach it increasingly closely as time goes to infinity, and will remain nearby if slightly disturbed. Instruction typically aims to make correct solution states asymptotically stable attractors.
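Asymptotic stability can be exhibited numerically. In the hypothetical one-dimensional gradient system below, a state that starts near the fixed point at 1.0 gets monotonically closer to it as time increases. The dynamics and constants are assumptions chosen only to illustrate the property.

```python
def step(x, dt=0.01, target=1.0, gain=2.0):
    # One Euler step of gradient dynamics with a single
    # asymptotically stable fixed point at `target`.
    return x + dt * (-gain * (x - target))

def distance_after(x0, steps):
    """Distance from the fixed point after `steps` updates from x0."""
    x = x0
    for _ in range(steps):
        x = step(x)
    return abs(x - 1.0)

# A trajectory starting near the attractor approaches it ever more
# closely: the distance keeps shrinking as time goes on.
d_early = distance_after(1.5, 100)
d_late = distance_after(1.5, 1000)
print(d_early, d_late)
```

The same check repeated for several perturbed starting points would trace out the attractor's basin.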

In a conventional instruction-first sequence, a learner begins from an initial state, denoted $ x_0 $. Instruction $ u $ is applied early, modifying the dynamics $ f $ so that a target solution becomes a strongly asymptotically stable attractor. Learner trajectories then converge along similar, efficient paths into this single attractor basin. The learned cognitive landscape is optimized for this one route.

Productive Failure inverts this sequence. First, learners attempt the problem without the guiding input $ u $. This unguided exploration acts as a significant perturbation to the cognitive dynamics. Without instruction shaping the vector field toward a specific attractor, the learner’s state evolution is not dominated by an external force. The initial state $ x_0 $ branches through autonomous evolution into a diverse set of states $ \{s_1, s_2, \dots, s_N\} = \{s\}_N $. Each state $ s_i $ represents a distinct, often incorrect, hypothesis or mental model formed during exploration. This branching is a structured divergence that broadly samples the problem space, including regions near the correct solution and its boundaries.
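A toy simulation of this unguided phase, under the strong simplifying assumption that exploration without input $ u $ can be caricatured as a noisy random walk with no attractor pull: the same initial state $ x_0 $ branches into scattered end states $ s_1, \dots, s_N $.

```python
import numpy as np

rng = np.random.default_rng(0)

def explore(x0, steps=200, dt=0.05):
    """Unguided evolution: no instructional input u, only randomly
    perturbed intrinsic dynamics. A deliberate caricature."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        # Exploratory noise only; no pull toward any target state.
        x += dt * rng.normal(size=x.shape)
    return x

x0 = np.zeros(2)
states = np.array([explore(x0) for _ in range(8)])  # s_1 .. s_N
spread = states.std(axis=0).mean()
print(spread)  # the N end states are scattered, not clustered at x_0
```

Real exploration is of course structured rather than random; the sketch shows only the divergence of end states from a common start.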

After this diversification, the instructional input $ u $ is applied. This instruction embeds information that directs the dynamics $ f $. It acts not on a single starting point, but on each distinct state $ s_i $ in the set $ \{s\}_N $. The reshaped dynamics now make the correct solution an asymptotically stable attractor from each of these varied starting points.

Following instruction, each state $ s_i $ converges asymptotically toward the target attractor along its own trajectory $ \gamma_i(t) $. Because the starting points $ \{s\}_N $ were scattered, the resulting $ N $ flows collectively cover a much larger volume of the cognitive state space than a single trajectory from a common start $ x_0 $. All of these flows end at the attractor, but along the way they also map the boundaries between solutions and non-solutions, explore adjacent regions, and chart a richer topology of basins and ridges around the core concept.
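The coverage claim can be illustrated with a crude numerical proxy: count the grid cells visited by $ N $ guided trajectories from scattered starts versus one trajectory from a single common start. The dynamics, start distributions, and grid resolution are all illustrative assumptions.

```python
import numpy as np

def converge(s, target, gain=2.0, dt=0.01, steps=500):
    """Guided dynamics after instruction: asymptotic pull to the target."""
    x = np.array(s, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        x += dt * (-gain * (x - target))
        path.append(x.copy())
    return np.array(path)

target = np.array([0.0, 0.0])
rng = np.random.default_rng(1)
scattered = rng.uniform(-2, 2, size=(8, 2))   # diverse states s_1 .. s_8
single = np.array([[1.0, 1.0]])               # one common start x_0

def cells_visited(paths, res=0.25):
    """Count distinct grid cells touched: a crude coverage measure."""
    pts = np.vstack(paths)
    return len({tuple(c) for c in np.floor(pts / res).astype(int)})

pf_cov = cells_visited([converge(s, target) for s in scattered])
di_cov = cells_visited([converge(s, target) for s in single])
print(pf_cov, di_cov)  # the scattered starts cover more cells
```

The grid-cell count is only a stand-in for the richer notion of mapped basin boundaries, but it captures the volume contrast between one flow and $ N $ flows.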

In dynamical terms, Productive Failure works by first expanding exploration through perturbation-driven divergence $ (x_0 \to \{s\}_N) $, and then structuring asymptotic convergence through instruction-guided attraction from each $ s_i $. The resulting cognitive landscape is a network of attractors with well-explored boundaries. When confronted with a novel problem, a point in state space outside any trained basin, the learner’s cognitive dynamics are not confined to one habitual pathway. The system can access multiple basins, traverse boundaries mapped during the divergent phase, and recombine pathways. This expanded coverage of boundary regions and the availability of multiple convergent histories provide a dynamical substrate for analogical transfer and the generation of novel solutions. The initial attempts and failures are productive because they force a broader, more diverse exploration of the cognitive geometry, which subsequent instruction then organizes into a robust, navigable, and generatively rich landscape for thinking.

References

Güreş, F. B., Nazaretsky, T., Radmehr, B., Rau, M., & Käser, T. (2025). How Instructional Sequence and Personalized Support Impact Diagnostic Strategy Learning. arXiv:2507.17760.

Kapur, M. (2012). Productive failure in learning the concept of variance. Instructional Science, 40(4), 651–672.

Loibl, K., Roll, I., & Rummel, N. (2017). Towards a theory of when and how problem-solving before instruction supports learning. Educational Psychology Review, 29(4), 693–715.

Schwartz, D. L., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129–184.

Sinha, T., & Kapur, M. (2021). When problem solving followed by instruction works: Evidence for productive failure. Computers & Education: Artificial Intelligence, 2, 100017.