Field Notes

Ideas worth thinking with.

Each note highlights a single principle, claim, or argument from the learning-engineering literature — with direct quotes, a pointer to the original, and a question for you to sit with.

Grounding. Every note cites a named source we've read. No practitioner-opinion filler, no fabricated examples. If a note can't point to a published piece, it doesn't belong here.

process · icicle · framework

ICICLE's five-phase learning engineering process

The closest thing the field has to an official process definition. Five phases, one diagram, and an explicit claim that they run concurrently rather than sequentially. Worth reading directly, not through someone else's gloss.

If you need a single canonical answer to “what is the learning engineering process,” ICICLE has published one. It’s on their Learning Engineering Process page, and it’s short enough to read in five minutes.

The five phases, verbatim

ICICLE frames LE as applying:

“the learning sciences using human-centered and engineering design methodologies and [iterative] data-informed decision making to support learners.”

The process itself has five phases:

  1. Start with the Challenge — identify learner and learning problems within specific contexts.
  2. Create the Solution — design, instrument data collection, and iterate with stakeholders.
  3. Implement the Solution — deploy in authentic settings while gathering data.
  4. Investigate with Data — analyze outcomes to assess effectiveness.
  5. Iterate Continuously — refine throughout the entire process.

The claim worth sitting with

The page includes a diagram titled Learning Engineering Process with Iterative Design-Build. It explicitly argues that the five phases run concurrently, not sequentially — accommodating “both well-defined and complex, ill-defined problem spaces.”

That’s not a small claim. Most process diagrams in adjacent fields (ADDIE, design thinking) imply a linear progression, even when practitioners know better. ICICLE’s diagram says the quiet part out loud: in real work, you’re almost always in multiple phases at once, and the field’s process definition should acknowledge that directly.

A question for the reader

If you wrote down what you actually did on your last project, which phase did you spend the most time in — and did your team’s process document say the same? Often the gap between the two is where the useful improvement lives.

The source

  • Primary: IEEE ICICLE, Learning Engineering Process, sagroups.ieee.org/icicle/learning-engineering-process/
  • Background: the process is elaborated chapter-by-chapter in Goodell & Kolodner (eds.), Learning Engineering Toolkit (Routledge, 2022). Three chapters are open access via Taylor & Francis; all three are on the Reading List.

definition · saxberg · icicle

Saxberg's working definition of a learning engineer

The most-cited short definition of the role — from Bror Saxberg, quoted in Shelly Blake-Plock's 2018 announcement of ICICLE. Three clauses, each doing real work. Worth unpacking one at a time.

When people want one sentence that defines a learning engineer, they usually reach for Saxberg’s. Blake-Plock quotes it in his January 2018 Getting Smart piece announcing ICICLE:

“A Learning Engineer is someone who draws from evidence-based information about human development — including learning — and seeks to apply these results at scale, within contexts, to create affordable, reliable, data-rich learning environments.”

— Bror Saxberg, quoted in Blake-Plock, Learning Engineering: Merging Science and Data to Design Powerful Learning (Getting Smart, 2018)

It looks innocuous. It isn’t. Each phrase is a load-bearing commitment.

Clause by clause

“evidence-based information about human development — including learning”

Not “instructional theory,” not “pedagogical models” — evidence-based information about human development. The reference class is the learning sciences and cognitive psychology. This is the hard commitment: if what you’re doing can’t trace back to that body of evidence, it isn’t learning engineering by Saxberg’s definition.

“apply these results at scale, within contexts”

Two moves compressed. At scale rules out one-off interventions and bespoke consulting as the primary mode. Within contexts rules out context-blind best-practice prescriptions. Both together: apply the evidence to many learners while respecting that each context is different. That is genuinely hard.

“affordable, reliable, data-rich learning environments”

Three adjectives that set the bar for what counts as success. Not “effective” alone; not “engaging.” Affordable (you can actually deploy it), reliable (it works the same way twice), data-rich (you can tell whether it worked). Learning environments that fail any of those three aren’t engineered yet.

The claim worth sitting with

Saxberg is defining the role by what it produces, not by what it studies or teaches. The test isn’t whether someone has the right training. It’s whether the environments they build are affordable, reliable, and data-rich, and whether those environments move outcomes the evidence says they should.

A question for the reader

Of the three product properties — affordable, reliable, data-rich — which does your current work fall down on? That’s probably where your next six months of skill development should point.

The source

  • Primary: Shelly Blake-Plock, Learning Engineering: Merging Science and Data to Design Powerful Learning, Getting Smart (January 29, 2018). Saxberg’s quote is attributed directly in that article.
  • Related: Saxberg’s earlier Why We Need Learning Engineers (Chronicle, 2015) makes the same argument in longer form — paywalled, but findable in the Reading List.
synthesis · identity · id-vs-le

Instructional designer or learning engineer? What two practitioners actually wrote in 2019.

A short synthesis of the two most-linked 2019 pieces on the ID-vs-LE question — Goodell and Bozarth. Both agree more than the “turf war” framing suggests. With direct quotes so you can check.

The “am I an ID or an LE” conversation is often framed as a turf war. If you read the two pieces most cited in that conversation — Jim Goodell’s Are You Doing Learning Engineering—Or Instructional Design? and Jane Bozarth’s reply Learning Engineering? Instructional Design? — you find something different: rough agreement, from two different starting points.

This note summarizes what each actually said, with direct quotes so you can verify. I’m deliberately not paraphrasing or extending their arguments into the later literature; I haven’t re-read the later pieces carefully enough to ground that, and I’d rather under-claim than invent.

What Goodell (2019) actually argues

Goodell, writing in The Learning Guild, does not say LE replaces ID. He positions LE as a subset of ID that handles problems ID was never scoped for:

“So one could say that learning engineering falls within the broader category of instructional design.”

“Learning engineering is about solving problems that may sometimes fall outside the context of learning experience design or instructional design.”

Reading this directly, the move Goodell is making is conservative: if you build courses, you’re already doing ID; if the problem you’re solving extends past course design into measurement, infrastructure, or systems, that’s where the “LE” label earns its keep. He’s not asking IDs to reclassify themselves. He’s naming a specific kind of work.

What Bozarth (2019) actually argues

Bozarth, in the same publication (The Learning Guild, January 2019), is not the skeptical reply I initially remembered. She’s sympathetic — but from the ID side, framing LE as a career expansion for practicing IDs:

“Enter learning engineering. It’s basically meshing an understanding of learning theory, learning science, and learning ecosystems with expertise in engineering design.”

“No longer just responsible for building courses, the ID is now called on to help create apps, build dashboards, track informal learning, link sensor data to training data, and embed analytics in the workflow.”

“For now, the idea of learning engineering offers the ID practitioner a chance to stretch into some new areas across learning science, data science, and computer science.”

Her argument is that IDs who don’t expand their skill sets will find their work increasingly narrow. LE is not a replacement; it’s the next place IDs can grow into.

Where they agree

Both accept the premise that there is new work — data instrumentation, systems analysis, evidence-based iteration — that pure ID doesn’t cover. Both are unwilling to call it a new profession that cancels ID. Goodell says learning engineering “falls within the broader category” of instructional design. Bozarth calls LE “a chance to stretch.” Neither treats it as a turf war.

What I haven’t read carefully enough to synthesize

The later literature — Wagner 2024, Kolodner 2023, Craig et al. 2023, Baker/Boser/Snow 2022 — almost certainly extends this argument. But I’d be making things up if I summarized those positions without re-reading each piece. You can find all of them on the Reading List, and a future Field Note will revisit them when I have. Worth reading in their own words before taking my summary as definitive.

A note on method

Field Notes on this site are grounded in material you can verify. If I quote someone, I’ve read the piece; if I’m synthesizing, I label it as synthesis; if I haven’t done the reading, I say so. The idea is to surface what the field is saying, not to invent positions.