Okay, okay. Here’s a non-exhaustive list of examples and some questions they might try to answer.
Static analysis (type checkers and linting tools): “Does the code I wrote have valid syntax? Are my types and style consistent?”
Code build/compilation: “Does my code compile?”
Unit test: “Does that small unit of code do what I think it does?”
Integration test (whatever that means): “Does that last piece of code work as expected when used in its environment?”
Application-level tests: “Does that last feature behave as expected, from a user standpoint?”
Pair programming: “Did we come up with the best possible design?”
Code review: “Are we writing readable, maintainable code? Does it match our standards?”
Manual testing: “Does the software conform to the requirements we agreed upon?”
CI pipeline: “Can we deploy that last feature we wrote? Is our build ready to do so?”
Monitoring: “Did something go wrong in production? Are we able to know about it before our users do?”
Daily standup: “Are we set to work on the right thing for the next few hours?”
Retrospective: “Did we work well, at a sustainable pace, over the last few days?”
Stakeholder demo: “Did we deliver a useful, valuable software increment? Are we headed in the right direction?”
Iteration planning: “Are we planning on working on feasible, valuable features for the next few days?”
Usage analytics: “Is the new feature seeing the usage and performance we expected?”
User research: “Are we planning on building the right feature?”
Again, this is a non-exhaustive list. Some even define more fine-grained loops, such as the distinct steps of writing a failing test → making it pass.
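That failing-test → passing-test loop is the tightest one of all. Here’s a minimal sketch using Python’s standard `unittest` module; the `slugify` helper is a hypothetical unit under test, invented purely for illustration:

```python
import unittest

# Hypothetical unit under test (illustrative only).
def slugify(title: str) -> str:
    """Lowercase a title and replace spaces with hyphens."""
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    # Step 1 (red): write this test first, run it, and watch it fail
    #   because slugify() doesn't exist yet.
    # Step 2 (green): write just enough code to make it pass.
    # Each run of the suite is one turn of the feedback loop,
    # and it takes seconds, not days.
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

The shorter that red/green cycle stays, the more often you’ll actually run it.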
Can you come up with additional feedback loops?
As I see it, we can split items in the list into two groups:
- Are we building the right thing?
- Are we building the thing right?
Here’s the tradeoff
To create as many feedback loops as possible, you need to keep things simple. Simplicity means maximizing the amount of work not done. I’m afraid it is a requirement, not a suggestion.
Build simple things (and simple processes and team dynamics) so that they are simple to test, measure, and iterate on. Otherwise your feedback loops become slow and painful, and you’ll skip them. Oh, by the way: if something hurts, do it more often.
Don’t get me wrong: simplicity is hard. You can always make things simpler, right? It is the only way, though, to provide useful feedback loops. And we want as many of them as possible.
Wait, wait a minute. Agile, DevOps, Software Crafting, Extreme Programming…
…what if the common ground between them is about creating feedback loops, and keeping them as short as possible?
What a surprise, huh?