Course Correction: Implementing Effective Feedback
Effective feedback is essential for software products to thrive. This article explores strategies to "shift left" feedback, making it more assignable, frequent, and actionable.
The Ficus lyrata, also known as the fiddle leaf fig, grows up to four meters. It has large, lush green leaves and is native to the tropics. It is also one of the most difficult houseplants to care for. It needs an immensely hard-to-figure-out combination of light and moisture levels… It requires a lot of light, but no direct light, only indirect light… Too much direct sunlight can make its leaves drop… It also needs high humidity… which is hard to achieve, especially during winter months… also it is veeeery prone to overwatering and underwatering…

Imagine receiving this plant as a birthday present. Your job now is to take care of it, nurture its growth, and maintain its visual appeal. As a person not known for my green thumb (I have a hard time keeping cacti alive), I would find caring for this plant a particular challenge.
Nevertheless, what would your approach be? I guess you would examine the plant every day or two, check the soil's moisture, and observe whether the leaves show any signs of degradation. You would then act if you noticed something wrong. If the soil is too dry, water it. If individual leaves degrade, remove them. Your goal is to make it look as nice as possible. So, besides ensuring that you provide it with its basic needs, you would also use fertilizer or repot the plant.
In short: you would observe, then act on what you observe. You would rely on something called feedback.
What, Exactly, Is Feedback?
Here is a definition I quite like:
Feedback is “the transmission of evaluative or corrective information about an action, event, or process to the original or controlling source”
Source: Merriam-Webster
Providing evaluative or corrective information (dry soil, damaged leaves) about an action, event, or process (the Ficus lyrata plant) to the original or controlling source (you as its proud owner) is not only a necessity for gardening but also for developing software. Neither plants nor software are actions, events, or processes; thus, for our sake, we can refer to them as “artifacts.”
This evaluative or corrective information can be of different types:
Functional feedback is an echo of the usefulness of the system's features, often derived from user responses or tracking user behavior. It gives information about whether functionality is used, how it is used, and what functionality is most used.
Operational feedback comes from monitoring systems, e.g., average load metrics.
Quality feedback concerns quality attributes, such as performance metrics or maintainability issues.
Before deploying to production and receiving evaluative and corrective information, we cannot know for sure how the software will behave, how users will interact with it, and how well it performs in the context of critical quality attributes.
Without feedback, we cannot learn and cannot make decisions based on reality. We can only progress by guessing. Feedback gives us important input for course correction. How else would we know if users even interact with the new feature idea we released? Or how would we know if the new resilience concept we implemented enhances the reliability of our system?
Effective Feedback
If feedback is so important, what makes feedback good? The following are the most important attributes that make feedback valuable to development teams.
Assignable
For evaluative or corrective information to be helpful, it must be clear which changes in the system caused the response.
Positive Example: A team releases a small change to a calculation algorithm within a service they are responsible for. The release is small and contains only this update. After the update, calculations turn out to be wrong in some particular cases that the automated tests run during deployment did not cover. The team immediately knows which change caused this, rolls back the change, and fixes the issue.
Negative Example: A team releases all changes to production once a year in one big release. After the release, performance metrics decline drastically. The team does not know what caused the degradation.
Frequent
Every change to the system—whether commits, deployments, or configuration changes—should trigger feedback mechanisms.
Positive Example: A team responsible for an online shop implements a new idea for a product recommendation banner they want to test. They deploy it as an A/B test, where some users see the new banner and some don't. For a week, a dashboard updated every 5 seconds shows the conversion rates of users who see the new banner versus users who don't. After a day, they detect markedly higher conversion rates for users who see the recommendation banner. They release the new feature to everyone.
Negative example: A developer pushes code to the main branch and releases a new version of the system. Afterward, bugs occur that simple unit tests would have caught. However, they were not discovered since neither the developer nor an automated pipeline ran the tests.
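The A/B split in the positive example above can be implemented with deterministic, stateless bucketing. Here is a minimal Python sketch; the hashing scheme, function names, and 50/50 default split are illustrative assumptions, not taken from the example:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into 'treatment' or 'control'.

    Hashing the user id together with the experiment name keeps each
    user's assignment stable across requests without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversions per visitor; the metric compared between the two groups."""
    return conversions / visitors if visitors else 0.0
```

Because assignment depends only on the user id and the experiment name, the same user always sees the same banner, which keeps the measured difference in conversion rates attributable to the variant.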
Timely
Feedback should reach development teams and every interested party without delay.
Positive example: Two teams deploy their services at the same time. Both services communicate through a recently introduced API. In production, latency on this API exceeds normal levels and breaks both services. The teams are notified immediately about the outage and roll back their changes.
Negative example: A team's changes led to a reliability downgrade that went unnoticed for months, frustrating users.
Automated
Feedback mechanisms should not involve manual steps wherever feasible. Automation is also a prerequisite for the other attributes: only automated feedback can be frequent and timely.
Positive example: A developer pushes new code to the main branch, resulting in an error within the application's main workflow. Automated end-to-end tests detect this issue and notify the developer.
Negative example: A company releases a new, security-sensitive application. Before release, an internal security team manually checks for issues that could be detected automatically, like unused open ports and dependencies. After several weeks of rigorous configuration checking, they write a report and send it to all development teams as a PDF.
Trigger Action and Improvement
Effective feedback must be clear and actionable. It should give development teams specific action items and ideas for system improvements.
Positive example: Several teams want to replace synchronous communication with asynchronous messaging. They track the percentage of channels already migrated to the messaging infrastructure and use gamification to show which teams have migrated most of their communication channels. This motivates other teams to catch up.
Negative example: An operations team provides development teams with a dashboard showing central performance metrics. By looking at the dashboard, developers can see that performance has gradually degraded by over 25% over the course of a year. However, they take no further action.
Gathering Feedback by Shifting Left
We should gather feedback earlier in the development process to make it more assignable, frequent, and timely.
Here are some approaches to “shifting left” feedback:
Automated Tooling
Automated tooling is most useful during and immediately after developing code. Use linters (like ESLint) and static code analysis tools (like SonarQube) for real-time feedback. You can also set up pre-commit hooks to reject commits that introduce security issues or violate coding guidelines.
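A pre-commit hook is simply an executable placed at .git/hooks/pre-commit; a non-zero exit code rejects the commit. Here is a minimal sketch in Python, using a made-up rule (rejecting staged lines that look like hard-coded secrets) as the illustrative guideline violation:

```python
#!/usr/bin/env python3
"""Illustrative pre-commit hook: reject staged lines that look like
hard-coded secrets. Place at .git/hooks/pre-commit and make it executable."""
import re
import subprocess

# Hypothetical coding guideline: no literal passwords or keys in committed code.
SECRET_PATTERN = re.compile(r"(password|api[_-]?key|secret)\s*=\s*['\"]", re.IGNORECASE)

def staged_diff() -> str:
    """Return the diff of what is actually being committed."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def find_violations(diff: str) -> list[str]:
    """Added lines in a unified diff start with '+' (but not '+++')."""
    return [
        line for line in diff.splitlines()
        if line.startswith("+")
        and not line.startswith("+++")
        and SECRET_PATTERN.search(line)
    ]

# In the actual hook script, the commit is then rejected with:
#     sys.exit(1 if find_violations(staged_diff()) else 0)
```

Real projects typically manage such hooks with a framework like pre-commit rather than hand-written scripts, but the mechanism is the same: the check runs before every commit, so the feedback arrives seconds after the mistake instead of days later in review.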
Pre-Development Testing
You can do some testing before development even starts. During ideation, for example, you can test new features and workflows with users by providing them with clickable mockups. You can also run surveys and ask for improvements and feature ideas.
Low-Risk Deployment
Lowering deployment risk helps with feedback. You can deploy to a (semi-)realistic environment and gather feedback without disturbing (all of) your users. Deploying to a staging environment after every integration and running load tests to verify reliability is one example. Using canary releases to expose a new feature to just a handful of users at first is another.
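A canary rollout involves two decisions: which users get the new version, and whether the canary is healthy enough to keep. A Python sketch under assumed thresholds; the 5% traffic slice and the 2x error-rate tolerance are illustrative choices, not taken from the text:

```python
import hashlib

def routed_to_canary(user_id: str, canary_share: float = 0.05) -> bool:
    """Send a stable ~5% slice of users to the canary deployment.

    Hash-based routing keeps each user on the same version across
    requests, so observed errors stay attributable to one deployment.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF < canary_share

def should_roll_back(canary_errors: int, canary_requests: int,
                     baseline_error_rate: float, tolerance: float = 2.0) -> bool:
    """Roll back if the canary's error rate exceeds the baseline by tolerance x."""
    if canary_requests == 0:
        return False  # not enough signal yet
    return canary_errors / canary_requests > baseline_error_rate * tolerance
```

In practice the routing usually lives in a load balancer or service mesh and the rollback decision is automated by the deployment pipeline, but the feedback loop is the same: a small, attributable slice of real traffic tells you whether the release is safe before everyone sees it.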
Small Work Items
Work items (like features and bugs) progress through the development process. As I wrote in “What is Flow, Actually?”, smaller work items make feedback more assignable:
Shorter lead and process time obviously means that a development team finishes and releases work items faster. Value streams containing smaller work items usually have shorter lead times because smaller items are finished faster, bind fewer resources, and are easier to test and deploy. Being able to release new ideas and bugfixes faster not only leads to a better product and happier customers but also makes experimentation easier and early feedback possible. A continuous flow of value also makes feedback more assignable. Large releases usually mean substantial disruptions, for your customers as well as for the deployment environment, and it becomes hard to pinpoint what exactly caused observable changes in system and user behavior, such as drops in performance or customer engagement metrics. If releases are smaller, you can pinpoint sudden changes in metrics and user behavior more precisely.
Conclusion
Feedback is an essential driver of course correction during the development of complex software systems. Make sure that feedback is assignable, frequent, timely, and automated, and that it helps you make informed decisions about the next actions and improvements. Applying techniques to “shift left” feedback is a major pillar in achieving these goals.