CI/CD: Implement Feedback Loops
Together, CI and CD implement the changes necessary to make the DevOps philosophy of delivering value quickly a reality. Read our overview in The Key to Realizing Agile and DevOps is CI/CD.
Is Your Testing Undermining the Feedback Loop?
Many organizations successfully meet agile development guidelines on standups and retrospectives and incorporate project feedback into the backlog. But shortcomings in testing can undermine the feedback loop. Some common challenges that constrain an organization’s ability to deliver continuously include:
- Testing is predominantly manual, leading to extended release timelines
- The feedback loop is missing or slow, resulting in “late-in-the-lifecycle” fixes and production patches
- The production system is unreliable and encounters repeated performance and stability issues, requiring significant rework
Implement Feedback Loops
In a software delivery pipeline, feedback means putting triggering systems (monitors, tests, and meetings) in place at every stage of the lifecycle, along with responding systems (processes and rules, fixes, and solutions), so that software can be deployed all the way through production safely, quickly, and sustainably. Importantly, feedback also involves understanding the consequences of what happened and taking action to improve results the next time around.
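The pairing of a triggering system with a responding system can be sketched in a few lines of Python. Everything here is illustrative: the `FeedbackLoop` class, the error-rate threshold, and the "block-promotion" response are assumptions for the sketch, not a prescribed implementation.

```python
# A minimal sketch of a triggering/responding pair in a delivery pipeline.
# The names and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackLoop:
    """Couples a triggering system (a check) with a responding system (an action)."""
    name: str
    trigger: Callable[[], bool]   # returns True when attention is needed
    respond: Callable[[], str]    # the corrective action to take

    def run(self) -> str:
        return self.respond() if self.trigger() else "ok"

# Example: a stage-level monitor that trips when the error rate exceeds a budget.
error_rate = 0.07  # pretend this value came from a staging-environment monitor

loop = FeedbackLoop(
    name="staging-error-rate",
    trigger=lambda: error_rate > 0.05,
    respond=lambda: "block-promotion",  # keep this build out of production
)

print(loop.run())  # prints "block-promotion" because 0.07 > 0.05
```

The point of the shape is that the check and the response are defined together: a trigger that fires without a wired-in response is a monitor, not a feedback loop.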
Four activities are instrumental in creating and automating feedback loops:
- Understand Present State of Triggering Systems. Always ask “Are my monitors, tests, and communication touch points helping continuous delivery?” Specifically, answer the following questions:
What are the present state and use of monitors, tests, and communication touch points? At many companies, the main monitors of systems are in production with some basic log usage during defect triage. Nothing of substance is used earlier in the lifecycle. This means there is no easy way to quickly identify issues with environments or the basic quality of software deployed to them.
What kind of testing is performed? If testing is mostly manual, early feedback on the quality of an application during the development lifecycle is likely missing.
How is performance testing conducted? Even if performance testing is conducted before every release, the findings are often not analyzed. If you’re not finding anything of substance or are experiencing false positives, issues can crop up in production.
- Outline How Testing Will Contribute to Fast and Consistent Feedback. Because one of the primary goals of a CI/CD initiative is to improve quality across the lifecycle—with an emphasis on offering fast feedback throughout the lifecycle—eliminating testing shortcomings is critical. Effective ways to get vital feedback into the feedback loop include:
- Rebuilding and extending the automated functional testing from development through staging.
- Redefining the performance testing function.
- Integrating test automation with deployment automation.
- Embedding QA engineers in scrum teams (shift left testing).
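As a sketch of the third item above, integrating test automation with deployment automation, here is a minimal example in which every deployment is immediately followed by a smoke-test gate. `deploy_to` and the `SMOKE_TESTS` checks are hypothetical stand-ins for a real deployment tool and test suite.

```python
# Hypothetical sketch: wiring automated tests into the deployment step, so a
# deploy to any environment immediately produces pass/fail feedback.

def deploy_to(env: str) -> None:
    print(f"deploying build to {env}")  # placeholder for the real deploy tooling

SMOKE_TESTS = [
    ("homepage responds", lambda env: True),  # stand-in checks; a real suite
    ("login flow works", lambda env: True),   # would exercise the deployed system
]

def deploy_with_feedback(env: str) -> bool:
    """Deploy, then run the smoke suite; any failure marks the deploy bad."""
    deploy_to(env)
    failures = [name for name, check in SMOKE_TESTS if not check(env)]
    for name in failures:
        print(f"FAILED: {name} in {env}")
    return not failures

ok = deploy_with_feedback("staging")
print("promote" if ok else "halt")
```

Because the gate runs on every environment, not just production, a broken build is caught at the earliest stage it reaches rather than during late-lifecycle triage.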
- Define and Implement Appropriate Architecture for Automated Functional Testing. Ensuring that functional test automation can be applied across the SDLC requires choosing the right architecture from two main options: siloed or shared. Siloed architectures are typically de-coupled, can scale faster, and are easier to manage, but they can also result in redundancy and technical debt if not governed or managed effectively. A shared, or centralized, architecture is tightly coupled to work across delivery stages and enables collaboration across the enterprise, but it can be harder to manage because of the potential for increased complexity and bloat in the core platform.
Implementing a Centralized Architecture for Functional Test Automation
SQA helped a major healthcare company implement this solution in about six months, and the existing regression suite was rewritten shortly thereafter. Then, the team integrated the suites with the build and deployment pipeline. Every build was then automatically tested, and so was every deployment into any development, test, or staging environment. As a result, test coverage increased 400 percent in about a year. Additionally, the solution provided a gradual introduction of automation across the application suite and scrum teams, enabling continuous integration and sprint acceptance testing.
- Define the Approach and Implement Performance Testing. Performance testing should be goal-oriented: define and implement scenarios that best mimic production so performance testing emulates the real world. To eliminate shortcomings in performance testing, integrate performance testing tools into the software development lifecycle. Additionally, identify non-functional issues during performance testing to enhance the efficacy of this feedback loop. This can dramatically reduce the number of production patches related to system stability.
Eliminating Shortcomings in Performance Testing
SQA helped a major healthcare company identify a new suite of performance testing tools and integrate them into the software development lifecycle. In about six months, the tools were paying big dividends: false readings from performance testing had declined significantly. Importantly, the team also started identifying non-functional issues during performance testing, improving the value of this feedback loop. The number of patches in production related to system stability dropped dramatically. From an organizational perspective, the development and operations teams began trusting the performance testing team—so much so that they sought its input and invited it to contribute to planning future releases of major application updates.
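Goal-oriented performance testing, as described above, can be sketched simply: each scenario carries an explicit target, so a run produces pass/fail feedback rather than raw numbers that go unanalyzed. The scenarios, goals, and latency samples below are invented for illustration; in practice the samples would come from a load-testing tool driving production-like traffic.

```python
# Sketch of goal-oriented performance checking. All scenario names, goals,
# and latency samples are hypothetical.

def p95(samples):
    """Rough 95th-percentile of a latency sample set (nearest-rank on sorted data)."""
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

# Production-like scenarios, each with an explicit latency goal in milliseconds.
scenarios = {
    "search":   {"goal_ms": 300, "samples_ms": [120, 150, 180, 210, 250, 280]},
    "checkout": {"goal_ms": 500, "samples_ms": [400, 450, 520, 610, 700, 820]},
}

# Compare each scenario's p95 against its goal: the output is actionable
# pass/fail feedback per scenario, not an unread report.
results = {name: p95(s["samples_ms"]) <= s["goal_ms"] for name, s in scenarios.items()}
print(results)  # search meets its goal; checkout does not
```

Tying each scenario to a stated goal is what turns a performance run into a feedback loop: a miss names the scenario to fix before release rather than after a production patch.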
In Summary, Use Feedback (Left to Right) Across the Lifecycle
There’s no question that automation—both functional and performance—is critical to building fast feedback loops over time. But such loops have little impact unless they’re augmented by practices that encourage the use of feedback throughout the delivery lifecycle. Some of the most important practices include:
- Defining scenarios and features during development using techniques like Behavior Driven Development (BDD).
- Embedding QA engineers in scrum teams to implement tests at the same time features are developed.
- Making product owners or representatives of the customer part of reviews and testing in every sprint.
- Testing systems and application integration scenarios incrementally after every deployment to test environments.
- Automatically validating every deployment to both non-production and production environments.
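The first practice above, defining scenarios with BDD, can be sketched without any framework: the scenario is written in Given/When/Then terms as the feature is developed, so the test doubles as the feature definition. The `Cart` class and its behavior here are hypothetical examples, not from any particular system.

```python
# A minimal BDD-flavored sketch in plain Python (no framework assumed).
# Cart and its pricing behavior are invented for illustration.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_adding_an_item_updates_the_total():
    # Given an empty cart
    cart = Cart()
    # When the shopper adds a book priced at 12
    cart.add("book", 12)
    # Then the cart total is 12
    assert cart.total() == 12

test_adding_an_item_updates_the_total()
print("scenario passed")
```

In a full BDD toolchain the Given/When/Then lines would live in a feature file shared with product owners, which also supports the third practice: customer representatives reviewing behavior every sprint.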
Implementing some or all of these practices can create faster feedback loops, drastically reducing overall development-to-production timeframes.