Using mini-specs to drive better quality in Agile projects

Agile projects are prone to issues with requirements and design clarity, especially when distributed teams are involved. One of the biggest quality levers, in my opinion, is design – both functional and technical. Putting upfront thought into the requirement, into how the functionality fits into the system, and into all its dependencies is one of the best ways to improve quality. I’ve found the concept of a “mini-spec” to be the right balance between zero documentation and over-documentation – both of which can kill an Agile project – especially for projects that already have a base architecture defined and where the work mostly involves implementing user stories.

In the mini-spec approach, user stories are broken into chunks that fit within a sprint and are defined in enough detail to allow upfront thought and debate as well as the clarity required for implementation and testing. A mini-spec captures the points below explicitly. A simple mini-spec review with the technical and domain SMEs, including those from QA, provides an opportunity to catch many potential issues upfront. Mini-specs offer the benefit of forcing developers to think through areas that are critical to the ultimate success of their effort. Completed mini-specs support both developers and QA engineers through implementation, test design, and testing, and facilitate an efficient development workflow at the user-story level.

Following are some of the areas that a mini-spec should cover:

  • Business need. In this section, as a user of the mini-spec, answer the following questions:
    • Why is the feature required?
    • What problem does it solve?
    • What business objective(s) is it trying to achieve?
  • Feature overview. In this section, clearly articulate the feature: a brief overview that brings out its key aspects.
  • Validation/key success criteria. How will we know whether the feature meets the need once it is rolled out into production? Identify measurable criteria that prove the feature achieves its intended purpose after it is implemented and deployed. This derives from lean-startup principles and forces the developer to think through the end objective.
  • Operational requirements (if applicable). Elaborate the operational requirements, such as configuration options, admin reports, and alerting and monitoring needs. These requirements may not be obvious when thinking about end-user functionality, but operational teams need them to keep the feature running smoothly in production. Having this placeholder helps developers put themselves in the operational teams’ shoes upfront rather than downstream, when it is too late in the development cycle. (A small illustrative sketch appears after this list.)
  • Approach and design. Clearly list the functional components and the related work involved: for example, what happens in the UI (“add a new screen or modify an existing screen”), the business layer (“add these new APIs”), and the data layer (“add these new data tables”). Wherever the user story raises new cross-cutting concerns that are not part of the existing base architecture, the design should be elaborated enough to give a comprehensive understanding. Better still, develop skeleton code or architecture POCs for these designs (a skeleton sketch follows this list).
  • UI/UX (if applicable). The level of detail depends on the team. If everyone understands the base UI standards and expectations, keep this section simple and descriptive; otherwise, invest in wireframes or mockups.
  • Dependencies. Failures often hinge on dependencies, so it is important to understand them. Gain in-depth knowledge of the components, requirements, and so on that this feature affects, especially upstream and downstream systems. Additionally, track the requirements or features pushed out to future sprints that will depend on this feature.
  • Assumptions. Make explicit the assumptions critical to success: environmental dependencies, the sequence in which work must happen, and so on.
  • Test cases. Lastly, understand the test cases and their expected outcomes (a test sketch follows this list). Plan for:
    • Positive test cases. What should happen when the expected conditions/input are provided?
    • Negative test cases. What should happen when unexpected conditions/input are provided?

In the design-factory model, user stories are elaborated into ‘mini-specs’ by an independent team whose job is to interact with clients and flesh out these features; this team also acts as the proxy between clients and the development team. This creates a backlog of “implementable” user stories that can be pulled into implementation sprints. The advantage of this approach is that it eliminates a significant amount of time developers would otherwise waste waiting for feedback.

Post-implementation reviews of user stories can also be aligned to the mini-specs: the developer demonstrates the user story, showing the functionality as well as the ‘working’ unit tests.

Developers should also be involved in evaluating the success of the implemented feature in production in terms of the validation criteria. This increases their sensitivity towards the business outcomes of their efforts and helps build a lean-startup mentality across the entire team.

Chandika Mendis

Executive Vice President and Global Head of Engineering, Virtusa. The Software Engineering function at Virtusa consists of Global Technology and Architecture as well as the Independent Software Quality practice. In this role, Chandika drives consistent standards and best practices as well as innovation, R&D, and consulting/solutions in technology and testing. He has over 15 years of experience in IT. Prior to his current role, he played many technology leadership roles both within and outside Virtusa. He was the Chief Software Architect for Global Software Labs, playing an instrumental role in setting up and leading its San Francisco operation. Before that, he worked for P&O Nedlloyd in London as part of their technical strategy team, and for a German ISV startup as General Management and Software Development Lead. Chandika holds a First Class honours degree from the University of Moratuwa, Sri Lanka, and an MSc in Parallel Computers and Computation from the University of Warwick, UK.