Title: Solving the integration testing puzzle with contract testing
Overview: Many modern software systems are built out of loosely coupled, independent components (e.g., microservices) that communicate with one another through standardized protocols and message formats (HTTP, REST, …).
While this is a blessing for software development, as it enables quick deployments and parallel development, among other things, it does pose some additional challenges in terms of integration and E2E testing. At all times, you want to be informed about whether all of the components and services that make up your application can still communicate with one another. Setting up large-scale test environments for all of these components, either using ‘the real thing’ or stubs, can quickly become a Herculean task.
Contract testing is an approach that is meant to address these challenges of large-scale integration and E2E testing in distributed environments. It is not a new technique; in fact, it has been around for several years. Unfortunately, there is a lack of good introductions, examples and use cases that clearly show the added value of contract testing and how it can help speed up integration testing.
This workshop is meant to address that. Using a fictional use case based on real-world systems, participants will go through the following steps:
The challenges of integration testing in distributed environments
If you’re working on a monolith and all components of a system are developed in the same team or department, integration testing is relatively easy: once the monolith is deployed, you can start testing. However, if a system consists of small, independent components, teams quickly start to struggle. Are all components available for testing? Is development ready? Is all required test data in place? How about versioning? The higher the level of distribution, the more difficult this gets. In this step, I’ll introduce the players in the arena: two API consumers from different teams that would like to communicate with a single API provider, developed in a different department.
Introducing contract testing
As a potential solution to the integration testing challenges, I’ll introduce the concept of contract testing and how it differs from (and augments) traditional functional and integration testing.
In this step, we’ll discuss the various approaches to contract testing, and learn which approach is a good fit for what context. I’ll introduce Pact, and together we will go through a complete contract testing cycle. This helps participants grasp what contract testing really is about and how it addresses the issues we talked about earlier.
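The cycle described above can be sketched in miniature. The following is an illustrative model only, not Pact’s actual API: the consumer records its expectation of the provider as a contract, tests itself against a stub generated from that contract, and the provider later verifies the same contract against its real implementation. All names and data structures here are invented for the example.

```python
# A minimal, illustrative model of consumer-driven contract testing.
# This is NOT the Pact API; names and structures are invented for the sketch.

# 1. The consumer records its expectation of the provider as a contract.
contract = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response": {"status": 200, "body": {"id": 42, "status": "shipped"}},
}

def stub_provider(request):
    """A stub generated from the contract, used in the consumer's own tests."""
    if (request["method"], request["path"]) == (
        contract["request"]["method"],
        contract["request"]["path"],
    ):
        return contract["response"]
    return {"status": 404, "body": None}

def verify_provider(contract, real_provider):
    """2. The provider replays the contract against its real implementation."""
    actual = real_provider(contract["request"])
    expected = contract["response"]
    return actual["status"] == expected["status"] and all(
        actual["body"].get(k) == v for k, v in expected["body"].items()
    )

# A provider implementation that happens to satisfy the contract
# (extra fields, like "items", are fine -- the consumer doesn't rely on them).
def order_api(request):
    return {"status": 200, "body": {"id": 42, "status": "shipped", "items": []}}

assert stub_provider(contract["request"])["status"] == 200  # consumer side
assert verify_provider(contract, order_api)                 # provider side
```

The point of the sketch is the decoupling: consumer and provider never have to be deployed and tested together, they only have to agree on the contract.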
Changes over time
As systems evolve, new changes will be implemented both on the consumer and on the provider side. But what effect does that have on the ability for consumers and providers to work together?
In this step, I will introduce a couple of changes on the consumer side that lead to updated contracts. The participants will see what effect this has on the contracts and the contract testing process (i.e., how this breaks the testing pipeline). In this way, they will experience first-hand the types of potential integration issues that contract testing is meant to detect and signal.
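The kind of break this step demonstrates fits in a few lines: suppose the consumer starts expecting a field the provider does not yet return. A hypothetical sketch (field names and values invented):

```python
# Hypothetical example of a consumer-side change breaking provider verification.
provider_response = {"id": 42, "status": "shipped"}

# Old consumer expectation: satisfied by the provider.
expected_v1 = {"id": 42, "status": "shipped"}
# New consumer expectation: the consumer now also wants a delivery estimate.
expected_v2 = {"id": 42, "status": "shipped", "eta": "2024-06-01"}

def verify(expected, actual):
    """Provider verification: every field the consumer expects must be present."""
    return all(actual.get(k) == v for k, v in expected.items())

assert verify(expected_v1, provider_response) is True
assert verify(expected_v2, provider_response) is False  # the pipeline turns red
```

This is exactly the signal contract testing is meant to give: the mismatch surfaces in the pipeline, before anything reaches a shared test environment.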
Automating the process
So far, many of our steps have still been semi-manual. The Pact ecosystem, however, offers a variety of tools that help add contract testing to an automated delivery pipeline.
In this step, we’ll look at the Pact Broker, and how we can automatically publish and distribute contracts through the broker. We’ll also see how to publish verification results, and how to use those to determine whether or not it is safe to deploy the latest version of a service to production.
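As a mental model of what the broker adds (this is a toy model, not the Pact Broker’s real API or schema), the broker records which contract versions each provider has verified, and a can-i-deploy check is essentially a lookup against that record:

```python
# A toy model of the record a Pact Broker keeps -- not its real API or schema.
verification_results = {
    # (consumer, consumer_version, provider): did the provider verify it?
    ("web-ui", "1.0.0", "order-api"): True,
    ("mobile-app", "2.3.1", "order-api"): False,
}

def can_i_deploy(consumer, version, provider):
    """Safe to deploy only if the provider has verified this contract version."""
    return verification_results.get((consumer, version, provider), False)

assert can_i_deploy("web-ui", "1.0.0", "order-api") is True
assert can_i_deploy("mobile-app", "2.3.1", "order-api") is False
```

In the workshop we use the real broker and its tooling for this; the sketch only shows why publishing verification results makes the deploy decision mechanical.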
A new kid on the block
As participants will have seen by now, contract testing can be a great addition to an integration testing strategy. However, introducing contract testing means adding new tools and new ways of working, which is a pretty big first step for a lot of teams. The concept of bi-directional contracts (https://pactflow.io/blog/bi-directional-contracts/) is meant to address that challenge.
In this step, I’ll introduce a second API provider and show how they can be added to the existing contract testing setup using their existing contract (API) specifications, without the need for implementing full-blown ‘traditional’ contract testing.
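The idea behind bi-directional contracts can be sketched as a static comparison: instead of replaying requests against a running provider, the consumer contract is checked against the provider’s published API specification. The spec below is a simplified, OpenAPI-like stand-in with invented paths; real tooling compares actual OpenAPI documents.

```python
# Simplified sketch of bi-directional contract comparison (invented example).
# The provider publishes its API specification (an OpenAPI-like stand-in):
provider_spec = {
    ("GET", "/orders/{id}"): {"responses": {200, 404}},
    ("POST", "/orders"): {"responses": {201}},
}

# The consumer's contract lists only the interactions it relies on:
consumer_contract = [
    {"method": "GET", "path": "/orders/{id}", "status": 200},
]

def is_compatible(contract, spec):
    """Every interaction the consumer needs must appear in the provider's spec."""
    return all(
        (i["method"], i["path"]) in spec
        and i["status"] in spec[(i["method"], i["path"])]["responses"]
        for i in contract
    )

assert is_compatible(consumer_contract, provider_spec)
```

Because the comparison only needs the spec the provider team already maintains, that team does not have to adopt full contract testing tooling to participate.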
At the end of this workshop, participants will have a good overview of the concept of contract testing, hands-on experience with it, and a clear picture of what it takes to introduce contract testing into their software development lifecycle.
All code from the workshop will be available through GitHub, giving the participants an example they can replay from the comfort of their own home or office.
Bio: Hey, my name is Bas Dijkstra, and I am an independent test automation consultant and trainer. I have been active in the test automation field for some 17 years now, and have worked on software testing and automation solutions across a wide range of programming languages, frameworks and technology stacks. I’ve delivered test automation training to dozens of companies and hundreds of conference attendees in the Netherlands as well as abroad, to excellent reviews. You can find a complete overview of my professional life on my LinkedIn profile. If you want to get in touch, please use the contact form on this site, or send me an email at email@example.com. I’m also the developer of RestAssured.Net, a library that is meant to make writing tests for HTTP APIs in C# a breeze. I live in Amersfoort, The Netherlands, together with my wife and two sons. When I am not at work, I like to go outside for a long bike ride, or to sit down and read a good book.
Janet Gregory and Lisa Crispin
Title: Improve your team’s quality practices with QPAM: Quality Practices Assessment Model
Overview: Most software teams want to deliver a high-quality product. And most face significant challenges that get in the way of this goal. They may sense they are doing some things well but are unsure how to quantify that. They often struggle to clearly articulate what exactly is helping or hindering them. They can’t fix everything at once, and they don’t know the best place to start improving.
There is an objective way to gauge a team’s or organization’s competencies in different quality aspects. With this information, teams can identify where to focus their quality improvement initiatives. In this workshop, participants will learn how to use the Quality Practices Assessment Model (QPAM) developed by Janet Gregory and Selena Delesie. This model provides a practical means to reflect, assess, and adapt practices that impact quality within a team and an organization.
In this workshop, you’ll practice using the model by applying it to an organization described in a case study. You will leave the workshop with insight and experience to assess, model, and recommend specific improvements for an agile team.
- Understand which quality practices move your team towards its quality goals
- Identify your team’s or organization’s current competency in each of ten key quality aspects
- Identify the focus area for the next improvement efforts
- Learn specific ways each quality practice can be improved
- Practice assessing, modelling, and recommending specific improvements for an agile team
Bio: Janet Gregory is a testing and process consultant with DragonFire Inc. She specializes in showing agile teams how testing activities are necessary to develop good quality products. She works with teams to transition to agile development and teaches agile testing courses worldwide. She contributes articles to publications and enjoys sharing her experiences at conferences and user group meetings around the world. For more about Janet’s work and her blog, visit https://janetgregory.ca or https://agiletester.ca. You can also follow her on Twitter (@janetgregoryca) or LinkedIn.
She is the co-author, with Selena Delesie, of Assessing Agile Quality Practices with QPAM, and, with Lisa Crispin, of Agile Testing Condensed: A Brief Introduction (LeanPub, 2019), More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley, 2014), and Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009). Together they also created the LiveLessons Agile Testing Essentials video course and the courses Holistic Testing: Strategies for agile teams and Holistic Testing for Continuous Delivery.
Together with Lisa Crispin, she has founded the Agile Testing Fellowship to grow a community of practitioners who care about quality. Check out https://agiletestingfellow.com to find out more about courses and membership.
Lisa Crispin is the co-author, with Janet Gregory, of Holistic Testing: Weave Quality Into Your Product; Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning Journeys for the Whole Team; Agile Testing: A Practical Guide for Testers and Agile Teams; and the LiveLessons “Agile Testing Essentials” video course. She and Janet co-founded the Agile Testing Fellowship, Inc., which offers the “Holistic Testing: Strategies for agile teams” and “Holistic Testing for Continuous Delivery” live training courses, both remotely and in person. Lisa was voted by her peers as the Most Influential Agile Testing Professional Person at Agile Testing Days in 2012, and she is happily available for training and consulting. Please visit www.lisacrispin.com, www.agiletestingfellow.com, and www.agiletester.ca for more. Contact Lisa on Twitter (@lisacrispin) or LinkedIn: https://www.linkedin.com/in/lisa-crispin-88420a/.
Jenna Charlton and Jenny Bramble
Title: Running Risk Assessments Together!
Overview: Risk is a feature of every project. We work to minimize it when we’re aware of it and mitigate it regardless. However, we have very few avenues to formally discuss risk with our teammates, which is why we need risk assessment sessions. Jenny and Jenna will lead the group through the five stages of a well-run risk assessment session, and you will be able to break off into smaller groups to run your mini risk assessment on an example application! You will:
- Pick your features: how do we select the features for a risk assessment session? Jenny and Jenna will suggest several methods, including grouping by pages and user stories.
- Pick a rating system: how do we talk objectively about the risks we’re uncovering? We select a rating system and then give it value.
- Level-set your rating system: what does our system mean? Jenny and Jenna will lead the group in level-setting your rating system using input from business and product (gathered from real users of the system we’ll be using)!
- Assign values: this is the meat of the session.
- Re-evaluate: make sure that everything you’ve said still feels correct.
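One common rating system, shown here only as an assumption (the workshop may level-set a different one), scores each feature on likelihood and impact and ranks features by their product:

```python
# One possible rating system: risk = likelihood x impact, each on a 1-5 scale.
# The feature names and scores are invented for illustration.
features = {
    "checkout": {"likelihood": 4, "impact": 5},
    "search":   {"likelihood": 3, "impact": 3},
    "profile":  {"likelihood": 2, "impact": 2},
}

def risk_score(feature):
    """A simple composite rating: how likely a failure is, times how much it hurts."""
    return feature["likelihood"] * feature["impact"]

# Rank features so the riskiest get test attention first.
ranked = sorted(features, key=lambda name: risk_score(features[name]), reverse=True)
assert ranked[0] == "checkout"
```

Whatever scale the group picks, the level-setting step above is what gives the numbers meaning; the arithmetic itself is the easy part.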
Expect hearty discussions on what features are and how important they are to the system overall. You’ll come away from the session with a deeper understanding of risk, its place in our sprints, and hands-on experience with risk assessment sessions.
Bio: Jenna is a software tester and director of product with over a decade of experience. They’ve spoken at a number of dev and test conferences and are passionate about risk-based testing, building community within agile teams, developing the next generation of testers, and accessibility (a11y). When not testing, Jenna loves going to punk rock shows and live pro wrestling events with their husband Bob, traveling, and cats, their favorites being the two that share their home, Maka and Excalipurr.
Jenny ended up in a quality assurance career after coming up through support and DevOps, cutting her teeth on that interesting role that acts as the ‘translator’ between customer requests from support and the development team. Her love of support and the human side of problems lets her find a sweet spot between empathy for the user and empathy for her team. She’s done testing, support, or human interfacing for most of her career. She finds herself happiest when she’s making an impact on other people, whether it’s helping find issues in applications, leading scrum, speaking at events, or just grabbing coffee and chatting.