January 12, 2026

REST Assured with BDD: A Practical Playbook for Production-Ready API Automation

Summary: A hands-on, runnable playbook that teaches REST Assured in BDD style and demonstrates end-to-end API test automation using a real public API.

Introduction

APIs are the backbone of modern software systems. Whether you are testing microservices, mobile apps, or cloud platforms, validating APIs accurately and consistently is a core automation skill.

Recently, I delivered a hands-on session titled "REST Assured with BDD: Fundamentals" and published a runnable playbook on GitHub so learners can reproduce the demo, run the tests locally, and extend the code for real-world projects. First, view the REST Assured tutorial below to see the playbook in action. Then, read on.

This post summarizes what the session covers, why the BDD approach matters, and how you can run the demo on your own machine in minutes.

What the Session Covers

Session 1 focuses on the fundamentals required to write clean, production-grade API tests using REST Assured.

  • BDD-style test structure using given(), when(), and then() so tests read like requirements
  • Core REST Assured components such as RequestSpecification, ResponseSpecification, logging, and filters
  • Status code, header, and JSONPath assertions
  • Project setup using Maven, JUnit 5, and test resources
  • Three demo test cases covering create, read, and negative validation flows

During the live demo, learners saw the full flow in action. Maven builds the project, REST Assured sends a POST request to the PetStore API, the created resource is fetched and verified, and a negative GET returns a 404 as expected.
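
If you want to see the shape of such a test before cloning the repository, here is a minimal, self-contained sketch in the same BDD style. The Swagger PetStore base URI and the pet payload are illustrative assumptions; the actual repository code may differ.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class PetApiSketchTest {

    @Test
    void createThenFetchPet() {
        // Given a pet payload, when we POST it, then the response echoes the pet
        given()
            .baseUri("https://petstore.swagger.io/v2")
            .contentType("application/json")
            .body("{\"id\": 987654321, \"name\": \"Rex\", \"status\": \"available\"}")
        .when()
            .post("/pet")
        .then()
            .statusCode(200)
            .body("name", equalTo("Rex"));

        // A negative GET for a pet id that does not exist should return 404
        given()
            .baseUri("https://petstore.swagger.io/v2")
        .when()
            .get("/pet/0")
        .then()
            .statusCode(404);
    }
}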

The generated request and response logs make it easy to understand failures and debug issues, which is critical when learning API automation.

[Image: REST Assured with BDD Playbook]

Why This BDD Approach Matters

Many teams learn REST Assured by copying isolated code snippets. While this may work for quick experiments, it often leads to brittle tests that are difficult to maintain or run in CI.

My playbook is designed around three practical goals:

  1. Reproducibility: The same Maven commands produce the same results every time.
  2. Readability: BDD-style tests act as living documentation for API behavior.
  3. Extendability: A clean structure that you can easily adapt for authentication, reporting, and CI.

This is the same structure I use when building proofs of concept and training automation teams in enterprise environments.

What Is Inside the Repository

This GitHub repository is intentionally compact and focused on learning by doing.

  • pom.xml for Maven configuration and dependencies
  • BaseTest.java for global REST Assured setup (sketched after this list)
  • PetApiTest.java containing BDD-style test cases
  • pet_create.json as a reusable JSON payload template
  • Simple scripts to run the demo with a single command
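
As an illustration of what the global setup can look like, here is a hedged sketch of a BaseTest-style class built around a shared RequestSpecification with logging filters. The base URI and the choice of filters are assumptions; the repository's actual implementation may differ.

import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.filter.log.RequestLoggingFilter;
import io.restassured.filter.log.ResponseLoggingFilter;
import org.junit.jupiter.api.BeforeAll;

public abstract class BaseTest {

    @BeforeAll
    static void globalSetup() {
        // Shared defaults for every test: base URI, JSON content type,
        // and request/response logging for easy debugging
        RestAssured.requestSpecification = new RequestSpecBuilder()
                .setBaseUri("https://petstore.swagger.io/v2")
                .setContentType("application/json")
                .addFilter(new RequestLoggingFilter())
                .addFilter(new ResponseLoggingFilter())
                .build();
    }
}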

How to Run the Demo Locally

You can run the entire demo in about two minutes.

  1. Clone the repository from GitHub.
  2. Ensure JDK 11 or higher and Maven are installed.
  3. Run the provided script for your operating system.

The script executes mvn clean test and prints clear REST Assured logs along with test results.

What Learners Practiced in the Lab

During the hands-on lab, participants actively modified and extended the tests.

  • Updated JSON payload placeholders to create unique resources
  • Executed POST and GET flows to verify API behavior
  • Added JSONPath assertions on nested fields (see the sketch after this list)
  • Reviewed logs to understand failures and data flow
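
A hedged sketch of such a nested-field assertion; the pet id and the category.name value are illustrative assumptions about a previously created resource.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

class NestedFieldSketchTest {

    @Test
    void petCategoryName() {
        given()
            .baseUri("https://petstore.swagger.io/v2")
        .when()
            .get("/pet/987654321")   // id of a previously created pet (illustrative)
        .then()
            .statusCode(200)
            .body("category.name", equalTo("dogs"));   // nested field via JSONPath
    }
}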

This approach ensures learners leave with working code and confidence to adapt it to other APIs.

Who This Playbook Is For

  • QA engineers transitioning from UI automation to API testing
  • SDETs building CI-friendly API automation suites
  • Engineers preparing for interviews that require hands-on REST Assured skills

The playbook is small enough to understand quickly, yet realistic enough to serve as a foundation for real projects.

Next Steps in the Learning Path

This session is the starting point. Future sessions will cover:

  • Advanced JSONPath and Hamcrest matchers
  • Data-driven API testing
  • Authentication flows such as OAuth and bearer tokens
  • CI/CD integration and reporting with tools like Allure

If you want a structured learning path or a customized proof of concept for your team, you can reach out directly.

If you want any of the following, send a message using the Contact Us (right pane) or message Inder P Singh (19 years' experience in Test Automation and QA) on LinkedIn at https://www.linkedin.com/in/inderpsingh/

  • Production-grade REST Assured automation templates with playbooks
  • Working REST Assured projects for your portfolio
  • Deep-dive hands-on REST Assured training
  • REST Assured resume updates

January 07, 2026

5 Git Commands That Feel Like Superpowers for QA and Automation Engineers

Summary: Git is often treated as a simple backup tool, but it is more powerful than that. This article explores five Git features that can save you from disasters, track down bugs faster, and dramatically improve collaboration using Git and GitHub.

Introduction: The Hidden Depths of a Daily Tool

For most SDETs and QA Automation Testers, Git is part of the daily routine. We add files, commit changes, push to a remote repository, and pull updates from teammates. Over time, Git starts to feel like nothing more than a smart backup. To learn more, view Git Interview Questions and Answers.

However, Git was designed to handle complex development scenarios, recover from mistakes, and support large teams working in parallel. Many of its most powerful features remain unused simply because SDETs and Automation Engineers do not know they exist.

Before diving deeper, it is important to clear up a common confusion.

Git vs GitHub: What Is the Difference?

Git is an open-source distributed version control system that runs locally on your machine. It tracks changes, manages branches, and records the full history of your code.

GitHub is a cloud-based platform that hosts Git repositories. It adds collaboration features such as pull requests, code reviews, issue tracking, and automation pipelines.

In short, Git is the engine, and GitHub is the collaboration platform built around it.

[Image: Git and GitHub differences]

Now let us look at five Git features that feel like superpowers once you start using them.

1. Safely Undo Changes Without Rewriting History

Undoing mistakes in a shared repository can be dangerous. Commands like git reset rewrite history and can cause serious problems for teammates who have already pulled the changes.

The safer alternative is git revert. Instead of deleting history, revert creates a new commit that reverses the changes introduced by an earlier commit.

This keeps the project history honest and easy to understand. Everyone can see what changed and why it was undone.
Example:

# Create a new commit that reverses the most recent commit
git revert HEAD

This approach is essential for team-based development and should be your default way of undoing public commits.

2. Let Git Hunt Down Bugs for You

Finding when a bug was introduced can be painful. Was it added yesterday, last week, or months ago?

git bisect turns Git into a debugging assistant. It uses a binary search strategy to quickly identify the exact commit that introduced a bug.

You mark one commit as bad and another as good. Git then checks out intermediate commits and asks you to test them. With each answer, Git narrows the search until the faulty commit is found.
Example:

# Start a bisect session, mark the current commit as bad,
# and mark the v1.0 tag as the last known good commit
git bisect start
git bisect bad
git bisect good v1.0

What could take hours manually can often be solved in minutes with git bisect.

3. Pause Your Work Without Making a Messy Commit

Sometimes you are in the middle of unfinished work when an urgent issue appears. Your changes are not ready to be committed, but you need to switch branches immediately.

git stash is the solution. It temporarily saves all uncommitted changes and restores your working directory to a clean state.

You can then fix the urgent issue and return to your work later.
Example:

git stash push -m "WIP changes"   # save uncommitted changes with a label
git checkout main                 # switch to the urgent work
# ... fix the urgent issue, switch back to your feature branch, then:
git stash pop                     # restore the stashed changes

This keeps your commit history clean and makes context switching painless.

4. Recover Lost Work Using Git Reflog

Accidentally deleting a branch or running a hard reset can feel like a disaster. It often looks like your work is gone forever.

In reality, Git keeps a private history of where your HEAD and branches have been. This history is stored in the reflog.

By checking the reflog, you can find the commit hash of lost work and restore it.
Example:

# List where HEAD has pointed recently to find the lost commit's hash
git reflog
# Recreate a branch at that commit (replace <commit-hash> with the real hash)
git branch recovered-branch <commit-hash>

The reflog is Git’s safety net. It provides peace of mind and protects you from most mistakes.

5. Git Is Local, GitHub Is Where Collaboration Happens

Git and GitHub are often used interchangeably, but they serve different roles.

Your work begins locally with Git. You commit changes, create branches, and manage history on your machine.

GitHub is where collaboration happens. It hosts your repository remotely and enables code reviews, pull requests, issue tracking, and team workflows.

Adding a remote repository connects your local Git project to GitHub.
Example:

# Link the local repository to a remote named origin on GitHub,
# then push the master branch and set it as the upstream
git remote add origin https://github.com/Inder-P-Singh/xpath-playbook
git push -u origin master

Understanding this separation clarifies how modern teams collaborate effectively.

GitHub also enables automation through GitHub Actions, allowing teams to automatically run tests, builds, and deployments on every pull request.

Conclusion

Git is far more than a tool for saving code. It is a powerful system designed to protect your work, improve collaboration, and solve complex development problems.

Once you start using features like revert, bisect, stash, reflog, and proper GitHub workflows, Git stops feeling like a chore and starts feeling like a superpower.

Which Git feature has saved you the most time, or which one are you excited to try next?

If you want any of the following, send a message using the Contact Us (right pane) or message Inder P Singh (19 years' experience in Test Automation and QA) on LinkedIn at https://www.linkedin.com/in/inderpsingh/

  • Production-grade Git/GitHub automation templates with playbooks
  • Working Git and GitHub projects for your portfolio
  • Deep-dive hands-on Git and GitHub Training
  • Git and GitHub resume updates

January 06, 2026

XPath Techniques To Make Your Automation Tests Unbreakable

Summary: Fragile XPath locators are one of the biggest causes of flaky automation tests. This article shares five proven XPath techniques that help you write stable, readable, and long-lasting locators that can survive UI changes. First, view the XPath tutorial for beginners below. Then, read on.

Introduction

If you work in test automation, you know the frustration well. Tests fail not because the application is broken, but because a small UI change invalidated your locators.

This problem wastes time, increases maintenance effort, and erodes trust in automation. The good news is that most of these failures are avoidable.

Stop thinking of XPath as just a way to locate elements and start treating it as a language for describing elements in a stable and logical way.

Try the free XPath Playbook on GitHub with demo XPaths.

In this post, we will look at five XPath techniques that can turn brittle locators into robust, maintainable ones.

1. Avoid Absolute Paths and Prefer Relative XPath

The first step toward reliable locators is understanding the difference between absolute and relative XPath.

An absolute XPath starts from the root of the document and defines every step along the way. While this may look precise, it is extremely fragile. A single extra container added to the page can break the entire path.

Relative XPath, on the other hand, focuses on the unique characteristics of the target element and ignores irrelevant structural details.

For example, instead of relying on a full path from the root, describe the element based on a stable attribute or relationship. Relative XPath continues to work even when the surrounding structure changes.

Avoid: /html/body/div[2]/div[1]/form/input[2]
Prefer: //form//input[@name='email']

As a rule, absolute XPath has no place in a professional automation framework.

Note: Want to learn XPath in detail? View How to find XPath tutorial.

2. Use XPath Axes to Navigate Smartly

Many testers think XPath only works top to bottom through the DOM. This limited understanding leads to weak locators.

XPath axes allow you to navigate in all directions: up, down, and sideways. This lets you describe an element based on its relationship to another stable element.

Some commonly used axes include ancestor, parent, following-sibling, and preceding-sibling.

This approach is especially powerful when the element you want does not have reliable attributes. Instead of targeting it directly, you anchor your XPath to nearby text or labels that rarely change.

For example, rather than locating an input field directly, you can describe it as the input that follows a specific label. This makes the locator far more resilient.

//label[normalize-space()='Password']/following-sibling::input[1]
//div[contains(@class,'card')]/ancestor::section[1]

3. Handle Messy Text with normalize-space()

Text-based locators often fail because of hidden whitespace. Extra spaces, line breaks, or formatting changes can cause simple text checks to stop working.

The normalize-space() function solves this problem by trimming leading and trailing spaces and collapsing multiple spaces into one.

//button[normalize-space()='Submit']
//h3[normalize-space()='Account Settings']

When you use normalize-space(), your locator becomes immune to minor formatting differences in the UI. This single function can eliminate a surprising number of flaky failures.

If you are locating elements by visible text, normalize-space() should be your default choice.

[Image: Brittle XPath Locators vs Robust XPath Locators]

4. Defeat Dynamic Attributes with Partial Matching

Modern web applications often generate dynamic values for attributes like id and class. Trying to match these values exactly is a common mistake.

XPath provides functions like contains() and starts-with() that allow you to match only the stable portion of an attribute.

Use starts-with() when the predictable part appears at the beginning of the value, and contains() when it can appear anywhere.

//input[starts-with(@id,'user_')]
//div[contains(@class,'item-') and contains(@class,'active')]

This technique is essential for dealing with dynamic IDs, timestamps, and auto-generated class names. It dramatically reduces locator breakage when the UI changes slightly.

5. Combine Conditions for Precise Targeting

Sometimes no single attribute is unique enough to identify an element reliably. In such cases, combining multiple conditions is the best approach.

XPath allows you to use logical operators like and and or to build precise locators. This is similar to using a composite key in a database.

By combining class names, text, and attributes, you can describe exactly the element you want without relying on fragile assumptions.

//a[@role='button' and contains(@href,'/checkout') and normalize-space()='Buy now']

This strategy ensures that your locator is specific without being overly dependent on one fragile attribute.
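
In Selenium (Java), such a combined locator can be wrapped in a small page-object method. This is a hypothetical sketch; the CheckoutPage class and the page it targets are assumptions for illustration.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class CheckoutPage {

    private final WebDriver driver;

    public CheckoutPage(WebDriver driver) {
        this.driver = driver;
    }

    public void clickBuyNow() {
        // Combined conditions: role attribute, partial href, and normalized visible text
        driver.findElement(By.xpath(
            "//a[@role='button' and contains(@href,'/checkout') and normalize-space()='Buy now']"
        )).click();
    }
}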

Conclusion: Write Locators That Survive Change

Stable XPath locators are not about clever tricks. They are about clear thinking and disciplined design.

When you start describing elements based on stable characteristics and relationships, your automation becomes more reliable and easier to maintain.

Adopt a locator-first mindset. Write XPath expressions that anticipate change instead of reacting to it. That mindset is what separates brittle test suites from professional automation.

To get working Selenium/Cypress/Playwright projects for your portfolio (paid service), deep-dive in-person Test Automation and QA training, or XPath resume updates, send a message using the Contact Us (right pane) or message Inder P Singh (18 years' experience in Test Automation and QA) on LinkedIn at https://www.linkedin.com/in/inderpsingh/

January 05, 2026

5 Powerful TestNG Features That Will Transform Your Automation Framework

Summary: Many teams use TestNG only for basic test annotations, but the framework offers more. This article explores five powerful TestNG features that help you build resilient, scalable, and professional test automation frameworks. View TestNG Interview Questions and Answers here.

Introduction

For many developers and SDETs, TestNG starts and ends with the @Test annotation. It is often used simply to mark methods as test cases and run them in sequence.

But using only @Test means you are missing most of what makes TestNG such a powerful test framework. TestNG was designed to solve real-world automation problems like flaky tests, complex execution flows, reporting, and parallel execution.

In this post, we will explore TestNG features that can move you from writing basic tests to designing a robust automation architecture. First, view my TestNG Tutorial for beginners below. Then, read on.


1. Stop at the End, Not the First Failure with SoftAssert

By default, TestNG assertions are hard assertions. As soon as one assertion fails, the test method stops executing. This behavior is efficient, but it can be frustrating when validating multiple conditions on the same page.

SoftAssert solves this problem by allowing the test to continue execution even after an assertion failure. Instead of stopping immediately, all failures are collected and reported together at the end of the test.

You create a SoftAssert object, perform all your checks, and then call assertAll() once. If you forget that final step (which is a common mistake), the test will pass even when validations fail.

SoftAssert is especially useful for UI testing, where validating all elements in a single run saves time and reduces repeated test executions.
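
A minimal sketch, assuming the string and boolean values stand in for data read from the application under test:

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertDemo {

    @Test
    public void validateDashboard() {
        String title = "Dashboard";     // would be read from the page in a real test
        boolean logoutVisible = true;   // likewise

        SoftAssert softly = new SoftAssert();
        softly.assertEquals(title, "Dashboard", "Title mismatch");
        softly.assertTrue(logoutVisible, "Logout link missing");

        // Without this final call, collected failures are silently ignored
        softly.assertAll();
    }
}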

2. Reduce Noise from Flaky Tests with RetryAnalyzer

Every automation engineer has dealt with flaky tests. These tests fail intermittently due to temporary issues like network delays, browser instability, or backend hiccups.

TestNG provides a built-in solution through RetryAnalyzer. This feature allows you to automatically retry a failed test a specified number of times before marking it as failed.

You implement the IRetryAnalyzer interface and define retry logic based on a counter. Once configured, a test can be retried automatically without any manual intervention.

RetryAnalyzer should be used carefully. It is meant to handle transient failures, not to hide real defects. When used correctly, it can significantly stabilize CI pipelines.
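
A minimal sketch of such an analyzer; the retry limit of 2 is an arbitrary choice for illustration.

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {

    private static final int MAX_RETRIES = 2;  // arbitrary limit for illustration
    private int attempt = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (attempt < MAX_RETRIES) {
            attempt++;
            return true;   // ask TestNG to rerun the failed test
        }
        return false;      // give up and report the failure
    }
}

Attach it to a test with @Test(retryAnalyzer = RetryAnalyzer.class).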

3. Build Logical Test Flows with Groups and Dependencies

TestNG allows you to control execution flow without writing complex conditional logic. Two features make this possible: groups and dependencies.

Groups allow you to categorize tests using meaningful labels like smoke, sanity, or regression. You can then selectively run specific groups using your test configuration.

Dependencies let you define relationships between tests. A test can be configured to run only if another test or group passes successfully. If the dependency fails, the dependent test is skipped automatically.

This approach is ideal for modeling workflows such as login before checkout or setup before validation. Just be careful not to create long dependency chains, as one failure can skip many tests.
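
A minimal sketch; the login and checkout methods are placeholders for real steps.

import org.testng.annotations.Test;

public class FlowDemo {

    @Test(groups = {"smoke"})
    public void login() {
        // ... perform login
    }

    // Runs only if login() passes; skipped automatically otherwise
    @Test(groups = {"regression"}, dependsOnMethods = {"login"})
    public void checkout() {
        // ... perform checkout
    }
}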

To get working TestNG projects for your portfolio (paid service) and TestNG resume updates, send a message using the Contact Us (right pane) or message Inder P Singh on LinkedIn at https://www.linkedin.com/in/inderpsingh/

4. Speed Up Execution with Parallel DataProviders

Data-driven testing is one of TestNG’s most popular features, thanks to the @DataProvider annotation. It allows the same test to run multiple times with different input data.

What many teams miss is that DataProviders can run in parallel. By enabling parallel execution, each dataset can be processed simultaneously across multiple threads.

This feature is very useful for large datasets, API testing, and scenarios where execution time is critical. When combined with a well-designed thread-safe framework, it can reduce overall test duration.

Parallel execution requires careful resource management. Shared objects and static variables must be handled correctly to avoid race conditions.
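
A minimal sketch; the user ids are placeholder data.

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class ParallelDataDemo {

    // parallel = true lets TestNG hand datasets to the test across multiple threads
    @DataProvider(name = "userIds", parallel = true)
    public Object[][] userIds() {
        return new Object[][] { {"u1"}, {"u2"}, {"u3"} };
    }

    @Test(dataProvider = "userIds")
    public void fetchUserProfile(String userId) {
        // Each dataset may run on its own thread; keep shared state thread-safe
        System.out.println(userId + " on " + Thread.currentThread().getName());
    }
}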

5. Extend the Framework with TestNG Listeners

Listeners are one of TestNG’s most powerful features. They allow you to hook into test execution events and run custom logic when those events occur.

Using listeners, you can perform actions such as taking screenshots on failure, logging detailed execution data, integrating with reporting tools, or sending notifications.

For example, the ITestListener interface lets you execute code when a test starts, passes, fails, or is skipped. This makes listeners ideal for cross-cutting concerns that should not live inside test methods.

Listeners become even more powerful when combined with features like RetryAnalyzer, enabling advanced behaviors such as alerting only after all retries fail.
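
A minimal sketch of an ITestListener that reacts to failures (since TestNG 7 the interface's methods have default implementations, so only the needed method is overridden); the screenshot step is a placeholder.

import org.testng.ITestListener;
import org.testng.ITestResult;

public class FailureListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("FAILED: " + result.getName());
        // e.g., capture a screenshot or push a notification here
    }
}

Register it with @Listeners(FailureListener.class) on the test class, or through the listeners section of testng.xml.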

Conclusion

TestNG is far more than a basic testing framework. Its strength lies in features that give you control over execution, resilience against failures, and scalability for large test suites.

By using SoftAssert, RetryAnalyzer, groups and dependencies, parallel DataProviders, and listeners, you can build automation frameworks that are cleaner, faster, and more reliable.

Now take a look at your current TestNG suite. Which of these features could you apply to remove your biggest testing bottleneck?

If you want deep-dive, in-person, projects-based TestNG training, send a message using the Contact Us (right pane) or message Inder P Singh (18 years' experience in Test Automation and QA) on LinkedIn at https://www.linkedin.com/in/inderpsingh/