Artificial Intelligence is booming and has become integral to nearly every industry, logistics and e-commerce included. At Shiprocket, we have embraced this technological shift by integrating AI across our operations, from order processing and delivery optimization to customer support and warehouse management. These implementations have significantly enhanced our ability to provide seamless shipping experiences while maintaining operational excellence at scale.
AI plays a significant role across various development and quality lifecycle stages. In Quality Assurance (QA), it is transforming the landscape by enhancing efficiency, accuracy, and scalability. At Shiprocket, our QA teams leverage AI-powered tools to ensure the reliability of our platforms that handle millions of shipments daily.
AI-powered tools excel at automating repetitive testing tasks such as regression and performance testing, saving human effort and minimizing errors. In addition, AI uses predictive analytics to catch bugs early in the development lifecycle, saving both time and cost. At Shiprocket, this integration delivers higher-quality software and frees our teams to focus on the strategic, creative QA initiatives that directly impact customer experience.
In the age of rapid software development, AI in QA is not a convenience but a necessity for delivering reliable, high-quality products in shorter timeframes. It lets our QA teams at Shiprocket focus on strategic activities, such as exploratory testing and improving test design, while leaving repetitive tasks to intelligent automation. This approach has been crucial in maintaining our platform's reliability as we continue to scale and serve more e-commerce businesses across the country.
Integrating AI with QA Processes
Breaking the process down: effort and progress have been made in both manual and automated QA, with manual testing still dominating. Our automation work has two major focus areas, with some projects targeting Web Automation and others API Automation.
| Web Automation | API Automation |
|---|---|
| We focused our efforts on the most critical modules: those that are most heavily used, most vulnerable, and most exposed to users. | Our pre-built framework makes API automation straightforward: the test suite is maintained in a Google Sheet, where the API data to be automated is entered in a defined format. |
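To make the sheet-driven approach concrete, here is a minimal sketch of a runner that reads API test cases from such a Google Sheet and executes them. It assumes the gspread and requests libraries and hypothetical column names (Endpoint, Method, Payload, Expected Status); the format our actual framework uses is more detailed.

```python
# Minimal sketch of a sheet-driven API test runner.
# Assumptions: gspread + requests are installed, a service-account
# credentials file exists, and the sheet uses the hypothetical columns
# Endpoint, Method, Payload, Expected Status.
import json

import gspread
import requests

def run_suite(sheet_id: str, credentials_file: str = "service_account.json") -> None:
    gc = gspread.service_account(filename=credentials_file)
    worksheet = gc.open_by_key(sheet_id).sheet1

    # Each row describes one API test case in a fixed column format.
    for row in worksheet.get_all_records():
        response = requests.request(
            method=row["Method"],
            url=row["Endpoint"],
            json=json.loads(row["Payload"] or "{}"),
        )
        verdict = "PASS" if response.status_code == int(row["Expected Status"]) else "FAIL"
        print(f'{verdict} {row["Method"]} {row["Endpoint"]} -> {response.status_code}')
```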
“Though the manual practices and the API automation framework delivered results, test case and test data generation still required significant effort and time.”
Challenges with Traditional QA Approaches
In traditional QA workflows, we faced several limitations that hindered efficiency and scalability:
- Limited Flexibility: The framework could execute an entire test suite but could not execute a single test case.
- Manual Documentation: While entering APIs into a spreadsheet was straightforward, creating and documenting test cases manually was tedious and time-consuming.
- Time-Intensive Processes: Manual QA efforts often required 3-4 days to complete, delaying project timelines.
These challenges underscored the need for a centralized, AI-driven QA solution to simplify workflows and enhance productivity.
QA AI Dashboard: The platform streamlines test case execution, documentation, and management, making QA more flexible, scalable, and insightful. Features like automated test case generation and real-time tracking allow our teams to focus on high-value tasks while AI handles the rest. Embracing AI in QA is not just an upgrade; it is a necessity for speed, accuracy, and continuous improvement.
QA Dashboard: What do we offer?
The Dashboard is designed to bridge the gap between manual and automation testing. It reduces effort while improving efficiency, accuracy, and scalability.
Addressing traditional QA challenges, the dashboard automates test case and data generation, test execution, test case documentation, and real-time progress tracking. It also supports creating a custom Suite (a group of APIs executed together) and a custom Journey (an ordered set of APIs mirroring a user journey).
This allows our teams to shift their focus from tedious manual work to high-value tasks like exploratory testing and strategic planning.
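For illustration, a Suite and a Journey could be modeled as simply as the sketch below; the class names are hypothetical and not the dashboard's actual schema.

```python
# Illustrative data model for custom Suites and Journeys.
# ApiCall, Suite, and Journey are hypothetical names, not the
# dashboard's real schema.
from dataclasses import dataclass, field

@dataclass
class ApiCall:
    name: str
    method: str
    endpoint: str

@dataclass
class Suite:
    # A group of APIs to be executed together.
    name: str
    apis: list[ApiCall] = field(default_factory=list)

@dataclass
class Journey:
    # An ordered set of APIs mirroring a real user journey,
    # e.g. login -> create order -> assign courier -> track shipment.
    name: str
    steps: list[ApiCall] = field(default_factory=list)
```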
Understanding the QA AI Dashboard Tech Stack & Workflow
The QA AI Dashboard is built on a multi-layered architecture that integrates frontend, backend, AI processing, and database management. Here’s a breakdown of the key components:
1. User Interaction Layer (Frontend)
- Bootstrap FE: A responsive web interface designed with Bootstrap to allow users to interact with the dashboard and trigger AI-based test generation.
2. Backend Layer
- Java Spring Boot: Acts as the central API gateway handling user requests, business logic, and communication between different services.
- MySQL Database: Stores test case data, user inputs, and AI-generated results for further analysis.
3. AI Processing Layer
- Python Service: Handles AI-based test generation and processing.
- OpenAI API: Provides AI-powered insights, test case recommendations, and natural language processing capabilities.
- LangChain: Facilitates prompt management, AI workflow orchestration, and retrieval-augmented generation (RAG).
- Qdrant: A high-performance vector database for similarity searches and efficient test case recommendations.
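To make this layer concrete, here is a minimal sketch of how the Python service could tie these pieces together. For brevity it calls the openai and qdrant-client SDKs directly rather than orchestrating through LangChain, and the model names, the "test_cases" collection, and the payload fields are assumptions.

```python
# Minimal sketch of the AI processing layer: embed the request,
# retrieve similar past test cases from Qdrant (RAG), then ask the LLM
# to generate new cases. Model names, the "test_cases" collection, and
# the payload fields are assumptions.
from openai import OpenAI
from qdrant_client import QdrantClient

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(url="http://localhost:6333")

def generate_test_cases(api_spec: str) -> str:
    # Embed the API spec and look up similar past test cases.
    vector = openai_client.embeddings.create(
        model="text-embedding-3-small", input=api_spec
    ).data[0].embedding
    hits = qdrant.search(collection_name="test_cases", query_vector=vector, limit=3)
    examples = "\n".join(hit.payload.get("test_case", "") for hit in hits)

    # Generate new test cases, grounded in the retrieved examples.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You generate API test cases."},
            {"role": "user",
             "content": f"API spec:\n{api_spec}\n\nSimilar past cases:\n{examples}"},
        ],
    )
    return response.choices[0].message.content
```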
Workflow of QA AI Dashboard
The system follows a structured workflow to ensure efficient test case generation, validation, and storage. Below is a step-by-step explanation of how the QA AI Dashboard functions:
- Step 1: User Interaction – The user interacts with the Bootstrap-based front end to initiate a test case generation request.
- Step 2: Request Handling by Spring Boot – The front end sends the request to the Java Spring Boot backend, which processes and validates the input.
- If necessary, Spring Boot retrieves existing data from MySQL to enhance the request.
- Step 3: AI Processing in Python – Spring Boot forwards the validated request to the Python-based AI service.
- The Python service acts as the central AI engine, coordinating interactions with OpenAI, LangChain, and Qdrant.
- Step 4: AI-Powered Test Generation – OpenAI generates intelligent test cases based on input scenarios.
- LangChain refines the AI workflow by structuring prompts and optimizing responses.
- Qdrant helps retrieve similar past test cases from the vector database, improving the accuracy of recommendations.
- Step 5: Data Storage and Response Handling – The Python service sends the generated test cases back to Spring Boot. The results are checked for duplicates and stored in MySQL for future reference, and the processed test cases are returned to the front end for the user to review.
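The duplicate check in Step 5 can be illustrated with a short sketch: hash a normalized form of each generated test case and store it only if the hash is new. This is our reading of the mechanism rather than the exact implementation, and sqlite3 stands in for MySQL here.

```python
# Sketch of a duplicate check before storage. sqlite3 stands in for
# MySQL; the table layout is illustrative.
import hashlib
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_cases (hash TEXT PRIMARY KEY, body TEXT)")

def store_if_new(test_case: dict) -> bool:
    # Normalize the case so key order does not affect the hash.
    body = json.dumps(test_case, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    try:
        conn.execute("INSERT INTO test_cases VALUES (?, ?)", (digest, body))
        conn.commit()
        return True   # new case, stored
    except sqlite3.IntegrityError:
        return False  # duplicate, skipped

print(store_if_new({"api": "/orders", "expect": 200}))  # True
print(store_if_new({"expect": 200, "api": "/orders"}))  # False: same case
```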
Dashboard Components & Target Usage
The AI-powered test case generation system follows a structured workflow to automate and optimize API testing:
- User Authentication & Module Selection: The user logs into the system using their credentials and is redirected to a module selection screen. Upon selecting the desired module, the user proceeds to the next step.
- Loading API Test Cases from a Google Sheet: The system requires a Sheet ID, which contains manually created API test cases currently being executed in automation. The sheet holds essential API details, including a validation section that helps AI better understand each API.
- Selecting an API and Generating Test Scenarios: Users can select a specific API from the sheet to begin the test scenario generation process. The system then sends OpenAI a request containing API data and validation details. These validation parameters enhance AI’s comprehension of the API, allowing it to generate precise and meaningful test scenarios.
- Reviewing and Refining AI-Generated Scenarios: AI returns a set of test scenarios, which users can proofread, modify, or extend with missing scenarios. This step ensures that all edge cases and critical paths are covered before proceeding to test case generation.
- Generating Test Cases from Scenarios: Once finalized, the user triggers the test case generation process. The scenarios, along with API data, are sent back to OpenAI, which then formulates test cases based on each scenario. These test cases include structured request bodies aligned with the defined API validations.
- Exporting Test Cases: After generation, users can choose to:
- Write the test cases back to the Google Sheet for seamless integration into their QA workflow.
- Download a CSV file for offline review and further modifications before adding the cases back to the sheet.
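The CSV export in the last step could look like the sketch below; the column set is illustrative rather than the dashboard's actual export format.

```python
# Sketch of the CSV export step; the column set is an assumption.
import csv

def export_to_csv(test_cases: list[dict], path: str = "generated_test_cases.csv") -> None:
    fieldnames = ["scenario", "request_body", "expected_status", "validation"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for case in test_cases:
            writer.writerow({k: case.get(k, "") for k in fieldnames})

export_to_csv([{
    "scenario": "Create order with a missing pickup address",
    "request_body": '{"pickup_address": null}',
    "expected_status": 422,
    "validation": "pickup_address is required",
}])
```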
Benefits of QA AI Dashboard
The QA AI Dashboard delivers tangible benefits that address traditional QA challenges:
- Efficiency Gains: By automating test case creation and execution, the dashboard significantly reduces manual efforts.
- Time Savings: QA cycles that previously took days can now be completed in hours.
- Improved Documentation: Centralized and automated test case documentation simplifies maintenance.
- Real-Time Tracking: Provides transparency into QA progress with real-time updates.
Real-World Impact
QA Feedback: Evaluating AI-Driven Test Case Generation
As AI-driven test case generation continues to evolve, feedback from QA teams plays a crucial role in refining and optimizing the process. Integrating AI within API testing has significantly improved efficiency, accuracy, and test coverage. However, certain areas require further enhancement to ensure the highest level of reliability and completeness.
Key Insights from QA Evaluation
1. Relevance: Ensuring Critical Feature Validation
One of the strongest aspects of AI-generated test cases is their ability to validate critical API features effectively. The AI-powered approach ensures comprehensive coverage of API functionalities, helping teams identify potential vulnerabilities and edge cases that may otherwise be overlooked.
2. Quality: Alignment with API Requirements
While AI-generated test cases generally align well with API specifications, some challenges arise when validation details are missing or too generic. Certain pods have encountered issues where AI struggles to create precise test cases due to incomplete validation data. This highlights the importance of well-defined API validations to enhance test case accuracy.
3. Effort Reduction: Boosting Efficiency by 50-70%
One of the biggest advantages of AI-driven test case generation is the significant reduction in manual effort, estimated at 50-70%. The automation of test scenario creation has particularly benefited positive and negative test case generation, allowing teams to focus on more complex testing aspects rather than repetitive tasks.
4. Usage: Addressing Manual Dependencies
While most API use cases are well covered, the process still requires manual intervention to add validations. This dependency can impact the completeness of the generated test cases, leading to potential gaps in coverage. Automating the validation input process or improving AI’s ability to infer missing validations could further enhance test case accuracy.
5. Feedback: Strengthening Validation Processes
Overall, AI-generated test cases have proven effective in covering a wide range of scenarios. However, a more robust mechanism is needed to ensure all necessary validations are captured. This will reduce manual dependencies and enhance the reliability of the generated test cases, leading to higher accuracy and confidence in automated testing.