Experience Sharing on Building an Automated Testing Framework for Mobile App Development
Original · ZenTao Content · 2025-11-10 09:00:00
In contemporary mobile application development, it is widely recognized that release cycles must be both rapid and high-quality—a balance that is challenging to achieve. Our team has encountered significant obstacles, including protracted release cycles, constantly evolving requirements, and compressed testing periods, which have often placed us in a reactive mode. As a seasoned IT engineer specializing in testing and development efficiency, I will draw on our project of developing an automated testing framework for an Android application to discuss how automation can enhance efficiency. This article is particularly pertinent to team leaders and mid-to-senior executives concerned with R&D efficiency, and I hope it offers valuable insights to peers in the field.
1. Project Background and Challenges
Our flagship product, Mobile App Y, undergoes a protracted and multi-stage version iteration process, from requirements analysis to final release. Each version incorporates a multitude of requirements, and the accumulating complexity of business logic and functional modules makes comprehensive testing exponentially more difficult. This decelerates the overall pace, diminishes the product's responsiveness to market dynamics, and ultimately impedes the attainment of business objectives.
The version planning cycle can be broken down into the following key stages:
- Version Notification: The version plan is communicated internally to business teams 1.5 months in advance.
- Requirement Cut-off: Commit access to the main branch is disabled at a specified deadline to ensure requirements are completed on time.
- Release Branch Creation: Version requirements are frozen, a release branch is created, and main branch commits are re-enabled.
- Integration Testing: Integration testing is performed on the release branch, complemented by automated testing.
- Regression Testing: Issues are addressed, and updated packages undergo regression testing, which includes automated tests and defect verification.
- Staged Release: Once staging criteria are met, official packages and staged release version numbers are generated for incremental rollout.
- Staged Package Updates and Monitoring: Critical issues are resolved, automated testing continues, and business stakeholders monitor the outcomes of new features.
- Official Release: A release review is conducted, leading to the generation and publication of official packages and version numbers, followed by upgrade notifications.
This end-to-end process, from planning to official release, encompasses requirements management, testing and validation, staged release, and final launch, ensuring methodical and controlled version progression. Business teams have expressed concerns that a single release often takes two months or more and is frequently delayed. Missing a release window entails a wait of at least two months for the next opportunity, creating significant pressure to meet business KPIs and fueling a frantic rush to meet every version's deadline.
Amid rapid market shifts and considerable user feedback, ad hoc requests from management are commonplace. Each change necessitates corresponding adjustments to test cases, complicating scope management and occasionally resulting in untested scenarios. This not only jeopardizes product quality but also escalates tensions among testing, development, and business teams, leading to widespread frustration.
Development delays have become commonplace, invariably eroding testing time. Consequently, releases often proceed with inherent risks or necessitate emergency package replacements and repeated staged releases—processes that are both time-consuming and resource-intensive. Each package replacement and subsequent staged release, from initial deployment to quality and business data monitoring, requires a minimum of three days. Constrained by tight deadlines, manual testing often lacks the depth and thoroughness required to ensure quality. As a result, version delays have become a regular occurrence.
2. Upholding Quality Through Breakthroughs in Testing Efficiency
Fortunately, the entire team has reached a consensus that quality is non-negotiable. Since quality standards could not be compromised, we focused on improving testing efficiency.
The primary advantage of automated testing lies in its execution speed. Whereas UI testing previously relied on manual operations, core flows such as login, search, and update download are now executed in parallel through scripts. Tasks that once required hours can now be completed in minutes. We also consolidated the essential validations for each release into a “Version Checklist,” which we have progressively automated and integrated into the daily CI pipeline. Test results are now automatically distributed via email notifications.
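To make this concrete, below is a minimal sketch of the kind of CI step we mean: it runs the checklist suite with pytest (in parallel via pytest-xdist) and mails a plain-text summary. The suite path, SMTP host, and recipient addresses are placeholders, not our actual configuration.

```python
# run_version_checklist.py -- illustrative nightly CI step (paths and hosts are assumptions)
import smtplib
import subprocess
from email.mime.text import MIMEText

SUITE_PATH = "tests/version_checklist"        # hypothetical location of the checklist suite
REPORT_FILE = "checklist_report.txt"
SMTP_HOST = "smtp.example.com"                # placeholder mail server
RECIPIENTS = ["[email protected]"]     # placeholder distribution list


def run_suite() -> int:
    """Run the checklist suite with pytest and capture a plain-text report."""
    result = subprocess.run(
        ["pytest", SUITE_PATH, "-n", "4", "--tb=short", "-q"],  # -n 4: parallel workers via pytest-xdist
        capture_output=True,
        text=True,
    )
    with open(REPORT_FILE, "w", encoding="utf-8") as fh:
        fh.write(result.stdout)
    return result.returncode


def mail_report(exit_code: int) -> None:
    """Send the report so the team sees results without opening the CI console."""
    status = "PASSED" if exit_code == 0 else "FAILED"
    with open(REPORT_FILE, encoding="utf-8") as fh:
        body = fh.read()
    msg = MIMEText(body)
    msg["Subject"] = f"[Version Checklist] nightly run {status}"
    msg["From"] = "[email protected]"
    msg["To"] = ", ".join(RECIPIENTS)
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)


if __name__ == "__main__":
    code = run_suite()
    mail_report(code)
    raise SystemExit(code)   # non-zero exit marks the CI stage as failed
```

In our setup, a scheduled Jenkins job invokes a script along these lines against the nightly build; the non-zero exit code is what flags the stage as failed.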
We further enhanced the framework by incorporating log capture and exception monitoring, which has improved the accuracy of issue identification. Interface test automation enables real-time validation of parameters and return values, offering more comprehensive coverage than manual sampling. This method helps uncover hidden defects such as data format errors and permission vulnerabilities at an early stage. For business-critical interfaces, we perform automated probing at an appropriately calibrated frequency—taking into account production environment load and client-side invocation patterns—to monitor their availability. This allows us to detect anomalies before most users are affected, facilitating timely intervention or graceful service degradation to prevent widespread complaints.
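The probe below is a minimal sketch of that idea: it calls an assumed business-critical endpoint at a configurable interval, validates the status code and a couple of expected response fields, and logs an alert when something looks off. The endpoint URL, field names, and interval are illustrative.

```python
# api_probe.py -- illustrative availability probe for a critical interface (URL and fields assumed)
import logging
import time

import requests

ENDPOINT = "https://api.example.com/v1/search"   # placeholder business-critical endpoint
PROBE_INTERVAL_SECONDS = 300                     # calibrated to avoid adding noticeable load
EXPECTED_FIELDS = ("code", "data")               # fields the client relies on

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api-probe")


def probe_once() -> bool:
    """Return True if the endpoint responds correctly, False otherwise."""
    try:
        resp = requests.get(ENDPOINT, params={"keyword": "ping"}, timeout=5)
    except requests.RequestException as exc:
        log.error("probe failed to connect: %s", exc)
        return False

    if resp.status_code != 200:
        log.error("unexpected status %s", resp.status_code)
        return False

    try:
        payload = resp.json()
    except ValueError:
        log.error("response is not valid JSON")
        return False

    missing = [f for f in EXPECTED_FIELDS if f not in payload]
    if missing:
        log.error("response missing fields: %s", missing)
        return False
    return True


if __name__ == "__main__":
    while True:
        if not probe_once():
            # In a real setup this would page the on-call channel; here we only log.
            log.warning("critical interface check failed, escalate for review")
        time.sleep(PROBE_INTERVAL_SECONDS)
```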
Test cases have been systematically organized by scenario and module. Foundational functions such as login and registration can be reused across versions with minimal parameter adjustments. When new requirements are introduced, unless they involve completely new scenarios, relevant existing test cases are included in regression testing, thereby reducing effort and duplication.
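Reuse largely comes down to parameterization. The sketch below shows the pattern with pytest: a shared login case driven by a data table, so that a new version usually only adds or edits rows. The FakeAppClient is a hypothetical stand-in for our framework's real client fixture.

```python
# test_login.py -- sketch of a reusable, parameterized login case
# (FakeAppClient stands in for the framework's real client fixture)
from dataclasses import dataclass

import pytest


@dataclass
class LoginResult:
    success: bool


class FakeAppClient:
    """Stand-in for the framework's app client; real logic lives elsewhere."""

    VALID = {("regular_user", "correct-password")}

    def login(self, username: str, password: str) -> LoginResult:
        return LoginResult(success=(username, password) in self.VALID)


@pytest.fixture
def app_client() -> FakeAppClient:
    return FakeAppClient()


# Account/version combinations reused across releases; when a new version
# tweaks login requirements, usually only this table changes.
LOGIN_CASES = [
    ("regular_user", "correct-password", True),
    ("regular_user", "wrong-password", False),
    ("locked_user", "correct-password", False),
]


@pytest.mark.parametrize("username, password, should_succeed", LOGIN_CASES)
def test_login(app_client, username, password, should_succeed):
    result = app_client.login(username=username, password=password)
    assert result.success is should_succeed
```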
In terms of tooling, we adopted Appium for cross-platform UI testing, Postman for API testing, and Jenkins for CI/CD orchestration. We also enhanced our in-house framework to support parameterization and multi-environment configuration, and leveraged cloud testing platforms to expand device compatibility coverage.
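As an illustration of the UI layer, a stripped-down Appium case looks roughly like the following. It assumes the Appium Python client (options-style API), a locally running Appium server, and placeholder APK paths and element IDs; the ENVIRONMENTS table stands in for the multi-environment configuration mentioned above.

```python
# ui_login_test.py -- minimal Appium sketch (server URL, capabilities, and IDs are assumptions)
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

# Stand-in for the multi-environment configuration mentioned above.
ENVIRONMENTS = {
    "staging": {"server": "http://127.0.0.1:4723", "apk": "/builds/app-staging.apk"},
    "release": {"server": "http://127.0.0.1:4723", "apk": "/builds/app-release.apk"},
}


def build_driver(env: str) -> webdriver.Remote:
    cfg = ENVIRONMENTS[env]
    options = UiAutomator2Options()
    options.platform_name = "Android"
    options.device_name = "emulator-5554"       # or a cloud-device identifier
    options.app = cfg["apk"]
    return webdriver.Remote(cfg["server"], options=options)


def test_login_flow():
    driver = build_driver("staging")
    try:
        # Element IDs below are placeholders for the app's real resource IDs.
        driver.find_element(AppiumBy.ID, "com.example.app:id/username").send_keys("demo")
        driver.find_element(AppiumBy.ID, "com.example.app:id/password").send_keys("secret")
        driver.find_element(AppiumBy.ID, "com.example.app:id/login_button").click()
        assert driver.find_element(AppiumBy.ID, "com.example.app:id/home_banner").is_displayed()
    finally:
        driver.quit()
```

On a cloud testing platform, essentially only build_driver changes: the server URL and device capabilities point at the vendor's device grid instead of a local emulator.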
3. Phased Implementation
A single initiative is not enough; systematic progress across four dimensions—people, processes, tools, and organization—is essential.
- People: We strengthened the Test Development role by dedicating resources to script and framework development. We also encouraged testers to participate early in requirement reviews and invited developers to attend test showcases, thereby breaking down functional silos.
- Process: Testers and developers agreed that upon handoff, developers would provide details on code changes and recommended test coverage as input for testing.
For new features, functional testing is performed on the development branch after the completion of the product demo. Clear entry and exit criteria have been defined: code may only be merged into the main branch after obtaining test approval (against required functional coverage and defect density thresholds; a simple gate check of this kind is sketched after this list) and passing staging validation, which includes small-scale user trials to collect performance and usability feedback.
Merges into the main branch trigger daily builds and regression testing. This establishes a closed-loop process of “branch testing, mainline integration, bug fixes, staged release,” minimizing gaps introduced by manual oversight.
- Tools and Technology: Beyond the toolchain mentioned earlier, we developed compatibility tests targeting diverse devices and operating systems. By using cloud testing platforms, we covered real-user scenarios across the top 50 device models—taking into account both device type and Android version.
- Organization: Roles and responsibilities have been clearly defined, and regular alignment meetings are held. A dedicated Automation Testing Team was formed, with developers taking ownership of mainline crash resolution and the framework roadmap. Weekly cross-team syncs were instituted to track progress and mitigate risks. Business teams actively participate in review and staging acceptance meetings, and monthly retrospectives are conducted to ensure the automation strategy remains aligned with goals.
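To make the entry and exit criteria mentioned under the Process dimension concrete, the sketch below shows the kind of gate check that can sit in front of a merge: measured functional coverage and defect density are compared against per-version thresholds. The thresholds and metric sources here are illustrative, not our actual numbers.

```python
# merge_gate.py -- illustrative entry/exit gate check (thresholds and metric sources assumed)
from dataclasses import dataclass


@dataclass
class BranchMetrics:
    functional_coverage: float    # share of planned cases executed and passed, 0..1
    defect_density: float         # open defects per KLOC of changed code


# Example thresholds; each version plan can override these.
MIN_COVERAGE = 0.90
MAX_DEFECT_DENSITY = 0.5


def merge_allowed(metrics: BranchMetrics) -> bool:
    """Return True only when the branch satisfies both quality gates."""
    if metrics.functional_coverage < MIN_COVERAGE:
        return False
    if metrics.defect_density > MAX_DEFECT_DENSITY:
        return False
    return True


if __name__ == "__main__":
    # In practice these values come from the test management and defect tracking tools.
    current = BranchMetrics(functional_coverage=0.93, defect_density=0.3)
    print("merge allowed:", merge_allowed(current))
```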
4. Key Practices and Outcomes
Testing Shift-Left: Ensuring Quality from the Branch Level
Functional testing and staging verification for new requirements are conducted directly on the development branch. Code is only permitted to merge into the main branch after meeting predefined quality standards. This early integration of quality gates has significantly improved the stability of the main branch. Daily mainline builds are now standard practice, accompanied by the consistent execution of automated test cases from the version checklist, covering functional, compatibility, and performance validation. Dedicated personnel monitor mainline issues and ensure responses within 24 hours to uphold mainline stability.
Closed-Loop Integration of Staging and Testing
Upon the successful completion of branch testing, a staged release is initiated, enabling the collection of real user data. This process supplements automated testing by capturing nuanced aspects of the user experience. We have also established a fan feedback group, offering incentives to encourage active participation, thereby converting user input into actionable test supplements.
Measurable Efficiency Gains
Testing efficiency has shown significant improvement: the average testing cycle per version has been reduced from 20 days to 12 days, automated execution now accounts for 60% of all testing activities, and core regression testing time has been shortened by 70%. As a result, test engineers can dedicate more effort to complex scenarios and new features, and defects are now detected 40% earlier in the cycle.
Enhanced Quality and Confidence
Over a three-month observation period, the post-release defect rate decreased by 65%, while the crash rate for core functionalities remained below 0.1%. Business teams have developed greater trust in the testing process, inter-team friction has been reduced, and user satisfaction scores related to "stability" have increased by 18 percentage points.
Transformed Team Dynamics
Although the team composition remains unchanged, workflow efficiency and collaborative dynamics have improved markedly. Processes have become more standardized, and collaboration is now more transparent. Test development engineers benefit from clearer career growth paths, and technical knowledge sharing has increased. Management decisions regarding resource allocation are now supported by data, and efficiency gains have subsequently accelerated business development, forming a virtuous cycle.
Automated testing is no longer optional but a mandatory practice in mobile application development. From problem identification to implementation, its success depends on managerial support and resource commitment. Looking ahead, we plan to explore AI applications in testing—such as automated test case generation, one-click bug reporting, and intelligent defect classification—to further evolve our automation capabilities toward intelligent testing.