Table of Contents
1 Introduction – “Performance Testing for Handheld Devices”
1.1 Objectives of Handheld Device Performance testing
1.2 Application types running on Handheld Devices
2 Handheld Device Mobile Application Architecture
2.1 Android Platform
2.2 iOS Platform
2.3 Windows Mobile (WinCE) Platform
2.4 Symbian OS Platform
3 Requirements for handheld device application performance testing
4 Testing Strategy for Handheld Device Applications:
5 Testing Process for performance of Handheld Mobile apps
5.1 Requirement Phase
5.2 Performance Test plan
5.3 Test design
5.4 Scripting
5.5 Test execution
5.6 Test Analysis
5.7 Report preparation
6 Testing Team Roles w.r.t Management and Technical aspects of software development
7 Performance Testing types for handheld device mobile apps
7.1 Stress Testing
7.2 Load Testing
7.3 Spike Testing
7.4 Volume Testing
7.5 Endurance Testing
8 Testing tools
9 Current Status of Research, Development and Practice
Nowadays most technical devices (such as smartphones, watches, tablets, iPads/iPods) have the ability to connect to the internet, and there are numerous applications available for the users of these systems. Internet connectivity has made people's lives easier and smarter, which is the main reason these internet-enabled handheld devices are called smart devices. Smart handheld devices are much more interactive than other handheld devices and can be seen as an improved way of interacting with the complex underlying systems.
There are thousands of applications available for these smart devices in app stores. Users/buyers are also allowed to create their own applications and publish them to these stores. This enables any user to become a developer and earn money by selling his/her applications.
During the development of software applications for such handheld devices, testing is very important and necessary. Testing is done to find problems and bugs (errors) occurring during development and/or after it. The issues identified should be fixed before the application is actually released to an application store or web store and made available for download by smart handheld device users. If the performance of the software or applications running on such handheld devices is not up to the users' satisfaction level, it negatively impacts the producers/key stakeholders of these products. Hence, performance testing of such handheld devices is equally important.
Performance testing process assures the effective performance and quality of the software application running on to these handheld devices. It helps us to measure performance in terms of –
- Response time (UI-based performance) experienced by the users, which ensures high customer satisfaction and retention of such devices by these customers.
- Backend infrastructure supporting the network traffic generated by these handheld devices.
- Performance monitoring after the application has been released to the web store: keeping track of the rise or fall in application adoption, and tracking user reviews to see if there are any reports related to performance behavior.
So, to test smart handheld device software, a testing platform is very much needed.
Thus, this report addresses testing platforms for handheld device applications (limited to smartphones/watches, tablets, iPads/iPods).
Performance testing helps to answer many stakeholders' (producers, buyers, users, etc.) questions, such as:
- Are we ready to go live?
- Will our system cope with the unexpected?
- Will our failover work correctly?
- How stable is our system/product?
- Why is our system performance poor?
- Is our system design inefficient?
- Where are the bottlenecks in the system?
- Does our system comply with the business's performance requirements?
- The business does X transactions per day; can the web application handle our busiest hours?
- Does this release perform as well as the live version?
- What will happen if our business grows?
- How does our application scale?
- What is the max throughput we can handle?
Below are the important objectives for carrying out performance testing of handheld device software:
- Measure end-to-end transaction response time
- Monitor workloads and measure server component performance under various workloads.
- Measure the software's performance on various platforms (the same application running on separate handheld devices using completely different platforms).
- Monitoring system resource utilizations (such as memory, CPU utilization, and battery consumption) at varying workloads.
- Network monitoring between the software running handheld device and the backend supporting infrastructure.
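The first objective above, measuring end-to-end transaction response time, can be sketched in a few lines. This is a minimal illustration not tied to any specific tool; `timed_transaction`, `summarize`, and `fake_transaction` are hypothetical names, and a real transaction would be, for example, an HTTP request against the backend:

```python
import time
import statistics

def timed_transaction(transaction):
    """Run one transaction and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = transaction()
    return result, time.perf_counter() - start

def summarize(samples):
    """Aggregate raw response times into common report metrics."""
    ordered = sorted(samples)
    return {
        "min": ordered[0],
        "max": ordered[-1],
        "mean": statistics.mean(ordered),
        "p90": ordered[int(0.9 * (len(ordered) - 1))],  # crude 90th percentile
    }

def fake_transaction():
    """Stand-in for a real end-to-end transaction."""
    return "ok"

# Collect 100 samples and reduce them to a summary.
times = [timed_transaction(fake_transaction)[1] for _ in range(100)]
report = summarize(times)
```

In a real test, the summary (min/mean/p90/max) is what gets compared against the SLAs gathered in the requirement phase.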
- Handheld applications / Native software:
- These are also called a "platform", i.e. integrated software components running on the handheld device. These components are accessible at all times through the icons residing on the touch panel of the device.
- Downloadable software applications – These are available on a web store for download and go. They can be installed on handheld devices to support a particular need or functionality.
- Platform-independent applications accessing hardware components of the handheld device. Examples of such applications are the camera, contact list, or phone dialer accessing the wireless cellular network to make phone calls.
- Client based Applications
- These are similar to native applications sitting on the handheld device, but they act like client applications and run, or are accessed, via a web browser.
- Such applications are developed to run on a single host platform but provide interfaces so that they are independent of the handheld device operating system.
- Hybrid applications / Application market-place
- These applications are similar to client applications accessing a web service through a web browser, but they are developed to work on different handheld device platforms: developed once, deployable on multiple platforms. Examples of such applications are shopping applications.
Most handheld mobile devices extend existing business systems or their interfaces. There are typically three major components to a handheld device mobile architecture:
- An existing system
- A middleware application
- A handheld application
Handheld device mobile application architecture is defined by using a set of techniques and patterns that are used to develop handheld device applications. While defining the architecture, specific industry and vendor standards are taken into consideration.
A handheld mobile device application is normally designed in multiple layers, as shown in the diagram below. Each layer defines part of the usability of the app.
A handheld device application can be developed as a thin web-based client or a rich client. In rich-client handheld device applications, the business and data services layers are available on the handheld device. In thin-client apps, the business and data layers are available on the server.
Google has developed a very popular Linux-based mobile phone operating system (OS) called Android. It powers various handheld devices such as smartphones, tablets, smartwatches, and cameras. Google created a group of hardware, software, and telecommunication companies known as the "Open Handset Alliance" with the goal of contributing to Android development. This OS is widely used in almost all touch-screen-enabled handheld devices.
As you see in the diagram above, Android applications form the topmost layer of the Android platform stack. Developers build these applications using the Android Java framework, which provides high-level services and APIs that applications can use, including the activity manager, resource manager, notification manager, location manager, etc. Java class files are converted into executable files with the .dex extension – optimized binaries that can be executed on smaller processors and in low-memory environments – which the Dalvik Virtual Machine runs with the help of its native core libraries. The Dalvik Virtual Machine takes advantage of underlying Linux core features such as multi-threading, process and device management, and memory management.
This OS is developed by Apple Inc. and is used as a common platform for developing and building applications for all apple produced handheld devices such as iPhone, iPads, Smart-watches etc. iOS architecture is layered as shown below:
Cocoa Touch Layer: This is the topmost layer of iOS architecture available for the applications. It contains some of the key frameworks such as UIKit framework.
Large number of high level features such as layout generation, printing, sensing gesture etc., are provided to developers in this layer. In addition to this, it also provides Map Kit, Event Kit, and Message UI frameworks.
Media Layer: This layer contains a large number of asset library frameworks and APIs. These frameworks provide easy access to the device's photos and videos. There is also a Core Image framework that helps in manipulating images through various filters. Similarly, the Core Graphics framework provides capabilities for 2D drawing.
Core Services Layer: This layer manages fundamental system services which native iOS applications use.
Core OS Layer: This layer is the base layer for all the layers above it. It also provides a security framework which applications can use directly. It encapsulates the kernel environment and low-level UNIX interfaces, to which applications don't have access for security reasons; however, through the libSystem library, low-level features related to BSD sockets, POSIX threads, and DNS services can be accessed.
This OS is developed by Microsoft for handheld devices – smartphones and mobile devices. It uses the WinCE kernel. Its look and feel are similar to desktop Windows, and it includes a suite of basic applications developed with the Microsoft Windows API.
Most Windows handheld mobile devices have a standard set of features, such as multitasking and the ability to navigate a file system similar to that of Windows 9x and Windows NT, including support for many of the same file types. Internet Explorer is the default web browser.
Almost every mobile phone / handheld device manufacturer has models running this OS. The Symbian user interface was mostly used by Motorola and Sony Ericsson, and Symbian was the popular OS platform for developing applications for Nokia handheld mobile devices.
The table below shows a comparison between the platforms available for handheld app development:
Handheld device applications need to be tested on various handheld devices and OSes under different network conditions. The requirements are formed based on the challenges involved.
The challenges involved in mobile application testing, which in turn generate the requirements, are explained below:
- Multiple mobile platforms and versions: There are different mobile operating systems in the market; the major ones (Android, iOS, Symbian, BlackBerry, and Windows) are explained above. Testing a handheld application across multiple devices running on the same platform, and across every platform, poses a unique challenge for testers.
- Handheld mobile device diversity: This is an especially acute problem for Android devices. The official mobile device gallery includes over 60 devices of various screen sizes, resolutions, and form factors. Providing test coverage for all these devices is a major challenge even for big organizations.
- Mobile device connectivity: Handheld mobile devices can be connected to Windows or Mac machines using Wi-Fi, USB, or a cloud network, and this greatly affects the handheld mobile testing strategy, as all three options differ from one another.
- Frequent releases: Major and minor OS updates keep test teams continuously involved in testing new application features or validating the app against new operating system versions.
- Varying network conditions: Mobile application behavior can be affected by changes in network conditions such as Wi-Fi, 2G, 3G, 4G, GSM, CDMA, bandwidth, and connection speed. This requires additional testing to ensure acceptable application behavior in real-world conditions.
- Different mobile app types: A mobile app can be a web app, a native app, or a hybrid app combining both. Testing each application type is different, as their implementations differ considerably from one another.
- Test executions: Executing tests across multiple devices and browsers is a hugely challenging effort. Consider a test that needs to be executed on multiple Android and iOS devices and versions: designing a test execution matrix of this scope is both complex and time-consuming.
- Handheld application testing tools: Testing mobile apps is more complex than testing desktop or web applications. There are actually fewer tools available to support mobile testing, and selecting the most appropriate one is another tough task.
- Testing on emulators/simulators: A typical mobile app needs to be tested on 5 to 7 iOS and 10 to 15 Android devices. This list grows every quarter, making it almost impossible for an organization to test mobile apps on all target devices. In such cases emulators are a good choice for mobile app testing. However, they should never be considered a substitute for real mobile devices, as they have their own limitations.
- Mobile automation and performance testing: Both are complex because mobile app objects are quite different from those of web or desktop applications, and they also differ depending on the operating system and mobile application type. Moreover, there is no generic framework available for mobile test automation, as the mobile industry is still in a learning stage.
The test strategy defines the approach for testing at a very abstract level. This document helps keep project managers, testers, and developers updated about key issues identified during the testing process.
It is basically derived from the Business Requirement Document and sets the standards for testing. The test strategy defines the scope and objective of the whole performance testing effort. It also defines the overall budget of the project, which in turn determines the time required for testing and the number of resources to be allocated. The testing approach is defined, including the methods of testing to be used – performance, load, stress, functional, etc. – and whether testing is manual or automated. The defect tracking approach is also defined.
Below are the points which are considered while defining the test strategy for testing of handheld device mobile applications:
- Target device selection: Handheld device and platform diversity is a key challenge in mobile app testing. The testing approach should include creating an optimal mix of emulators and real mobile devices of different models to maximize test coverage.
- Network environment: Primary testing should be done on Wi-Fi using a network simulator. In addition, testing mobile applications under real network conditions is also essential to assess the behavior of the application.
- Identifying types of testing: Ensure mobile apps work on all devices. Consider different types of testing required including functional, performance, security, compliance, beta and so on.
- Mobile cloud testing: A mobile cloud provides web-based mobile testing environments where apps can be deployed, tested, and managed. Cloud testing environments have the capability to support complex apps and provide real-time testing results, which means results can be analyzed while the tests are running.
- Mobile test automation: Select an effective test automation tool and maximize the use of automation to reduce the cost of regression testing.
Thus, despite the challenges in mobile application testing, careful selection of target devices, connectivity options, and tools that maximize automation can ensure a cost-effective mobile testing process.
Performance testing deals with the nonfunctional requirement specifications. Here we request the below details from the client.
This is the first phase of the performance testing life cycle, carried out while gathering the business and technical requirements. The objective of this phase is to identify the performance testing requirements. These requirements are then documented by test engineers in the two documents below, and the documents are baselined:
a) Application information document – It contains all the application traversals or flows; basically, this document provides all the information about the navigation flows of the application.
b) Non-functional requirement document – This document contains information about SLAs (service level agreements). For example: my response time should be less than 3 seconds, my home page should load in less than 2 seconds, my error rate should be 0%, my transaction rate should be 10 per second, and my hits per second should be 5. These are some SLAs defined by the customer; all such requirements are documented and baselined in the non-functional requirement document.
Below are some of the requirements that are captured during this phase:
- Response time – Acceptable response times, typically expected to be in the range of 2-3 seconds.
- Number of transactions – What are the different types of transactions available in the application.
- Application availability: The application should be available 24x7 with no downtime.
- Number of hits per second
- Workload, number of concurrent users.
- Volume of the data and data growth rate.
- Resource utilization
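Once these requirements are baselined, SLA compliance can be checked mechanically against measured results. The sketch below uses the example SLAs from the non-functional requirement document above (3 s response time, 0% errors, 10 transactions per second); the dictionary keys and function name are illustrative assumptions, not part of any tool:

```python
# Hypothetical SLA thresholds taken from the examples above.
SLA = {"max_response_s": 3.0, "max_error_rate": 0.0, "min_tps": 10.0}

def check_sla(measured, sla):
    """Return a list of SLA violations; an empty list means compliant."""
    violations = []
    if measured["response_s"] > sla["max_response_s"]:
        violations.append("response time exceeds %.1f s" % sla["max_response_s"])
    if measured["error_rate"] > sla["max_error_rate"]:
        violations.append("error rate exceeds %.2f" % sla["max_error_rate"])
    if measured["tps"] < sla["min_tps"]:
        violations.append("throughput below %.1f tps" % sla["min_tps"])
    return violations

# A measurement that meets every SLA above:
ok = check_sla({"response_s": 2.4, "error_rate": 0.0, "tps": 12.5}, SLA)
# A measurement that violates all three:
bad = check_sla({"response_s": 4.0, "error_rate": 0.1, "tps": 5.0}, SLA)
```

Returning a list of human-readable violations (rather than a single boolean) makes the exit-criteria check in the report phase directly actionable.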
This document contains information that answers questions such as:
What is in scope and out of scope? What is the purpose of the performance testing? What is the schedule, and what time span is allotted for testing? What are the deliverables, and how and what do we want to deliver to the client? What are the testing entry and exit criteria? (An example entry criterion is that the system is stable; an example exit criterion is that the system is performance-compliant across all activities.) This phase also defines the hardware platform (processors, memory, storage, network), software details (OS platform, applications, server software), and test data that will be used for testing.
In this phase, test cases are created. Here the test cases are essentially the various scenarios designed with the testing strategy in mind. Typical tasks performed in the test design phase are as follows:
- Creation of various scenarios
- Preparing a detailed test execution plan
- Setting up the required test environment
- Record the script or steps
- Perform the script customizations (addition of delays, checkpoints, synchronization points)
- Generation of data
- Parameterization/ Data pooling
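The last task, parameterization/data pooling, replaces hard-coded values in recorded scripts with values drawn from a pool, so each virtual user runs with its own data. A minimal sketch, assuming a hypothetical CSV pool of test credentials (the column names and values are invented for illustration):

```python
import csv
import io
import itertools

# A small data pool, as might be exported from a CSV file of test users.
CSV_POOL = """username,password
user1,pass1
user2,pass2
user3,pass3
"""

def data_pool(csv_text):
    """Cycle through pooled rows so each virtual user gets its own data,
    wrapping around when the pool is exhausted."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return itertools.cycle(rows)

pool = data_pool(CSV_POOL)
first = next(pool)  # each next() hands the next virtual user its row
```

Commercial tools (e.g. LoadRunner's parameter files) implement the same idea; cycling is one of several common row-selection policies alongside random and unique assignment.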
In this phase, once the scripts are ready, they are executed for just one or two users to ensure that the scripts are correct, and validations/checks of the scripts are performed. Errors are monitored, and if any are found, the scripts are modified/updated to correct them. Automated test tools are used in this phase to record all the transactions/flows of the application, and automated scripts are finally produced from these recorded transactions.
In this phase of the performance testing life cycle, a series of test cases is executed based on the test plan. For each test cycle, test execution reports and logs are prepared and then validated against the expected output results.
This phase helps in understanding the results of the test execution phase. A preliminary report is generated based on all the tests carried out in the previous phase. Analysis of all test results helps in understanding the performance of the handheld mobile applications.
This analysis report will be used as base for all the handheld mobile app tests that will be carried out in the future.
Previous test results, logs, etc. generated from manual as well as automated tests are combined, analyzed, and a final report is prepared containing the following information:
Transaction response times, transactions per second, graphs showing throughput, upload/download rates per second, transaction summaries, and transaction performance.
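The aggregation behind such a report can be sketched from a raw execution log. The log format below (timestamp, transaction name, bytes transferred, success flag) is a hypothetical simplification of what real tools emit:

```python
from collections import defaultdict

# Hypothetical execution log: (timestamp_s, transaction_name, bytes, ok)
LOG = [
    (0.2, "login", 1200, True),
    (0.7, "search", 5400, True),
    (1.1, "login", 1180, False),
    (1.8, "search", 5600, True),
]

def report(log):
    """Summarize a raw log into per-transaction counts, TPS, throughput,
    and error rate over the observed time window."""
    duration = max(t for t, *_ in log) - min(t for t, *_ in log)
    per_txn = defaultdict(int)
    for _, name, _, _ in log:
        per_txn[name] += 1
    total_bytes = sum(b for _, _, b, _ in log)
    return {
        "transactions_per_second": len(log) / duration,
        "throughput_bytes_per_second": total_bytes / duration,
        "per_transaction": dict(per_txn),
        "error_rate": sum(1 for *_, ok in log if not ok) / len(log),
    }

summary = report(LOG)
```

With the sample log, the window is 1.6 s and 4 transactions complete, giving 2.5 transactions per second and a 25% error rate.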
Below is a flow chart depicting each phase of the testing life cycle along with its outcomes (internal, or external to the client).
|S.No.||Position/Role||Duties / Skills||Experience|
|1||Test manager||Responsible for the test program; customer interaction; recruiting; test tool introduction; test planning, design, development and execution, including the test approach||
Below are the types of performance testing which can be used to test handheld device mobile apps:
The intention of stress testing is to validate an application's stability and dependability over an extended period of time. It checks the stability and reliability of handheld mobile apps.
This type of test tries to break the app by testing it with overwhelming data or resources, thereby checking the ability of the system to handle errors under extremely heavy load conditions. It thus ensures that handheld device apps will not crash in crunch situations, and it helps to find the limit at which the handheld device application breaks.
Below are a few of the reasons why this kind of testing is valuable:
1. To find out whether the app works under abnormal conditions.
2. To verify that appropriate error messages are displayed when the handheld device is under stress.
3. Failure of apps under extreme conditions could result in enormous revenue loss.
The purpose of load testing is to determine whether the application can support the required number of users with acceptable response times. A very large number of simultaneous virtual users are simulated, and the application's behavior is studied to determine how many users the application can sustain before experiencing system failure. The outcome of this test helps in deciding whether the hardware and software configurations of the handheld device need to be changed to handle that load. Thus load testing helps to find the under-performing and/or inconsistent parts of the application under peak workload before the application is released to the market. In this way, load testing helps avoid poor performance issues by allowing complementary strategies to be adopted to enhance the application's performance.
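The core mechanism of a load test, many concurrent virtual users each timing their own transactions, can be sketched with plain threads. This is a toy model: `time.sleep` stands in for a real request, and the function names are invented for illustration:

```python
import threading
import time
import random

def virtual_user(results, n_transactions):
    """One simulated user executing a fixed number of transactions."""
    for _ in range(n_transactions):
        start = time.perf_counter()
        time.sleep(random.uniform(0.001, 0.005))  # stand-in for a real request
        # list.append is atomic in CPython, so no explicit lock is needed here
        results.append(time.perf_counter() - start)

def load_test(n_users, n_transactions):
    """Run n_users concurrent virtual users; return all response times."""
    results = []
    threads = [threading.Thread(target=virtual_user,
                                args=(results, n_transactions))
               for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

samples = load_test(n_users=20, n_transactions=5)
```

Real tools ramp `n_users` up in steps and watch where response times or error rates cross the SLA thresholds; that crossing point is the sustainable user count the text describes.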
This is another form of load testing. Here the handheld mobile device application is tested with unusual increases and decreases in load: the device is unexpectedly loaded and unloaded. It is done to observe how the app actually reacts to an unexpected rise and decline in users. The goal of this testing is to determine whether performance will suffer, whether the app will fail, or whether it will be able to handle a dramatic change in load.
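The load shape that distinguishes a spike test from a plain load test is easy to state as a profile of users per time step. The function below is a hypothetical helper for building such a profile; the numbers are illustrative:

```python
def spike_profile(baseline, spike, total_steps, spike_start, spike_len):
    """Build a users-per-step profile with a sudden spike in the middle:
    a steady baseline, an abrupt jump, then an abrupt drop back."""
    profile = []
    for step in range(total_steps):
        if spike_start <= step < spike_start + spike_len:
            profile.append(spike)
        else:
            profile.append(baseline)
    return profile

# 10 steps: steady 50 users, with a sudden jump to 500 users at steps 4-5.
profile = spike_profile(baseline=50, spike=500, total_steps=10,
                        spike_start=4, spike_len=2)
# -> [50, 50, 50, 50, 500, 500, 50, 50, 50, 50]
```

Feeding such a profile to the virtual-user driver (rather than a constant user count) is what turns a load test into a spike test.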
This is non-functional performance testing in which the handheld device app is subjected to a huge volume of data; it is also called "flood testing". The behavior of the app is studied when it is exposed to a high volume of data, and the issues likely to occur with large amounts of data are identified. This test also helps to determine the point at which the stability of the app degrades; in short, it helps to identify the capacity of the handheld device or the application. It also checks whether there is any data loss, whether data is stored correctly, whether warning and error messages are displayed and/or logged, and whether a high volume of data affects processing speed.
This is also a non-functional type of testing, also known as "soak testing". In this type of testing, a device that is already performing under a large load is allowed to continue running for a significantly extended period of time, to discover how the handheld device app behaves under sustained use. For example, a handheld device mobile app may behave quite differently from the behavior seen within the first hour of a run: after 3 hours, problems such as memory leaks may cause the app to fail or behave erratically. Thus, this type of testing is generally used to check for memory leaks.
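One simple way to spot the memory leaks a soak test targets is to sample memory usage periodically and fit a linear trend: a stable app hovers around a flat line, while a leaking app shows a clearly positive slope. This is a sketch under the assumption that periodic memory samples (in MB) are already available; the sample values are invented:

```python
def leak_trend(memory_samples_mb):
    """Least-squares slope of periodic memory samples, in MB per sampling
    interval. A clearly positive slope over a long soak run suggests a leak."""
    n = len(memory_samples_mb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(memory_samples_mb) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, memory_samples_mb))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

steady = [100, 101, 100, 99, 100, 101]    # stable app: flat trend
leaking = [100, 110, 121, 133, 140, 152]  # memory creeping upward
```

With the samples above, `leak_trend(steady)` stays near zero while `leak_trend(leaking)` is roughly +10 MB per interval, which is exactly the kind of drift that only shows up over hours of sustained use.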
Once the handheld device applications have been unit tested by developers, they are handed over to test teams for further testing. Test teams have two approaches for executing test cases for handheld mobile applications: manual and automated. Generally, a combination of manual and automated testing is best suited for handheld mobile apps. Test teams introduce an automated software test tool for carrying out performance testing of handheld device mobile applications – non-functional testing to determine system responsiveness, stability, reliability, and scalability. They perform a compatibility check to ensure that the application will work with the automated testing tool and investigate work-around solutions.
The test team manager determines whether the test team has sufficient skills to support the adoption of the automated test tool, which supports the following two types of testing:
|S.No.||Functional Testing||Non-functional Testing|
|1||This checks the functionality of the system with respect to functional requirements. In general, "system" means computer hardware and software; in our scenario it means the application under test.||This checks the quality attributes of the system (application under test), such as responsiveness, stability, reliability, and scalability.|
The tools should be able to test handheld mobile applications for the following performance issues:
A) Application Not Responding (ANR): Everyone with a handheld mobile device has seen this dialog at least once, even if it's not from your own app. This dialog indicates that the app is doing far too much work on the main thread and can't do anything more: it is essentially locked up, waiting for your processing to finish. Examples of when this can happen are reading from a file, or performing some other long-running operation, on the main thread.
B) OutOfMemoryError (OOM): This is an issue where the application crashes randomly and the crash can't be easily reproduced. The picture shows how the stack trace typically looks when an application crashes. It indicates that there are memory leak issues and the device has no more memory left to store or execute anything further.
C) Dropped frames, stuttering, and slow animations: This is probably the most difficult to fix. It is caused when the app can't meet the 16 ms deadline to draw one frame to the screen, as this is the time available per frame at 60 fps (60 fps ≈ 16.7 ms per frame).
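The 16 ms figure follows directly from the refresh rate: 1000 ms / 60 frames ≈ 16.7 ms per frame. Counting "janky" frames against that budget is then a one-liner; the per-frame times below are invented profiling samples:

```python
FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 fps

def dropped_frames(frame_times_ms):
    """Count frames that exceeded the 60 fps budget (perceived as jank)."""
    return sum(1 for t in frame_times_ms if t > FRAME_BUDGET_MS)

# Hypothetical per-frame render times sampled from a profiling run (ms).
samples = [12.0, 15.5, 16.0, 24.3, 14.1, 33.0, 16.5]
janky = dropped_frames(samples)
```

Here only the 24.3 ms and 33.0 ms frames miss the budget; a frame that takes twice the budget effectively costs two display refreshes, which is why even a handful of slow frames is visible as stutter.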
D) Draining a user's battery: This is quite a common occurrence. Generally it happens when your app is doing too much work.
The graph below shows that, depending on the cost of the tools and the complexity of the requirements, there are many tools available, ranging from open source to enterprise level. Some tools allow customizations such as framework creation, plug-ins, or adaptors that serve small purposes here and there. Overall, however, only a few handheld device mobile app performance testing tools are available in the market; some are open source, some are heavyweight licensed enterprise-level tools, and each serves a specific need. The two most popular tools in the market are:
1) LoadRunner from HP and
2) JMeter tool
This tool helps to simulate multiple users for application load testing. It also helps to create an environment where these users connect to applications concurrently to perform work. During this type of performance testing, LoadRunner captures, monitors, and analyzes handheld device application performance and functionality. It supports RIAs (Rich Internet Applications).
There are 3 main components of HP LoadRunner.
1) Virtual User Generator (VuGen): This component is used to record business flows and generate scripts from them. It has an Interactive Development Environment (IDE) which records all the steps of a business process flow. These recorded flows can later be tuned for the production environment and then played back to emulate real users.
2) Controller: Once the scripts are recorded and finalized, they are given to the Controller component, which then controls the load simulation. Using minimal hardware resources, this component generates multiple concurrent virtual users to emulate real production workloads for any application platform. HP LoadRunner performs repeated, consistent, and measurable stress tests on the application and identifies any scalability issues before they reach the real-world production environment.
3) Analysis: Once all the tests have executed, the Controller component creates a dump of the results in raw form, containing information such as test outputs, errors, and execution logs. The Analysis component reads these dumps and plots graphs showing various trends, helping us understand the reasons behind errors and failures under load.
Typical LoadRunner steps are depicted in the diagram below:
This is another popular open-source tool (i.e. zero investment) used to test handheld mobile web applications. It provides features to prepare and run performance test scripts specific to handheld mobile devices.
To configure JMeter and the mobile device to record scripts for a native app on Android and iOS platforms, below are the high-level steps to perform:
JMeter-side configuration: In this step, you first need to set the script recording parameters.
1) Launch JMeter -> Navigate to the File option -> Templates -> Select Recording -> Click on Create (this adds all the parameters necessary for recording scripts).
2) After this, go to the HTTPS Test Script Recorder -> Set the port to 8080.
Now find your IP address using ifconfig on Linux or ipconfig on Windows. We will use this IP address on the Android/iOS phone to set up the proxy.
Handheld Mobile Device side configuration:
1) Go to Settings -> Wi-Fi and select your connected network
2) Select the 'Manual' option in the HTTP Proxy section
3) Set the 'Server' value to your computer's IP address and the 'Port' value to 8080, matching the JMeter configuration
4) Save the JMeter certificate on the handheld mobile device.
5) Once the proxy setup is done on the handheld device, start recording and running the scripts. To start recording, go to JMeter -> HTTP(S) Test Script Recorder and click Start
6) Add a Listener -> View Results Tree to the HTTP(S) Test Script Recorder.
7) Perform any actions on the mobile device, and you can see the actions getting recorded in JMeter
Current trends over the past 12 months indicate significant changes in how individuals interact with handheld mobile devices at home and at work. Recent studies on handheld device adoption and application trends show that users are spending more time on videos and demanding more personalized experiences, provided they are willing and comfortable sharing their data with brands.
The difference between how we develop applications today versus five years ago is that, when we first started, the content owner was in control of what the user saw in terms of content. Now we allow the user to choose their content. Five years from now, we will no longer look to the user to choose the content but will use all the various data points we are collecting automatically about that user to surface recommendations on their behalf – still personalized, in fact much more hyper-personalized.
Right now we draw a picture of an app, program it, and it is static: it does what it does, we measure it, try to learn about it, and then change it. In the future it will be more of a constraint system: we say "make me something like this, something that fits these criteria and optimizes for these goals," and the system is designed from that. So the trend is going toward allowing the computer to design the app for you.
Current research shows that the handheld device mobile industry generates $3.1 trillion in annual revenue – a whopping 4.2 percent of global GDP (App Annie). This number is likely to balloon to $3.7 trillion by 2020 (GSMA).
So all this indicates that the majority of growth in digital media time is now being generated by the use of handheld mobile devices, with desktop computers increasingly becoming a secondary device.
A lot of different tools, techniques, and means to scale the user interfaces of handheld devices, and to conduct performance testing on handheld device mobile app platforms, are available in the market. However, these are only tools, and it is up to the developers, designers, and test teams to ensure that their user interfaces (UIs) look good and are usable on all target devices. A well-thought-out layout design (GUI) for a handheld device application should be given equal importance, as it forms a basic building block of app development, and it must be stressed that there is no way around testing the interface. Testing can be done on actual devices or on various emulators with different configurations provided by the platform IDEs, but there are many challenges associated with automated GUI testing.
So it’s difficult for multiplatform handheld device native apps or HTML web app development to provide any specific recommendations. The reason is because there are number of factors such as app’s intended features, target audience or the skills of the development team.
Multi-platform frameworks offer many interesting features, especially for rapid prototyping or the development of very simple handheld mobile apps. In the current handheld device app market, these frameworks are growing very rapidly, and new features are added to them frequently. So I recommend that managers incorporate steps into the SDLC process for inspecting each platform's features before starting multi-platform development, and for carefully choosing a well-suited framework.