Appium Python SDK

The Evinced Appium Python SDK integrates with new or existing Appium tests to automatically detect accessibility issues. With the addition of as few as 5 lines of code to your Appium framework, you can begin to analyze your entire application to understand how it can become more accessible. At the conclusion of the test, a rich and comprehensive HTML or JSON report is generated to track issues in any reporting tool.

Supported versions / frameworks

Evinced Appium Python SDK supports the following values for Appium’s automationName desired capability:

  • XCUITest
  • UiAutomator2
  • Espresso

Older versions of automation drivers (e.g. deprecated Appium iOS driver based on UIAutomation) are not supported.

Prerequisites:

  • Python 3.7 or higher.

Get Started

Setup

In order to use any of the Evinced Mobile SDKs, you first need to create an Evinced account. Once your account is created, you will receive an API key and a matching service account ID. Pass these credentials by calling LicenseManager.setup_credentials(service_id, api_key); Evinced will validate access upon test execution. If an outbound internet connection is unavailable in your running environment, contact us at support@evinced.com to get an offline token.

Install Evinced Appium SDK from the remote registry

The most basic way to install the latest package is to pull it from the JFrog repository:

```shell
pip install evinced-appium-sdk --extra-index-url https://evinced.jfrog.io/artifactory/api/pypi/public-python/simple/
```

No preliminary authentication is required.

Examples

Optional: Setting up the credentials

As an alternative to hardcoding credentials in your tests, you can provide them to the validator through environment variables.

Windows

```shell
set EVINCED_SERVICE_ID=uuid-type-string-here
set EVINCED_API_KEY=short-string-here
rem Only needed if you were given offline access:
set EVINCED_TOKEN=longest-string-here
```

macOS and Linux

```shell
export EVINCED_SERVICE_ID=uuid-type-string-here
export EVINCED_API_KEY=short-string-here
# Only needed if you were given offline access:
export EVINCED_TOKEN=longest-string-here
```
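If you rely on these variables, a small helper like the following can read them in Python and fail fast when credentials are missing. This helper is illustrative only and not part of the Evinced SDK:

```python
import os


def load_evinced_credentials():
    """Read Evinced credentials from the environment.

    Returns (service_id, api_key, token); token is None unless
    offline access was granted. Illustrative helper, not part of
    the Evinced SDK.
    """
    service_id = os.environ.get("EVINCED_SERVICE_ID")
    api_key = os.environ.get("EVINCED_API_KEY")
    token = os.environ.get("EVINCED_TOKEN")  # only set for offline access
    if not token and not (service_id and api_key):
        raise RuntimeError("Evinced credentials are not configured")
    return service_id, api_key, token
```

The returned service_id and api_key can then be passed to LicenseManager.setup_credentials(service_id, api_key) as shown in the next section.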

Initialize the SDK

```python
from appium.options.android import UiAutomator2Options
from appium import webdriver
from evinced_appium_sdk import *  # This import provides all necessary structures

# Setup Evinced license
LicenseManager.setup_credentials(service_account_id, api_key)

# Prepare the target options for Android
options = UiAutomator2Options()
options.platform_name = 'Android'
options.device_name = 'API_30'
options.app = '/your/path/app.apk'
driver = webdriver.Remote("http://localhost:4723/wd/hub", options=options)

# Create Evinced Appium default runner
with EvincedAppiumDefaultRunner(driver, None) as runner:
    report = runner.report()
```

Add Evinced accessibility checks

report()

Generates an accessibility report

```python
...
# Run analysis and get the accessibility report
report = runner.report()
# Assert that there are no accessibility issues
assert report[0].total == 0
```

Regardless of the test result, the report method always writes JSON and HTML files. By default they are placed in a local folder called evinced-reports; this path can be configured with the set_output_directory method.

```python
from pathlib import Path

with EvincedAppiumDefaultRunner(driver, None) as runner:
    runner.set_output_directory(Path("/Users/username/output"))
    runner.report()
```

For more information regarding the HTML and JSON reports as well as the report object itself, please see our detailed Mobile Reports page.

analyze()

Collects an accessibility snapshot of the current application state and stores it in internal in-memory storage. This method is intended to be used together with runner.report_stored(), which generates reports from the contents of this storage.

report_stored()

Generates a list of accessibility reports, one for each accessibility snapshot collected by runner.analyze().

Here is an example of using analyze() and report_stored() in a test case:

```python
with EvincedAppiumContinuesRunner(driver, None) as runner:
    runner.driver.find_element(AppiumBy.ID, "some_selector").click()
    runner.analyze()  # first scan
    runner.driver.find_element(AppiumBy.ID, "some_selector").click()
    runner.analyze()  # second scan
    report = runner.report_stored()

    assert report[0].total == 0   # expectation for first scan
    assert report[1].total == 28  # expectation for second scan
```

Continuous mode

Continuous mode allows continual scanning of the application without the need to insert individual scan calls within the test code. Simply substitute EvincedAppiumDefaultRunner with EvincedAppiumContinuesRunner and, instead of calling report, analyze, and report_stored, start an analysis session with runner.start_analyze() and end it with runner.stop_analyze(). The Evinced validator automatically scans the application when the Appium commands send_keys, clear, click, swipe, scroll, or tap are used. All issues detected during the tests are added to the HTML and JSON reports, which are generated automatically when runner.stop_analyze() is called.

Here is an example of using continuous mode in a single test case:

```python
...
driver = get_driver()

with EvincedAppiumContinuesRunner(driver, None) as runner:
    # Start the analysis session
    runner.start_analyze()
    # This click will produce a dedicated report
    runner.driver.find_element(AppiumBy.ID, "SomeButton").click()
    # ...
    reports = runner.stop_analyze()
```

Please note that we use the driver instance from the runner object (runner.driver) rather than the original driver; it is specifically designed to capture reports on each screen as actions are performed.

Note: Calling stop_analyze() immediately after an action can result in a race condition where Appium scans mid-animation after the click, producing inconsistent results. Currently there is no way to prevent this in the Python ecosystem without a significant performance cost. We recommend briefly blocking the thread until the animation has finished. Here's an example:

```python
import time

with EvincedAppiumContinuesRunner(driver, None) as runner:
    runner.start_analyze()
    runner.driver.find_element(AppiumBy.ID, "SomeButton").click()  # <- animation starts
    time.sleep(0.25)  # <- block the thread so Appium does not scan too early
    reports = runner.stop_analyze()  # <- produces the final report correctly
```

Configuration

The Evinced Appium SDK provides configuration options so that end users can tailor the reports to their needs.

Screenshot Options

By default, screenshots are included in HTML reports for each scan but are disabled for JSON reports. You can explicitly enable or disable screenshots using InitOptions. Screenshots in JSON reports are provided in Base64 format.

Available screenshot options:

  • ScreenshotOption.disabled: The screenshot is available only in the HTML report. This is the default.
  • ScreenshotOption.base64: Adds the screenshot to the JSON report and the Report object in Base64, available via the screenshotBase64 field.
  • ScreenshotOption.File: Saves the screenshot as a separate .png file whose name matches the id of the corresponding report.
  • ScreenshotOption.Both: Saves the screenshot both as a .png file and as Base64 in the JSON report (see the options above).
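As a sketch of how this might be wired up, the snippet below passes a screenshot option through InitOptions. The screenshot_option keyword is an assumption for illustration; check the SDK's InitOptions definition for the exact parameter name:

```python
# NOTE: the `screenshot_option` keyword is assumed for illustration;
# verify the actual parameter name in the SDK's InitOptions.
init_options = InitOptions(screenshot_option=ScreenshotOption.base64)

with EvincedAppiumDefaultRunner(driver, init_options) as runner:
    report = runner.report()
    # With base64 enabled, each report entry exposes its screenshot
    # via the screenshotBase64 field described above.
```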

Filtering

Filtering the returned report data is simple. There are two filtering options: include filters and exclude filters. Include filters drop every issue that is not matched by the filter, while exclude filters remove exactly the issues they define. When combined, the include filter takes priority: if we exclude Best Practice issues and include a particular button, the report will contain only that button, minus any Best Practice issues.

For the most up-to-date information on the available filtering options, see the filtering structures in the source code.

Filtering based on Severity level

The validator produces reports containing issues with the following severity levels:

  • Critical
  • Serious
  • Moderate
  • Minor
  • Needs Review
  • Best Practice

Filtering based on Issue Type

As of now, the validator will produce reports with the following Issue types:

  • Accessible Name
  • Color Contrast
  • Interactable Role
  • Accessibility Not Enabled
  • Tappable Area
  • Type In Label
  • Label Capitalization
  • Special Characters
  • Sentence Like Label
  • Duplicate Name
  • Colliding Controls
  • State In Name
  • Conflicting State In Name
  • Invalid Labeling Attribute
  • Alternative Text
  • Instruction In Name
  • Focus Trap
  • Primary Context Has Title

Examples:

```python
# In this example we will include only Needs Review issues
report_filter_include = ReportFilter(severity_filters=[Severity.needs_review])
ev_config = EvincedConfig(include_filters=[report_filter_include])
init_options = InitOptions(evinced_config=ev_config)

with EvincedAppiumDefaultRunner(driver, init_options) as runner:
    runner.report()  # <- produces reports containing only issues with the NeedsReview severity level
```

```python
# In this example we will filter out all critical and serious issues
report_filter_exclude = ReportFilter(severity_filters=[Severity.critical, Severity.serious])
ev_config = EvincedConfig(exclude_filters=[report_filter_exclude])
init_options = InitOptions(evinced_config=ev_config)

with EvincedAppiumDefaultRunner(driver, init_options) as runner:
    runner.report()  # <- the report will contain no critical or serious issues
```

```python
# In this example we filter out all critical and serious issues,
# and keep only the Color Contrast issue type
report_filter_exclude = ReportFilter(severity_filters=[Severity.critical, Severity.serious])
report_filter_include = ReportFilter(issue_type_filters=[IssueType.color_contrast])
ev_config = EvincedConfig(
    include_filters=[report_filter_include],
    exclude_filters=[report_filter_exclude],
)
init_options = InitOptions(evinced_config=ev_config)

with EvincedAppiumDefaultRunner(driver, init_options) as runner:
    runner.report()
```

RulesConfig

RulesConfig provides configuration for some aspects of reporting, most notably ColorContrast and TappableArea.

TappableArea controls the threshold for raising a tappable area violation. More information can be found on the tappable area page; for further detail, consult the WCAG guidelines.

```python
rule_options = TappableArea(dimensions=15.0, dimensions_on_edge=10.0)
rules_config = [RulesConfig(A11yRule.TappableAreaIsTooSmall, False, rule_options.to_dict())]
options = InitOptions(rules_config=rules_config)
```

ColorContrast likewise controls the thresholds for raising color contrast issues for different text sizes, and also controls whether OCR is enabled. More information is found on the color contrast page.

```python
rule_options = ColorContrast(disable_ocr=True, ratio_large_text=35.0)
rules_config = [RulesConfig(A11yRule.ColorContrast, False, rule_options.to_dict())]
options = InitOptions(rules_config=rules_config)
```

Test example

A minimal working test for the analyze() method:

```python
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy
from evinced_appium_sdk import LicenseManager
from evinced_appium_sdk.core.runners import (
    EvincedAppiumDefaultRunner,
)
from appium import webdriver

caps = UiAutomator2Options()
caps.platform_name = 'Android'
caps.device_name = 'API_30'
caps.app = '/apk/your_target_app'

SERVICE_ID = "your_service_id"
API_KEY = "your_api_key_token"

LicenseManager().setup_credentials(service_id=SERVICE_ID, api_key=API_KEY)
driver = webdriver.Remote("http://localhost:4723/wd/hub", options=caps)


def test_example():
    with EvincedAppiumDefaultRunner(driver, init_options=None) as runner:
        runner.analyze()  # first scan
        driver.find_element(AppiumBy.ID, "your_view_id").click()
        # can also be called through the runner instance:
        # runner.driver.find_element(AppiumBy.ID, "your_view_id").click()
        runner.analyze()  # second scan
        report = runner.report_stored()
        assert report[0].total == 5  # number of issues detected in the first scan
        assert report[1].total == 0  # number of issues detected in the second scan
```