CLI
The most extensible and flexible way to run your Android instrumentation tests on emulator.wtf is via our command line client.
Installation
Assuming you have $HOME/bin on your PATH:
curl https://maven.emulator.wtf/releases/ew-cli -o $HOME/bin/ew-cli && \
chmod a+x $HOME/bin/ew-cli
Quick start
Run ew-cli with your API token and point it to your app & androidTest apks:
ew-cli --token YOUR_API_TOKEN --app path/to/app.apk --test path/to/test.apk
Run with --help to see all the possible options:
ew-cli --help
Exit codes
ew-cli has various exit codes to indicate the type of failure that occurred:
Code | Description |
---|---|
0 | All tests passed (includes flaky test results) |
1 | General unhandled error occurred (IO, etc.) |
2 | Bad CLI arguments (e.g. a timeout that is too short) |
10 | Some of the tests failed or timed out |
15 | Unexpected error with the CLI or the emulator.wtf API |
20 | Test results could not be gathered, possibly an emulator infrastructure failure |
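For example, a CI wrapper script might branch on these exit codes (a minimal sketch; paths and messages are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk
code=$?
case "$code" in
  0)  echo "All tests passed" ;;
  10) echo "Some tests failed or timed out" ;;
  20) echo "Results missing, consider retrying" ;;
  *)  echo "ew-cli failed with exit code $code" ;;
esac
exit "$code"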
Common examples
Define token with an env var
You can pass your API token in via the EW_API_TOKEN env var instead of a command-line argument, e.g. when running in a CI job:
export EW_API_TOKEN="YOUR_API_TOKEN"
ew-cli --app path/to/app.apk --test path/to/test.apk
Run tests and grab results
Use --outputs-dir to store run results locally, useful for things like exporting the JUnit XML report to your CI system or storing run logcat as a build artifact.
ew-cli --app path/to/app.apk --test path/to/test.apk --outputs-dir out
Record a video of the test
Add --record-video to store a video recording of the test. It will be downloaded into --outputs-dir once the test has finished. When running tests with multiple shards or devices you will get a separate video per shard-device combination.
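For example, combining --record-video with an outputs directory (paths are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --record-video --outputs-dir out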
Run tests on a specific device profile
By default emulator.wtf runs tests on a Pixel2-like emulator with API 27 (Android 8.1). If you want to run on a different version or device profile you can use --device to do so:
ew-cli --app path/to/app.apk --test path/to/test.apk --device model=NexusLowRes,version=23
Run tests with multiple device profiles
You can add repeated --device arguments to run on a combination of devices.
ew-cli --app path/to/app.apk --test path/to/test.apk \
--device model=NexusLowRes,version=23 --device model=Pixel2,version=27
Discover available device profiles
You can list all available device profiles by invoking ew-cli with the --models argument:
ew-cli --models
Fail tests if they exceed a certain time
You can limit the maximum test runtime with --timeout to ensure the tests don’t get stuck for too long. For example, to allow tests to run for up to 10 minutes:
ew-cli --app path/to/app.apk --test path/to/test.apk --timeout 10m
Run tests with orchestrator while clearing package data
You can use Android Test Orchestrator to run the tests - this will create a new app VM from scratch for each test. It is slower to run, but ensures no static state leaks between tests. Add the optional --clear-package-data flag to clear persisted app state between each run. Read more about the orchestrator in the Android Test Orchestrator documentation.
ew-cli --use-orchestrator --clear-package-data --app path/to/app.apk \
--test path/to/test.apk
Grab coverage data
Use the --with-coverage flag to capture test run coverage data and store the results (one or more .exec files) in the path specified by --outputs-dir:
ew-cli --with-coverage --app path/to/app.apk --test path/to/test.apk \
--outputs-dir out
Run tests with shards
The following example runs 3 separate shards and stores the outputs from each in a separate folder under out:
ew-cli --app path/to/app.apk --test path/to/test.apk --outputs-dir out --num-shards 3
Reduce test run time to 2 minutes
The following example will split your tests into shards and run them in parallel on multiple emulators so that the whole test run takes close to 2 minutes. The actual duration will get closer to the target time the more you run your tests, as sharding is based on historical test duration data.
ew-cli --app path/to/app.apk --test path/to/test.apk --outputs-dir out --shard-target-runtime 2m
Add additional files to the device before test
Sometimes you want to push data, such as fixtures, to the device for your tests to consume. The following command pushes the fixtures.json file so it’s readable at runtime as /sdcard/fixtures.json:
ew-cli --app path/to/app.apk --test path/to/test.apk \
--other-files /sdcard/fixtures.json=fixtures.json
Feed command-line arguments via a YAML file
Instead of passing all arguments on the command line, you can supply them via a YAML file. The file should contain named groups of arguments (which can be composed via a special include key). The keys inside each argument group are the same as they would be on the command line.
Example file:
atd:
  device:
    - model: Pixel2Atd
      version: 30
myapp:
  app: path/to/app.apk
  test: path/to/test.apk
pr-check:
  include: [atd, myapp]
And invocation:
ew-cli tests.yaml:pr-check
This results in exactly the same ew-cli invocation as:
ew-cli --app path/to/app.apk --test path/to/test.apk \
--device model=Pixel2Atd,version=30
Options
--app
Application APK to test.
--test
Test APK, containing Android instrumentation tests (i.e. Espresso).
--token
Your API token. Can alternatively be passed in via the EW_API_TOKEN environment variable.
--device [key1=value1,key2=value2]
Specify device(s) to run tests with; use repeated values to test on a combination of devices. Possible keys:
model - the device profile to use, one of Pixel2, NexusLowRes
version - the API version to use, currently supported values: 23, 27
--timeout [value]
Fail if the test runtime exceeds the given timeout value. Values are in the format of a number plus a suffix, where the suffix is s, m or h (seconds, minutes or hours). Examples: 10m, 2h. Defaults to 15m.
--test-targets "<type> <target>"
Run only a subset of matching test targets; these will be forwarded to AndroidJUnitRunner. See the full list of configuration options in the AndroidJUnitRunner documentation.
Some examples:
- run all tests in a class: --test-targets "class com.example.Foo"
- run a single test called bar: --test-targets "class com.example.Foo#bar"
- run all tests in a package: --test-targets "package com.example"
- run all tests annotated with @MediumTest: --test-targets "size medium"
- run all tests in a package annotated with @MediumTest: --test-targets "size medium package com.example"
--outputs-dir
Path to collect test run outputs to (JUnit report, logcat, any pulled directories). Tip: when sharding, use a different outputs dir for each shard.
--outputs [value1],[value2]
Specifies what to download into the path specified by --outputs-dir.
Available options:
summary - machine-readable summary about test results and outputs, similar output to --json
merged_results_xml - merged JUnit XML from all emulator instances (devices and shards)
coverage - coverage files gathered from tests
pulled_dirs - pulled directories from emulator instances
results_xml - all JUnit XML files, separate per emulator instance
logcat - logcat files, separate per emulator instance
captured_video - captured test video, separate per emulator instance
Default: merged_results_xml,coverage,pulled_dirs
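For example, to download only the merged JUnit XML and logcat files (paths are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --outputs-dir out --outputs merged_results_xml,logcat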
--additional-apks
A list of additional APKs to install, in addition to those being directly tested.
--other-files
Any extra data you want to send to the device before the tests are run, in the form of remote-path=local-path, where remote-path must start with either /sdcard/ or /data/local/tmp. Separate entries with commas if you want to send multiple files.
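For example, pushing two files in one go (file names are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --other-files /sdcard/fixtures.json=fixtures.json,/data/local/tmp/extra.bin=extra.bin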
--use-orchestrator
Add this flag to use Android Test Orchestrator.
--clear-package-data
Clear package data (any persistent state) between app test runs. Only works together with --use-orchestrator.
--with-coverage
Collect test coverage execution data and store it in the outputs folder. Only makes sense if you also specify --outputs-dir.
--num-flaky-test-attempts
Adds repeat attempts for devices and/or shards that had test failures. The maximum number of flaky test attempts is 10. The attempts are started in parallel, e.g. with --num-flaky-test-attempts 3 an extra 3 attempts will be started in case of a test failure.
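For example, to start two extra parallel attempts when tests fail (paths are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --num-flaky-test-attempts 2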
--flaky-test-repeat-mode [all|failed_only]
Whether to repeat the whole failed shard (all) or only the failed tests (failed_only) in case of flaky tests. (default: failed_only)
--shard-target-runtime VALUE
Split the test run automatically into multiple shards so that the runtime of each shard is close to the given time target. This is done based on historical test run data on a best-effort basis, and subsequent test runs will be sharded more accurately.
Example: split tests into multiple shards so that every shard takes around 3 minutes: --shard-target-runtime 3m
--num-shards [value]
Splits your tests evenly across multiple devices. Emulator.wtf will try to balance the number of tests in each shard. Individual test times are not taken into account, so this can lead to uneven shard times, but it should still provide better results compared to --num-uniform-shards.
--num-balanced-shards [value]
Splits your tests across multiple emulator instances by runtime. Emulator.wtf will try to assign tests to devices based on their historical runtime. This gives you the most even spread between shards for large test suites. If historical data is not available, it falls back to --num-shards behavior (heuristic sharding so that each emulator instance has roughly the same number of tests).
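For example, to split a large suite into four runtime-balanced shards (paths are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --num-balanced-shards 4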
--num-uniform-shards [value]
Splits your tests randomly across multiple devices.
--test-targets-for-shard [shard X targets]
Specify --test-targets-for-shard multiple times to spread tests into shards manually. Possible ways to split: by package, class or single test method.
To specify all tests in a package, use --test-targets-for-shard "package com.foo".
To specify all tests in a class, use --test-targets-for-shard "class com.foo.MyTestClass".
To specify a single test method, use --test-targets-for-shard "class com.foo.MyTestClass#myTestMethod".
The arguments can be repeated in a comma-separated list, e.g. the following argument will run both classes com.example.Foo and com.example.Bar in a single shard:
--test-targets-for-shard "class com.example.Foo,com.example.Bar"
To mix argument types, separate them with a semicolon (;), e.g. to run all tests in the package com.example and also the class com.foo.MyTestClass:
--test-targets-for-shard "package com.example;class com.foo.MyTestClass"
--environment-variables [key1=value1,key2=value2]
A comma-separated list of key-value pairs that are passed to AndroidJUnitRunner.
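For example, passing two runner arguments (the keys shown are illustrative; any AndroidJUnitRunner option can be used):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --environment-variables size=medium,debug=false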
--directories-to-pull [dir1,dir2,...]
A comma-separated list of directories to pull from the device and store in the path specified by --outputs-dir. The path in --outputs-dir will have the same relative path as the absolute path on the device, i.e. --directories-to-pull /sdcard/acmeapp/screenshots --outputs-dir out will pull the contents of /sdcard/acmeapp/screenshots on the device to out/sdcard/acmeapp/screenshots.
--json
Print a machine-readable test result to STDOUT, useful when wrapping ew-cli with your own scripts.
--quiet
Suppress any logging (sent to STDERR); use this together with --json to get only the JSON output in STDOUT.
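For example, capturing only the JSON result into a file (paths are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --json --quiet > result.json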
--models
List available device models. Works together with --json to get the models list in a machine-readable fashion.
--proxy-host
Configure an HTTP proxy host to use for all requests.
--proxy-port
Configure an HTTP proxy port to use for all requests.
--proxy-user
Set the HTTP proxy username to use for authentication.
--proxy-password
Set the HTTP proxy password to use for authentication.
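For example, routing all requests through an authenticated proxy (host, port and credentials are illustrative):
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --proxy-host proxy.example.com --proxy-port 3128 \
  --proxy-user alice --proxy-password secret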
--library-test PATH
The path to your com.android.library module’s APK.
--file-cache-ttl VALUE
Max time to keep cached files in the remote cache, a number followed by a unit (d, h, m or s); the maximum value is 90d and the minimum is 5m (default: 1h).
--record-video
Enable/disable recording a video during the test.
--no-file-cache / --file-cache
Don’t use / use remote file cache to skip uploading APKs or test data that hasn’t changed.
--side-effects / --no-side-effects
Indicates that the test run has side effects, i.e. it hits external resources and might be part of a bigger test suite. Adding this flag means the test will not be automatically retried in case of errors.
Defaults to --no-side-effects.
--no-test-cache / --test-cache
Don’t use / use the remote test cache to skip running tests if the exact same test was run before.
--display-name TEXT
Display name of the test run in the web results UI.
--scm-url
Source control repository URL of the current run; on popular CI integrations this will be guessed from env variables.
--scm-commit TEXT
Commit identifier (hash) of the current run; on popular CI integrations this will be guessed from env variables.
--async
Run the test asynchronously, without waiting for the results. This shines when used together with our GitHub integration.
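For example, kicking off a run without blocking the CI job (paths are illustrative):
ew-cli --async --app path/to/app.apk --test path/to/test.apk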