The most extensible and flexible way to run your Android instrumentation tests on emulator.wtf is via our command line client.
Assuming you have
$HOME/bin on your PATH:
curl https://maven.emulator.wtf/releases/ew-cli -o $HOME/bin/ew-cli && \
  chmod a+x $HOME/bin/ew-cli
Then run ew-cli with your API token and point it at your app and androidTest APKs:
ew-cli --token YOUR_API_TOKEN --app path/to/app.apk --test path/to/test.apk
Run ew-cli --help to see all the possible options.
ew-cli has various exit codes to indicate the type of failure that occurred:
| Exit code | Meaning |
|---|---|
| 0 | All tests passed (includes flaky test results) |
| 1 | General unhandled error occurred (IO, etc.) |
| 2 | Bad CLI arguments (e.g. too short a timeout) |
| 10 | Some of the tests failed or timed out |
| 15 | Unexpected error with the CLI or emulator.wtf API |
| 20 | Test results could not be gathered, possibly an emulator infra failure |
Define token with an env var
You can pass your API token in via the
EW_API_TOKEN env var instead of a
command-line argument, e.g. when running in a CI job:
export EW_API_TOKEN="YOUR_API_TOKEN"
ew-cli --app path/to/app.apk --test path/to/test.apk
Run tests and grab results
Use --outputs-dir to store run results locally, useful for things like
exporting the JUnit XML report to your CI system or storing run logcat files.
ew-cli --app path/to/app.apk --test path/to/test.apk --outputs-dir out
Record a video of the test
Use --record-video to store a video recording of the test. It will be
saved under the path given by --outputs-dir once the test has finished. When running tests
with multiple shards or devices you will get a separate video per shard-device combination.
Run tests on a specific device profile
By default emulator.wtf runs tests on a Pixel2-like emulator with API 27 (Android 8.1). If you want
to run on a different version or device profile you can use
--device to do so:
ew-cli --app path/to/app.apk --test path/to/test.apk --device model=NexusLowRes,version=23
Run tests with multiple test profiles
You can add repeated
--device arguments to run on a combination of devices.
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --device model=NexusLowRes,version=23 --device model=Pixel2,version=27
Discover available device profiles
You can list all available device profiles by invoking
ew-cli with the
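The flag name was not preserved here, so the following is a sketch under an assumption: the option described later in this reference as "List available device models" is spelled --models.

```shell
# List available device profiles; the --models flag name is assumed
# from the "List available device models" option described below.
ew-cli --models
```

Combine it with --json (described below) if you want the list in a machine-readable form.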
Fail tests if they exceed a certain time
You can limit maximum test runtime with
--timeout to ensure the tests don’t get stuck for
too long. For example, to run tests up to 10 minutes:
ew-cli --app path/to/app.apk --test path/to/test.apk --timeout 10m
Run tests with orchestrator while clearing package data
You can use Android Test Orchestrator to run the tests - this will create a new
app VM from scratch for each test. This is slower to run, but ensures no static
state leaks between tests. Add the optional
--clear-package-data flag to
clear the app's persisted state between each run.
ew-cli --use-orchestrator --clear-package-data --app path/to/app.apk \
  --test path/to/test.apk
Grab coverage data
Use the --with-coverage flag to capture test run coverage data and store the
results (one or more
.exec files) in the path specified by --outputs-dir:
ew-cli --with-coverage --app path/to/app.apk --test path/to/test.apk \
  --outputs-dir out
Run tests with shards
The following example runs 3 separate shards and stores the outputs from each in a separate folder under out:
ew-cli --app path/to/app.apk --test path/to/test.apk --outputs-dir out --num-shards 3
Reduce test run time to 2 minutes
The following example will split your tests into shards and run them in parallel on multiple emulators so that the whole test duration is close to 2 minutes. The actual test duration will get closer to the target the more you run your tests, as it is based on historical test duration data.
ew-cli --app path/to/app.apk --test path/to/test.apk --outputs-dir out --shard-target-runtime 2m
Add additional files to the device before test
Sometimes you want to add data, like fixtures, to the device to be consumed by your tests. The following command pushes a
fixtures.json file so it's readable at runtime at /sdcard/fixtures.json:
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --other-files /sdcard/fixtures.json=fixtures.json
Feed command-line arguments via a YAML file
Instead of passing all arguments via command-line you can pass them in via a YAML file instead. The file should contain
named groups of arguments (which can be composed via a special
include key). The keys inside each argument
group are the same as they would be on the command line.
atd:
  device:
    - model: Pixel2Atd
      version: 30

myapp:
  app: path/to/app.apk
  test: path/to/test.apk

pr-check:
  include: [atd, myapp]
This results in exactly the same
ew-cli invocation as:
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --device model=Pixel2Atd,version=30
Application APK to test.
Test APK, containing Android instrumentation tests (i.e. Espresso).
Your API token. Can alternatively use the
EW_API_TOKEN environment variable
to pass this in.
Specify device(s) to run tests on; use repeated values to test on a combination of devices. Possible keys:
model - the device profile to use
version - the Android API version to use
Fail if the test runtime exceeds the given timeout value. Values are in the format
of a number plus a suffix, where the suffix is
s, m or h (seconds, minutes or hours), for example
2h.
--test-targets "<type> <target>"
Run only a subset of matching test targets; these will be forwarded to
AndroidJUnitRunner. See the full
list of configuration options here.
- run all tests in a class: --test-targets "class com.example.Foo"
- run a single test: --test-targets "class com.example.Foo#bar"
- run all tests in a package: --test-targets "package com.example"
- run all tests annotated with @MediumTest: --test-targets "size medium"
- run all tests in a package annotated with @MediumTest: --test-targets "size medium package com.example"
Path to collect test run outputs to (JUnit report, logcat, any pulled directories). Tip: when sharding, use a different outputs dir for each shard.
Specifies what to download into the path specified by --outputs-dir:
summary - machine-readable summary about test results and outputs
merged_results_xml - merged JUnit XML from all emulator instances (devices and shards)
coverage - coverage files gathered from tests
pulled_dirs - pulled directories from emulator instances
results_xml - all JUnit XML files, separate per emulator instance
logcat - logcat files, separate per emulator instance
captured_video - captured test video, separate per emulator instance
A list of additional APKs to install, in addition to those being directly tested.
Any additional data you want to send to the device before the tests are run, in the form of
remote-path=local-path. The remote path must start with either /sdcard or
/data/local/tmp. Separate entries with
commas if you want to send multiple files.
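A sketch of sending multiple files in one flag, using the comma separation described above (the file names here are hypothetical):

```shell
# Push two hypothetical data files to the device before the run
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --other-files /sdcard/fixtures.json=fixtures.json,/data/local/tmp/seed.db=seed.db
```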
Add this flag to use Android Test Orchestrator.
Clear package data (any persistent state) between app test runs. Only works
together with --use-orchestrator.
Collect test coverage execution data and store it in the outputs folder.
Only makes sense if you also specify --outputs-dir.
Add repeat attempts of devices and/or shards where there were test failures.
Maximum number of flaky test attempts is 10. The test attempts will be started
in parallel, e.g. with
--num-flaky-test-attempts 3 an extra 3 attempts will
be started in case of a test failure.
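For example, to allow up to 3 extra parallel attempts for failed tests, using the flag as described above:

```shell
# Start 3 extra attempts in parallel when a test fails
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --num-flaky-test-attempts 3
```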
Whether to repeat the whole failed shard (all) or only the failed tests (failed_only) in case of flaky tests. (default: failed_only)
Split the test run automatically into multiple shards so that the target runtime for each shard is around the given time target. This is done on a best-effort basis from historical test run data, and subsequent test runs will be sharded more accurately.
Split tests into multiple shards so that every shard takes around 3 minutes.
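A sketch of such an invocation, reusing the --shard-target-runtime flag shown earlier:

```shell
# Aim for roughly 3-minute shards based on historical runtimes
ew-cli --app path/to/app.apk --test path/to/test.apk \
  --shard-target-runtime 3m
```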
Splits your tests evenly across multiple devices. Emulator.wtf will try to
balance the number of tests in each shard. Individual test times are not
taken into account, so this can lead to uneven shard times, but should still
provide better results than random sharding.
Splits your tests across multiple emulator instances by runtime. Emulator.wtf will try to assign tests to devices based on their historical runtime. This will give you the most even spread between shards for large test suites.
If historical data is not available, it falls back to even
sharding, so that each emulator instance has roughly the same number of tests.
Splits your tests randomly across multiple devices.
--test-targets-for-shard [shard X targets]
Add --test-targets-for-shard multiple times to spread tests into shards
manually. Possible ways to split: by package, by
class, or by single test method.
To specify all tests in a package, use
--test-targets-for-shard "package com.foo".
To specify all tests in a class, use
--test-targets-for-shard "class com.foo.MyTestClass".
To specify a single test method, use
--test-targets-for-shard "class com.foo.MyTestClass#myTestMethod".
The arguments can be repeated in a comma-separated list, e.g. the following
argument will run both classes com.example.Foo and
com.example.Bar in a single shard:
--test-targets-for-shard "class com.example.Foo,com.example.Bar"
To mix argument types, separate them with a semicolon
(;), i.e. to run all tests
in the package
com.example and also the class com.foo.MyTestClass:
--test-targets-for-shard "package com.example;class com.foo.MyTestClass"
A comma-separated list of key-value pairs that are passed to AndroidJUnitRunner.
A comma-separated list of directories to pull from the device and store in
the path specified by
--outputs-dir. The path in
--outputs-dir will have
the same relative path as the absolute path on the device, i.e.
--directories-to-pull /sdcard/acmeapp/screenshots --outputs-dir out
will pull the contents of
/sdcard/acmeapp/screenshots on the device into out/sdcard/acmeapp/screenshots.
Print a machine-readable test result to
STDOUT, useful when wrapping
ew-cli with your own scripts.
Suppress any logging (sent to
STDERR); use this together with
--json to get only
JSON output on STDOUT.
List available device models. Works together with
--json to get the models list
in a machine-readable fashion.
Configure an HTTP proxy host to use for all requests.
Configure an HTTP proxy port to use for all requests.
Set the HTTP proxy username to use for authentication.
Set the HTTP proxy password to use for authentication.
The path to your com.android.library module's APK.
Max time to keep cached files in the remote cache, a number followed by a unit (d, h, m or s), with a maximum value of 90d and a minimum value of 5m (default: 1h).
Enable/disable recording video during the test
--no-file-cache / --file-cache
Don't use / use the remote file cache to skip uploading APKs or test data that haven't changed.
--side-effects / --no-side-effects
Indicates that the test run has side effects, i.e. it hits external resources and might be a part of a bigger test suite. Adding this flag means that the test will not be automatically retried in case of errors.
Defaults to --no-side-effects.
--no-test-cache / --test-cache
Don't use / use the remote test cache to skip running tests if the exact same test was run before.
Display name of the test run in the web results UI
Source control repository URL of the current run; on popular CI integrations this will be guessed from env variables.
Commit identifier (hash) of the current run; on popular CI integrations this will be guessed from env variables.
Run the test asynchronously, without waiting for the results. This shines when used together with our GitHub integration.