You can now filter your test runs on the Test Run tab by flaky test.
Test run analytics. See a historical view of your test runs and gain insight into how your tests are performing. Filter by time period, environment, project and build.
Flaky tests analytics. See a historical view of your flaky tests and gain insight into the stability of your tests. Filter by time period, environment, project and build.
Changed UI to show all tests in a test run when a test or test run is re-run (previously only the last test was shown).
Performance improvement: cached calls to reduce page load time
Added a project setting for the default number of runners. All test runs for a given project will default to this unless overridden at test run time.
Added a new alert for 'When test run completes and has flaky test'. This can be set up in Alerts.
Added ability to collaborate and leave messages for your team on a test run.
Added ability to follow a test run. Once you follow, you can receive Slack or email notifications when a test run moves from queued to submitted status, when a test run completes, and when a new message is posted to the test run.
Better align screenshots with specific tests. Added an option in Project settings that, when turned on, prints the test name to the test output so Testery can match screenshots against the test.
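If your framework doesn't already emit the test name, a minimal sketch of doing it yourself with a stdlib decorator (a hypothetical helper — the built-in project setting above handles this for supported frameworks):

```python
import functools

def announce(test_func):
    """Print the test's name before it runs so output-matching tools
    can associate screenshots with it (hypothetical helper)."""
    @functools.wraps(test_func)
    def wrapper(*args, **kwargs):
        # The test name appears in stdout ahead of any test output.
        print(f"Running test: {test_func.__name__}")
        return test_func(*args, **kwargs)
    return wrapper

@announce
def test_login_page():
    assert 1 + 1 == 2

test_login_page()
```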
Added npm to the default image that test runners use
Performance improvement: changed UI to only load updated test results on test results page
Test run timeline view! Check out the test run timeline view to see how each test executed and easily identify any patterns that may be causing issues in your tests or environments. (See image below.)
Added filters to Schedules tab making it easier to find what you need. Filter by project or environment.
Bug fix: fixed an issue running the latest version of Chrome on Windows servers.
Added support for running Lighthouse
Flaky tests! Testery will now identify when a test is flaky by adding a tag so you can take action. Flaky tests are a nuisance and can cause the team to distrust results. Testery now makes it easy to identify a test that failed once but passed the second time. (See image below.)
Added ability to re-run failed tests. A test run setting, when turned on, automatically re-runs any tests that fail. This helps account for flaky tests and saves your team time.
Added ability to create a Testery account using email (instead of just GitHub and Bitbucket).
Added a new user invite. When you add a team member, they will now receive an invite, making it much smoother to get started in Testery.
Display the commit for the project that triggered the test. If a test run is triggered by another project deploying and your test code is in a different project, you will now see both projects' commit info.
Added test run execution priority. Give a pipeline stage a priority and all test runs within the stage will default to that priority. You can override this at a test run level. Now, more important environments / tests can run before others that are less urgent.
Added commit info for each card on the Environment Dashboard. You can now easily see what code was in the environment at the time the test ran and who committed it.
Added ability to filter test results by test status.
Added a not run category in test results so you can see if a test didn't run for some reason.
create-deploy from the CLI now supports deploys with the same environment, project, and commit but different build IDs.
Added pipeline stages. Pipeline stages represent stages in your development pipeline (dev, test, qa, prod, etc). Each environment can be added to a pipeline stage allowing you to group environments into stages. You can view your projects and test runs by pipeline stage.
We now have an Environment Dashboard! Get a bird's-eye view of all your test runs across environments. If you have set up pipeline stages, the stages display as columns with each test run/environment combo in its column. Clicking a test run on the dashboard takes you into the test run to see more details.
Added a link to your environment. Edit your environment and add its URL, and team members can easily access the environment from within test runs.
We made test_run_id available as an environment variable.
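A quick sketch of picking this up inside a test, in Python (the exact environment variable name is an assumption here — check your runner's environment for the key actually injected):

```python
import os

# Read the test run ID injected into the runner's environment.
# "TESTERY_TEST_RUN_ID" is an assumed key; fall back to a placeholder
# so the same code also works for local runs outside the platform.
test_run_id = os.environ.get("TESTERY_TEST_RUN_ID", "local")
print(f"Tagging artifacts for test run: {test_run_id}")
```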
Added filters to more easily navigate the environments page when you have a lot of environments.
Added ability to change your password in Settings.
Bug fix: fixed issue when manually running a test run when using "latest deployed version".
Added tags for Cypress.
Added ability to run Cypress tests parallelized by test (you can't do this in Cypress Dashboard at this time!)
Added ability to give a scheduled test run a name.
Updated configuration to support webdriver.io file being stored in any location.
For Cypress.io, Chrome failing to load is a common issue with the framework. We added extra support for this scenario and will automatically re-run the test if Chrome fails to load.
Added a "Run Now" button to Scheduled test runs. If you click the run now button the scheduled test run will run immediately.
UI Design: Enhanced how tests display within a file/feature so they are grouped better.
Updated the Slack alert to notify for all environments, not just a single one.
Bug fix: fixed an issue where a test would occasionally run twice and count as two tests in the results.
Scheduled test runs are live! You can now schedule a test run to run regularly at a certain day/time or when a deploy happens. (If running a test when a deploy happens, you have to send us your deployment information.)
Display the branch name in the Source Info box on a test run.
Added support for Bitbucket Server.
Added ability to manually upload test artifacts. A project can now be created with no artifacts associated.
Bug fix: When parallelized by file and re-running failed tests, only re-run the tests in the file that failed.
Bug fix: enhanced how screenshots are associated with a test run.
Bug fix: fixed an issue where screenshots from a failed test were sometimes attached to the next test run.
Test runs page got a redesign! You can now view project name, include/exclude filters, regex, test passing % without having to click into results.
Allow multiple test runs with different regex filters to be queued up (given all other fields are the same).
Fixed the rerun failed tests action to account for parallelize strategy when parallelizing by file/feature.
Added ability to configure/override timeouts for test runs. You can now set default timeouts for a single test and for the test run at the project level, and override them at the test run level.
Added ability to parallelize by file for the Nightwatch.js runner
For NUnit tests, users can now specify whether to parallelize tests by file or by test. When running tests by feature, test results are separated out by individual test for easier viewing and errors are reported appropriately.
Allow user to open test run results in a new window
Allow user to copy the test result name
Display test output while test is running
Do not clear the regex box when the field loses focus.