What does your process look like when you are writing an automated test?

I am curious about the different workflows people use when writing an automated test. This information will be helpful as we iterate and improve the user experience in Cycle Cloud.

  1. What assets are you looking at or referencing as you write the test? Is there a requirements document of some sort guiding you on what is important? What documents or artifacts do you reference as you write the test? What tools or software are you integrating with or getting the information from?

  2. What language are you writing the automated test in (if you are not using Cycle and CycleScript)? Python? Java? JavaScript?

  3. As you write the automated test, do you execute it from time to time as you write it, as a way to test it? What is this process like? Are you in a debug mode or are you just executing the tests per the normal process?

  4. If you say “yes” to number 3, where do you see the results of the tests you execute as part of automated test creation? Are they in a pop-up window? Do you navigate away from the test you are writing for a moment to see the results, or are you able to look at the test you are writing and the results on the same screen?


When I’m testing a web application, my focus is on prioritizing what needs to be tested so I can find out as quickly as possible when there is an issue or an unaccounted-for change.

I’m using a framework I’m building out with Node, Jest, and Puppeteer (along with supporting libraries). These are written in JavaScript with TypeScript support.

When building out the automation, I like to start by identifying the areas of the application in relation to the user flow. Using the Page Object Model design pattern, I build out my assets and structure: selectors and functions that are specific to a “page” within the application (e.g., the Login page) live in a single file. I then import that file into the test, both for validating element visibility and for the page-specific functions that take the “busy work” of performing an action via automation and encapsulate that process for increased readability.
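Roughly, a page object in my setup looks something like this (the file name, class, and selectors here are just illustrative, not copied from my actual framework):

```ts
// login.page.ts – illustrative page object; selectors and names are hypothetical
import { Page } from 'puppeteer';

export class LoginPage {
  // Selectors scoped to this page live in one place
  private readonly usernameInput = '#username';
  private readonly passwordInput = '#password';
  private readonly submitButton = 'button[type="submit"]';

  constructor(private readonly page: Page) {}

  // Encapsulates the "busy work" of logging in so the test itself reads cleanly
  async login(username: string, password: string): Promise<void> {
    await this.page.type(this.usernameInput, username);
    await this.page.type(this.passwordInput, password);
    await Promise.all([
      this.page.waitForNavigation(),
      this.page.click(this.submitButton),
    ]);
  }

  // Used by tests to validate element visibility
  async isLoaded(): Promise<boolean> {
    return (await this.page.$(this.usernameInput)) !== null;
  }
}
```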

As I’m writing the test, I will execute it periodically to validate that the flow is performing as intended. This way I don’t get to the end of the test without having double-checked how it runs, only to find I made bad assumptions or coding mistakes that cause test failures, race conditions, or false positives. I build the test in blocks to validate that each piece is working as intended. When finished, the entire test is executed at least three times to validate consistency. If it fails one or more of those three times, I dive into that area and debug to identify the cause and remedy it. This is repeated until I get a minimum of three consecutive passing executions of the test.
Depending on the application, however, the time window in which the tests are executed can affect whether they pass or fail, so that judgement should come into play when deciding whether a test is valid or not.
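In practice, a test built block by block ends up looking roughly like the sketch below (again, the URL, credentials, and assertions are just for illustration):

```ts
// login.test.ts – illustrative block-by-block test; URL and assertions are hypothetical
import puppeteer, { Browser, Page } from 'puppeteer';
import { LoginPage } from './login.page';

describe('Login flow', () => {
  let browser: Browser;
  let page: Page;

  beforeAll(async () => {
    browser = await puppeteer.launch();
    page = await browser.newPage();
    await page.goto('https://example.com/login');
  });

  afterAll(async () => {
    await browser.close();
  });

  // Block 1: confirm the page actually rendered before doing anything else
  it('loads the login page', async () => {
    const loginPage = new LoginPage(page);
    expect(await loginPage.isLoaded()).toBe(true);
  });

  // Block 2: exercise the page-specific function and check the outcome
  it('logs in with valid credentials', async () => {
    const loginPage = new LoginPage(page);
    await loginPage.login('test-user', 'test-password');
    expect(page.url()).toContain('/dashboard');
  });
});
```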

Currently I execute my tests on my local machine via a terminal window embedded in Visual Studio Code. However, I am able to run them from any terminal and get the same output in that terminal window. In its basic form it works.
The only area I would identify as lacking is the readability of the test execution output… I don’t need to see all of the logging that takes place on every execution. Having something that can quickly present which tests passed and which failed, along with a link to the exact point of failure within the test, would be beneficial for debugging and identifying exactly which tests fail.
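Jest’s configuration does let you trim some of that noise, though it still doesn’t give the pass/fail drill-down I’m describing. A minimal sketch of the relevant options (not my actual config):

```ts
// jest.config.ts – one possible way to reduce run-time noise (illustrative only)
import type { Config } from 'jest';

const config: Config = {
  // Assuming ts-jest is used for the TypeScript support mentioned above
  preset: 'ts-jest',
  testEnvironment: 'node',
  // Prevent tests from printing messages through the console during runs
  silent: true,
  // Keep the per-test listing terse; failures are still reported with stack traces
  verbose: false,
};

export default config;
```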


This is good stuff, @SethTarrants! Some requirements I am hearing in your words:
You need the ability to execute a test immediately and review and drill into results on the same screen as the test you are writing (for troubleshooting).

Do you also need the ability to stop a test immediately, while you are writing a test?

Thinking about you building a test block by block, it sounds like it might be nice to be able to run just one Scenario. We can do this with the @wip tag today, but perhaps there are improvements we could consider one day, such as breakpoints.

That is a good callout that I didn’t mention. Yes, I do cancel tests after I begin execution, and typically that is because I forgot to make a change and/or reset the SUT state to ensure a clean test run against the intended code change.