Coping with changing object IDs

Sometimes you might be running automation on more than one branch of code. For example, it’s pretty common to be running some smoke tests on the ‘Development’ or ‘Integration’ branch and also be running tests on ‘Master’. So how do you cope when an object’s ID is set to one thing on one branch and another thing on another? Well, it’s easy – here’s an example using Calabash and another using Watir.

If I have an input field with an id of “important_x01” and my tests reference the field in a method as follows:

# Calabash example
def important_info_input
  query("* marked:'important_x01'")
end

# Watir example
def important_info_input
  @browser.text_field(id: 'important_x01')
end

Then let’s say that the object ID is changed on the integration/development branch and it is now called “i_info”. Until this change is promoted to Master, the object will be known by different names on different branches. The easy way to handle this is to have the code return whichever of the two exists:

# Calabash example
def important_info_input
  if element_exists("* marked:'important_x01'")
    query("* marked:'important_x01'")
  else
    query("* marked:'i_info'")
  end
end

# Watir example
def important_info_input
  if @browser.text_field(id: 'important_x01').exists?
    @browser.text_field(id: 'important_x01')
  else
    @browser.text_field(id: 'i_info')
  end
end
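If the same pattern starts cropping up for lots of elements, the branching can be pulled out into a small helper. Here’s a minimal Watir sketch of that idea; the helper name first_existing_text_field and the way it raises are my own choices, not anything from the examples above:

# Return the first text field whose id actually exists on the current page.
def first_existing_text_field(*candidate_ids)
  candidate_ids.each do |id|
    field = @browser.text_field(id: id)
    return field if field.exists?
  end
  raise "None of the ids #{candidate_ids.inspect} were found on the page"
end

def important_info_input
  first_existing_text_field('important_x01', 'i_info')
end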


Why automation isn’t King!

Automation should be King in development, but it isn’t, and that’s generally because the standard of a lot of the suites out there is poor, which in turn comes down to unrealistic expectations of the tool by the people doing the automation.

Some companies have a thousand tests, but the tests are flaky and perform really badly. They fail now and again, no-one knows why, and no-one really cares. As a result the test suites are useless and no-one really believes that automation works.

Tools like Watir and Capybara are breeding grounds for these flaky tests. They provide a level of abstraction that insulates the new automation engineer from the subtlety that stable automation requires.

Their level of abstraction makes people think that they can just define some elements in a page class, write a scenario in a feature file and then use nothing but a bit of Watir and RSpec directly in the step_definitions, like this:

site.my_area.my_page.my_checkbox.when_present.click
site.my_area.my_page.my_button.when_present.click
site.my_area.my_other_page.my_field_1.when_present.set('my_value')
site.my_area.my_other_page.my_button.when_present.click
expect(site.my_area.my_important_page.h2.text).to eq(some_value)

But I’ve got news for you: it doesn’t work like that. Not in real life. In real life you need to do some magic behind the scenes, because maybe you’ve got some JavaScript still loading, or maybe this page is really slow to load (and the performance is not going to get fixed), and maybe on your crap test environment the button even needs to be pressed twice to get the next page to load. So you need belt and braces for pretty much everything.
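As an illustration of what I mean by belt and braces, here’s a rough sketch of a click helper that waits for the element and retries if the click didn’t take. The helper name, the attempt count and the block-based “did it work?” check are my own assumptions, not part of Watir:

# Click an element, then check (via the caller's block) that the click actually
# had an effect; if it didn't, have another go.
def robust_click(element, attempts: 2)
  attempts.times do
    element.when_present.click   # wait for the element, as in the examples above
    return true if yield         # caller-supplied check that the click 'took'
    sleep 1                      # short pause before the retry
  end
  raise "Click never took effect: #{element.inspect}"
end

# Illustrative usage:
# robust_click(site.my_area.my_page.my_button) { site.my_area.my_other_page.my_field_1.present? }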

For some elements you might find you need to fall back into Selenium commands; you’ll be doing everything you can to avoid using XPath as a locator; you might even need to fall back directly into JavaScript to interact with the DOM. You’ll probably need to add some robust error handling for when a page hasn’t loaded due to network issues; you’ll want to set up and tear down your data; and you’ll need to be thinking about what you’re verifying, why, and what you should be verifying it against. You might be reading in copy text files for site verification or grabbing data from the CMS API; there is a whole world of stuff that you should be doing. You should be making your tests independent, flexible, extensible, reusable, performant, robust and self-verifying.
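For example, dropping into JavaScript for a stubborn element might look something like this. It is only a sketch: the exception classes you rescue will depend on your Watir/Selenium versions, and the whole fallback is an assumption about your setup rather than a recipe:

# Try a normal Watir click first; if the element won't cooperate, fire the
# click directly against the DOM via JavaScript as a last resort.
def click_with_js_fallback(element)
  element.when_present.click
rescue Watir::Exception::UnknownObjectException, Selenium::WebDriver::Error::WebDriverError
  @browser.execute_script('arguments[0].click();', element.wd)
end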

So why are your tests failing?

  • Using brittle selectors such as XPath
  • Using fixed sleeps instead of explicit waits (see the sketch below)
  • Using the same test data while running your tests concurrently
  • Not having error handling in your API calls
  • Badly written automation code – not taking care with each element
  • Making your tests too long
  • Not making your suite robust enough to handle network issues
  • Failing to version your tests, causing the wrong tests to run against the wrong branches
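To make the waits point concrete, this is the kind of swap I mean. The element id and timeout are purely illustrative:

# Brittle: always burns 10 seconds, and still fails if the page takes 11.
sleep 10
@browser.button(id: 'submit').click

# Better: wait only as long as the button actually takes to appear (up to a timeout).
@browser.button(id: 'submit').when_present(15).click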

If automation was as simple as banging out straightforward lines of Watir or Capybara then everyone would do it; they’d do it quickly and it would be cheap. But it’s not that simple. It requires consideration and design, and most importantly it requires TESTING!

“Flaky test suites are sloppy. They are worse than no tests at all!!”

So next time you come across flaky test suites, disable those suites immediately and fix them so that they are no longer failing intermittently. If we all take care and we all do it right then people will see that automation does work and automation will be King.

Scenario Outline Madness

One of my pet hates is seeing test suites which perform really badly because of the overuse of the Scenario Outline construct.

Seriously, you need to think about why you are using them. Do you really need to run the entire scenario, including the Background steps (if you have them), all over again?

Here’s an example of where it is not required and how to change it:

Feature: Personal Details

Scenario Outline: Verify that a user is prevented from entering an invalid phone number
  Given I am logged in
  When I go to my personal details
  And I enter an invalid <telephone_number>
  Then I see an <error_message>

Examples:
  | telephone_number | error_message                         |
  | abd              | Please enter only numbers             |
  | 12               | Please enter a valid telephone number |
  | 44++ 22          | Please enter a valid telephone number |

——————————

Now why would you want to re-login every time here? Why would you not do this:

Feature: Personal Details

Scenario: Verify that a user is prevented from entering an invalid phone number
  Given I am logged in
  When I go to my personal details
  Then on entering an invalid telephone number I see an error message:
    | telephone_number | error_message                         |
    | abd              | Please enter only numbers             |
    | 12               | Please enter a valid telephone number |
    | 44++ 22          | Please enter a valid telephone number |

——————————

I mean, OK, I accept that two actions are included within one step, i.e. edit the telephone number and verify the error message, but we have multiple actions in one step all the time. For example, ‘Given I am logged in’ often covers a whole host of actions: visit the home page; click on login; visit the login page; enter your login details; submit your login details; verify that you’re logged in. So it’s fine – honestly it is!!
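And the step backing that data table doesn’t have to be anything clever either. Here’s a rough sketch of what the step definition might look like, assuming Cucumber plus a couple of hypothetical page-object helpers (personal_details_page, set_telephone_number, error_message) that aren’t defined in this post:

# One step, many rows: loop over the data table instead of re-running the whole scenario.
Then(/^on entering an invalid telephone number I see an error message:$/) do |table|
  table.hashes.each do |row|
    personal_details_page.set_telephone_number(row['telephone_number'])
    expect(personal_details_page.error_message).to eq(row['error_message'])
  end
end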

I can understand why people are reluctant to do this. It’s down to semantics and the constraints people feel when using Gherkin – it’s those WHEN and THEN keywords. Naturally you’d want to say: “When I enter an invalid telephone number then I see an error message”, and people are shit scared of that “then” right there in the middle and become convinced that there is no other way to write this test than as a Scenario Outline.

Just making this simple change will make the test run three times faster. Remember, you’re not testing login here, you are testing that the correct error message is displayed – therefore that is the only bit you need to re-run!!

So, when do you use a Scenario Outline?

Well, I believe it should only be used when you actually need to rerun the whole scenario. For example, let’s say that you want to check that a user’s details have persisted after logout; then you really do need to re-run everything, so you do need to use a Scenario Outline.

Scenario Outline: User details have persisted after logout
  Given I have logged in
  And I have set my personal details <name>, <telephone_number>, <date_of_birth>
  And then logged out
  When I log back in
  Then my personal details are still set to <name>, <telephone_number>, <date_of_birth>

Examples:
  | name | telephone_number | date_of_birth |
  | John | 0772219999       | 18/01/1990    |
  | Jo   | 077221991        | 19/02/1991    |

——————————


Cannot install Ruby Gem – certificate error

I just encountered a problem trying to install first the rails gem and then the watir gem.

I executed:

$ gem install rails

It failed with an OpenSSL error. A bit later I tried to install watir using:

$ gem install watir

I was surprised to see that it also failed with:

# Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://s3.amazonaws.com/production.s3.rubygems.org/latest_specs.4.8.gz)

I wondered whether it might have something to do with the fact that I am using RVM to manage my Ruby environments, and it turns out that it does indeed. It seems that the certificates in RVM must be updated manually every so often.

If you check the status of the certificates in RVM by running:

$ rvm osx-ssl-certs status all

you will probably see

# Certificates for /etc/openssl/cert.pem: Old.

to update them, run:

$ rvm osx-ssl-certs update all

(Note – you may need to enter your password.)

Hey presto, your problem should be resolved!