
Cucumber Best Practice

For those of you who have difficulty understanding what a good cucumber suite looks like, it might be worthwhile taking a look at the cuke_sniffer gem. It’s a neat gem that searches your cucumber project for smells and gives you not only a score but a detailed list of the smells and exactly where they are. […]

Coping with changing object IDs

Sometimes you might be running automation on more than one branch of code. For example, it’s pretty common to be running some smoke tests on the ‘Development’ or ‘Integration’ branch and also be running tests on ‘Master’. So how do you cope when an object’s id is set to one thing on one branch and another thing on another? Well, it’s easy – here’s an example using calabash and one using watir.

If I have an input field with an id of “important_x01” and my tests reference the field in a method as follows:

# calabash example
def important_info_input
  query("* marked:'important_x01'")
end

# watir example
def important_info_input
  @browser.text_field(id: 'important_x01')
end

Then let’s say that there is a change to that object id on the integration/development branch, and it is now called “i_info”. So, until this change is promoted to Master, this object will be known on different branches by two different names. Well, the easy way to handle this is to allow the code to return whichever of these objects exists:

# calabash example
def important_info_input
  if element_exists("* marked:'important_x01'")
    query("* marked:'important_x01'")
  else
    query("* marked:'i_info'")
  end
end

# watir example
def important_info_input
  if @browser.text_field(id: 'important_x01').exists?
    @browser.text_field(id: 'important_x01')
  else
    @browser.text_field(id: 'i_info')
  end
end
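If you have lots of elements in this state, you could generalise the pattern with a small helper along these lines (the helper is my own sketch for the calabash case, not a calabash API):

# Hypothetical helper: return the first query string that matches an
# existing element, falling back to the last candidate.
def first_existing_query(*queries)
  queries.find { |q| element_exists(q) } || queries.last
end

def important_info_input
  query(first_existing_query("* marked:'important_x01'", "* marked:'i_info'"))
end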


Scenario Outline Madness

One of my pet hates is seeing test suites which perform really badly because of the overuse of the Scenario Outline construct.

Seriously, you need to think about why you are using them. Do you really need to run the entire scenario, including the Background steps (if you have them), all over again?

Here’s an example of where it is not required and how to change it:

Feature: Personal Details

Scenario Outline: Verify that a user is prevented from entering an invalid phone number
  Given I am logged in
  When I goto my personal details
  And I enter an invalid <telephone_number>
  Then I see an <error_message>

  Examples:
    | telephone_number | error_message                         |
    | abd              | Please enter only numbers             |
    | 12               | Please enter a valid telephone number |
    | 44++ 22          | Please enter a valid telephone number |

——————————

Now why would you want to re-login every time here? Why would you not do this:

Feature: Personal Details

Scenario: Verify that a user is prevented from entering an invalid phone number
  Given I am logged in
  When I goto my personal details
  Then on entering an invalid telephone number I see an error message:
    | telephone_number | error_message                         |
    | abd              | Please enter only numbers             |
    | 12               | Please enter a valid telephone number |
    | 44++ 22          | Please enter a valid telephone number |

——————————

I mean, ok, I accept that two actions are included within one step (i.e. edit the telephone number and verify the error message), but we have multiple actions in one step all the time. For example, ‘Given I am logged in’ often covers a whole host of actions: visit the home page; click on login; visit the login page; enter your login details; submit your login details; verify that you’re logged in. So it’s fine – honestly it is!!

I can understand why people are reluctant to do this; it’s down to semantics and the constraints people feel when using Gherkin. It’s those WHEN and THEN keywords: naturally you’d want to say “When I enter an invalid telephone number then I see an error message”, and people are shit scared of that “then” right there in the middle and become convinced that there is no other way to write this test than as a Scenario Outline.

Just making this simple change will make the test run three times faster. Remember, you’re not testing login here, you are testing that the correct error message is displayed – therefore that is the only bit you need to re-run!!
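For what it’s worth, the step definition behind that table-driven step might look something like this (personal_details_page and its methods are hypothetical page-object code, and I’m assuming rspec-expectations for the assertion):

# Sketch of a step definition for the table-driven step above.
Then(/^on entering an invalid telephone number I see an error message:$/) do |table|
  table.hashes.each do |row|
    personal_details_page.enter_telephone_number(row['telephone_number'])
    expect(personal_details_page.error_message).to eq(row['error_message'])
  end
end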


So, when do you use a Scenario Outline?

So, when do you use a Scenario Outline? Well, I believe it should only be used when you actually need to re-run the whole scenario. So, for example, let’s say that you want to check that a user’s details have persisted after logout. Then you do actually need to re-run everything, so you do need to use a Scenario Outline.

Scenario Outline: User details have persisted after logout
  Given I have logged in
  And I have set my personal details <name>, <telephone_number>, <date_of_birth>
  And then logged out
  When I log back in
  Then my personal details are still set to <name>, <telephone_number>, <date_of_birth>

  Examples:
    | name | telephone_number | date_of_birth |
    | John | 0772219999       | 18/01/1990    |
    | Jo   | 077221991        | 19/02/1991    |

——————————


Calabash Android HTC One M8 issues

I find it interesting to see how different phones handle calabash tests. For example, a smoke test which runs in 12 minutes on a Samsung Galaxy S4 Mini might take 20 minutes on an HTC One M8… but this makes no odds to the overall test suite, as all the tests run in parallel anyway. What does cause issues, though, is the HTC One M8’s occasional inability to capture the first touch event.

On a normal day my automation suites will run about 30-40 jobs. These tests are really, really stable… and if they fail, then we can be sure that a developer has introduced a bug or, very infrequently, has caused an unexpected automation-breaking change. However, this wasn’t always the case. All of the jobs were stable from the get-go with the exception of anything that ran on the HTC One M8 – that phone is the pits!!

Here are the issues I encountered and how I resolved them:

  1. The HTC One M8 suffered from timeouts via the app from services when other phones didn’t
  2. The HTC One M8 took longer to display objects on screen than other devices
  3. The HTC One M8 suffered from random failures at the start of scenarios

To resolve this I did the following:

  1. Built in retries at the screen level when the ‘services can’t be reached’ message was detected in the app, using an exponential back-off of 1, 2, 4, 8, 16, 32, 64 seconds (see the sketch after this list)
  2. Made sure to set a timeout of 30 seconds on problematic screens, e.g.
    wait_for_element_exists(element_name, timeout: 30)
  3. Utilised the cucumber rerun formatter in the jenkins job to rerun any failing scenarios, e.g.
     --format rerun --out rerun.txt
  4. Sent an adb command to the device to ensure it was properly awake and didn’t have a dimmed screen:
    # Sends a command via adb to the current device to press the Enter key
    system("#{default_device.adb_command} shell input keyevent KEYCODE_ENTER")

    This was enough to undim the screen and make the phone ready to accept the first touch event.
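Here’s a rough sketch of the back-off retry from point 1 (ServicesUnreachableError and dashboard_screen are hypothetical – hook it into however you detect the ‘services can’t be reached’ message):

# Sketch: retry a screen action with exponential back-off.
# Raise ServicesUnreachableError from your screen objects when the app
# shows the 'services can't be reached' message.
class ServicesUnreachableError < StandardError; end

def with_backoff(delays = [1, 2, 4, 8, 16, 32, 64])
  attempts = 0
  begin
    yield
  rescue ServicesUnreachableError
    raise if attempts >= delays.length
    sleep delays[attempts]
    attempts += 1
    retry
  end
end

# usage: with_backoff { dashboard_screen.load }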

For interest’s sake, here are some times for the same suite of tests running on different devices. I would be interested to know if others see the same differences, and what causes the performance to be so radically different between KitKat and Lollipop, and then between the Samsung Galaxies and the Nexus / HTC One?

Vodafone 890N on KitKat => 12 mins
Galaxy S4 Mini on KitKat => 12 mins
Galaxy S5 on Lollipop => 16 mins
Galaxy S6 on Lollipop => 17 mins
Nexus 6 on Lollipop => 20 mins
HTC One M8 on Lollipop => 23 mins

Cannot see Android device via ADB

So, the plug came out of the back of the CI server… and at the same time two of the Samsung phones upgraded themselves, and I had switched from one usb hub to another and replaced the usb cables… and suddenly I couldn’t see the Samsung S4 Mini or the Samsung S5 via adb, all my calabash automation tests in jenkins were failing, and the world was collapsing around my ears… for at least an hour! Was it the hub, the cables, the restart, the upgrade?

After a good 15 minutes of stopping and restarting adb and plugging and unplugging cables, I decided that it must be something to do with the new ports I was using… so I ran:

$ lsusb

and could see the devices connected – so the usb ports were working.

So then I ran:

$ ls -l /dev/bus/usb/001

and for some reason plugdev was only showing for one of the devices – the adb write permissions were missing for the others:

crw-rw-r--  1 root root    189, 128 Apr  8 11:46 001
crw-rw-r--  1 root root    189, 129 Apr  8 11:46 002
crw-rw-rw-+ 1 root plugdev 189, 135 Apr  8 16:59 008
crw-rw----+ 1 root audio   189, 137 Apr  8 16:16 010


So, what had changed? I checked my android.rules file; checked that Ubuntu hadn’t upgraded after the restart; and then double-checked the phones, where I discovered a new informational message… ‘connected as a media device’!!

Well, somehow both Samsungs had connected as Media Devices when they restarted, and they need to be connected as Camera devices. So I changed both phones to connect as a Camera device, and as soon as I made the change I could see them when I ran

$ adb devices

and all my jenkins jobs were able to run again.
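As an aside, those plugdev permissions come from your udev rules. For reference, a typical entry in /etc/udev/rules.d/51-android.rules looks something like this (04e8 is Samsung’s USB vendor ID – adjust for your own devices):

# Grant the plugdev group read/write access to Samsung USB devices
SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", MODE="0660", GROUP="plugdev"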


Calabash (RunLoop::Xcrun::TimeoutError)

While I remember, for those of you with older Macs who keep encountering the dreaded timeout error when running calabash-ios against the simulator:

 Xcrun timed out after 30.27 seconds executing
   xcrun instruments -s templates
  with a timeout of 30
   (RunLoop::Xcrun::TimeoutError)
  ./features/ios/support/01_launch.rb:27:in `Before'

You can change the default RunLoop timeout by adding an override in your support/env.rb file. For example, to increase the timeout to 150 seconds, add this:

RunLoop::Xcrun::DEFAULT_OPTIONS[:timeout] = 150

Give it a try!!!


—–UPDATE—–

Note: RunLoop errors using calabash-ios are the most frustrating errors to debug when using physical devices. Sometimes there are other issues which cause this error, such as:

A) Forgetting to put the calabash-ios server in the ipa;

B) Having a typo in one of the environment variables (BUNDLE_ID, DEVICE_TARGET, DEVICE_ENDPOINT);

C) Not having the device enabled for development.
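For reference, those variables are typically set something like this when kicking off the tests (all three values below are placeholders):

# Placeholder values – use your app's bundle id, the device's UDID
# and the device's IP address (the calabash-ios server listens on port 37265)
export BUNDLE_ID="com.example.myapp"
export DEVICE_TARGET="<device UDID>"
export DEVICE_ENDPOINT="http://192.168.1.50:37265"
bundle exec cucumber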

If you encounter RunLoop errors and you can’t understand why… and you are sure that it’s not one of the first three reasons, then:

  1. Check UIAutomation is set to ON on the device and that you don’t have an iOS update alert on your screen
  2. If the Developer row is missing or UIAutomation is already turned on, try the remaining steps. If there is an iOS update alert, do the update or delete it (note it will download again and you will have the same issue unless you block Apple updates in your router – if you want to do this then google it)
  3. Unplug your device
  4. Plug it into a different computer
  5. Open Xcode and view Devices
  6. Ensure there is no yellow warning triangle; if there is, allow Xcode to copy the symbol files across to your device
  7. Open Instruments and try to record a session using the device
  8. Unplug the device
  9. Restart the device
  10. Ensure the Developer row is present in Settings and that UIAutomation is turned on
  11. Plug it back into your original machine
  12. Open Xcode – check for the yellow triangle and repeat the symbol copying (or you might need to update Xcode – Xcode will tell you)
  13. Open Instruments and try to record
  14. Set UIAutomation to ON again
  15. Retry your tests.

Note that this is a process you may find yourself going through every time you update Xcode or your device’s iOS version, switch machines, or plug a new device into your CI… get used to going through this ritual if you are in charge of an iOS device library!!

See cucumber scenarios by tag – solution

I was complaining the other day that I couldn’t find a little utility which would allow me to see my cucumber scenarios by tag, so I wrote one.

I’m just about to put it up on GitHub and I’ll package it as a gem as well then add the link to it from this site.

Basically, it’s as simple as providing the directory for your feature files, a filename for the output, and the tagname that you want to search for:

dir = '/Users/user/GitHub/my-calabash/features/'
filename = 'tag_info_out.html'
tagname = '@manual'
generate_report(dir, filename, tagname)

and then it presents the results in an html file:

[screenshot of the generated report]

Ok – so the formatting leaves a lot to be desired and I will work on improving that, but it’s so handy to be able to just input a tagname and then get a report of all the scenarios that would be called using that tagname.
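Under the hood the idea is very simple – something along these lines (a bare-bones sketch, not the actual gem code):

# Bare-bones sketch: list scenarios carrying a given tag in a features directory.
def scenarios_with_tag(dir, tagname)
  Dir.glob(File.join(dir, '**', '*.feature')).each do |file|
    lines = File.readlines(file)
    lines.each_with_index do |line, i|
      next unless line.split.include?(tagname)
      # the Scenario line usually follows the tag line(s)
      scenario = lines.drop(i + 1).find { |l| l =~ /^\s*Scenario/ }
      puts "#{file}: #{scenario.strip}" if scenario
    end
  end
end

scenarios_with_tag('/Users/user/GitHub/my-calabash/features/', '@manual')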

The next incarnation will produce a report for multiple / concatenated tagnames.

Mobile App Automation and Localisation

A common pitfall when beginning automation is to use the text of objects instead of their ids.

For example, you have a button which says “Go to Dashboard”, so you have something in your class which looks a bit like this:

def go_to_dashboard_btn
 "widget.type marked:'Go to Dashboard'"
end

But you also have a screen which asks the user which language they would like to use the app in, and you select a language which is not English. Suddenly your “Go to Dashboard” button “does not exist”. Why? Because the text string has been localised and the button now says something like “Vamos a Dashboard” (forgive my spelling). You can see the different text values if you query the object while the app is using the different localised strings. So if you query this object (query("widget.type.button")) when using the app as an English user you will see:

[0] {
  "id" => "goto_dashboard_btn",
  "enabled" => true,
  "contentDescription" => nil,
  "text" => "Go to Dashboard",
  "visible" => true,
  "tag" => nil,
  "description" => "widget.type.button{426535e0 V.ED...",
  "class" => "widget.type.button",
  "rect" => {
    "center_y" => 707,
    "center_x" => 186,
    "height" => 64,
    "y" => 675,
    "width" => 118,
    "x" => 127
  }
}

And if you query this object (query("widget.type.button")) when using the app as a Spanish user you will see:

[0] {
  "id" => "goto_dashboard_btn",
  "enabled" => true,
  "contentDescription" => nil,
  "text" => "Vamos a Dashboard",
  "visible" => true,
  "tag" => nil,
  "description" => "widget.type.button{426535e0 V.ED...",
  "class" => "widget.type.button",
  "rect" => {
    "center_y" => 707,
    "center_x" => 186,
    "height" => 64,
    "y" => 675,
    "width" => 118,
    "x" => 127
  }
}
As you can see, the text string changes, so you need to make sure that you use the accessibility id. So now your method looks like this:
def go_to_dashboard_btn
 "widget.type marked:'goto_dashboard_btn'"
end

If you make sure that the developers provide accessibility ids for everything, then localisation won’t affect your automated tests.

Note that this general approach applies to both Calabash and Appium (and other tools).

Jenkins can’t find Android device

Sometimes we see the problem where Jenkins is no longer listing the Android device when trying to run our automated tests.

We try adb devices and nothing comes back. We unplug and replug and nothing happens.

Well… I worked out what was causing this so that I no longer see it – and I’ll tell you about that later – but in order to get things running again, here’s what you generally need to do.

You need to kill and restart adb as root on the jenkins server and THEN unplug and replug the device(s). Do the following (on an Ubuntu server):

sudo su
# enter your password when prompted
cd Android/Sdk/platform-tools/
./adb kill-server
./adb start-server
./adb devices
exit

So, what’s causing this? Generally it’s two calabash jobs running at the same time, which confuses adb no end. To prevent this, either build a rake task that serialises the jobs or install a build-blocker plug-in.
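The rake task route can be as simple as wrapping the run in a file lock so only one job can touch adb at a time – a sketch (the lock path and apk name are placeholders):

# Sketch of a Rakefile task that serialises calabash runs with a file lock.
task :calabash do
  File.open('/tmp/calabash.lock', 'w') do |f|
    f.flock(File::LOCK_EX) # blocks until any other run releases the lock
    sh 'bundle exec calabash-android run my_app.apk'
  end
end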

Cucumber error – invalid byte sequence in UTF-8 (ArgumentError)

Ever get an invalid byte sequence in UTF-8 (ArgumentError) when trying to run your calabash-android project?
Here’s a thing to check: have you got two apks in your apk / prebuilt folder?
That’s usually the cause.


Hope that helps someone out there save 30 minutes trying to work out why their tests aren’t running.