Snapshot testing with FBSnapshotTestCase

If you had asked me two weeks ago about snapshot testing, I’d definitely have told you it’s stupid. And I think I had my reasons… I don’t regret it. But I guess today’s opposite day, because my opinion turned 180 degrees. I haven’t fully made up my mind yet, but it seems like snapshot testing is quite promising.

What is snapshot testing

I’m sure you’ve already heard about it, probably from your hipster friend. Basically, snapshot testing is a way to automatically test the aesthetic part of your UI. It consists of creating a view from your app and comparing it against a reference image. If they match, all is good and people are happy. But if even one pixel is off, women scream and children cry.

I think that’s exactly what designers expect to happen when something is one pixel off.

Man, snapshot testing is lame

Just a few sentences in, and many of you are probably already screaming at the monitor, foam forming in the corners of your mouths. And I know what you’re thinking.

  1. How can the screen be exactly the same as the picture?
  2. How can I generate an image that’s exactly identical to my UI?
  3. How can I make these tests repeatable with a pixel-to-pixel precision?
  4. Isn’t comparing images totally slow?
  5. How do I know what exactly is different between the two images?
  6. How do I live with myself knowing my tests fail for no reason… which they will?
  7. When I change my UI’s appearance tomorrow, who’s going to give me new reference images? Not me!

Of course, these are all good points. I feel the same way :). But let me show you how one little framework attempts to answer all those questions.

Introducing FBSnapshotTestCase

FBSnapshotTestCase is a framework developed by a company you might have heard of. It’s called Facebook.

Maaaan, snapshot testing… Facebook… This article sucks!

It probably does… but not because of FBSnapshotTestCase.

Aaaaaanyway, what the FBSnapshotTestCase library does is not a lot, but it’s enough.

An XCTestCase subclass

The first thing that deserves credit is the fact that the FBSnapshotTestCase class inherits from XCTestCase, essentially making comparing snapshots a unit test. And yes, I know this totally violates many of the core principles of unit testing and TDD, but it’s still useful to get all the goodies from it. So what kind of goodies are we talking about?

Xcode integration

To Xcode, your FBSnapshotTestCase suite is just another unit test collection. So you get all the IDE support for it like:

  • CMD+U to run tests
  • UI for showing test results
  • Easy continuous integration

Reliability

Since you are not running your app with real data, you can isolate different screens and even views. That makes it easy to test different components in isolation using hardcoded data. This significantly reduces the possibility of random failures. Go Facebook!

Isolation

I’m repeating myself a little here, but I cannot stress enough how useful it is to test components in isolation. You are not limited to testing the whole screen. Just instantiate a view and see if it looks like it should. Allocate a view controller, inflate a Xib, load a storyboard. It’s all on the table.
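To make that concrete, here’s a minimal sketch of snapshotting a single view in isolation, inside an FBSnapshotTestCase subclass (AvatarBadgeView and its count property are made up for illustration):

- (void)testBadgeInIsolation {
    // A hypothetical custom view, created directly and fed hardcoded data.
    AvatarBadgeView* badge = [[AvatarBadgeView alloc] initWithFrame:CGRectMake(0, 0, 44, 44)];
    badge.count = 3;
    [badge layoutIfNeeded]; // make sure subviews are laid out before the snapshot
    FBSnapshotVerifyView(badge, nil);
}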

Recording mode

One question I’ve always asked myself when thinking about snapshot testing is… who the hell creates all those reference images, and how come they match the app 100%? The answer is recording mode…

FBSnapshotTestCase lets the app generate reference images for itself. The class has a property named recordMode. If set to YES, instead of comparing images, it will take a snapshot of your view and store it in the reference images folder. Essentially, whenever you’ve verified that your view looks good, you can run your test in recording mode and the result will become the reference for all subsequent runs.

Well, it still sucks that you need to change a property in order to generate reference images. And then you’ll probably forget to turn recording mode back off and suddenly tests start failing… But anyway, life’s not perfect.
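One way to soften that (my own sketch, not something the framework provides): drive recordMode from an environment variable, so you never have to edit and un-edit code:

- (void)setUp {
    [super setUp];
    // RECORD_SNAPSHOTS is a made-up variable name; add it to the scheme's
    // environment whenever you want to regenerate reference images.
    self.recordMode = [NSProcessInfo processInfo].environment[@"RECORD_SNAPSHOTS"] != nil;
}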

Playing nice with animations?

So what if you have animations that go along with your views? Won’t a snapshot only capture the initial position?

Not necessarily. Most of the time, animations are fine. Since they mostly work on layers, UIView’s drawViewHierarchyInRect: method is smart enough to render the view’s real… uhm… final position. Thus most of your animations are safe and you will be able to get an image of your UI after they are done.
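In other words, a test along these lines should still capture the end state (a sketch; viewController and badgeView are assumed properties of the test class):

// UIView animations apply the final (model) values immediately, so per the
// point above, the rendered snapshot reflects the animation's end state.
[UIView animateWithDuration:0.3 animations:^{
    self.viewController.badgeView.alpha = 1.0;
}];
FBSnapshotVerifyView(self.viewController.view, nil);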

performSelector: ?

Oh, you got me there… If your views rely on NSObject’s performSelector:withObject:afterDelay: for delaying actions, you might (and will) run into problems with FBSnapshotTestCase. Well, theoretically you can try to work around it…

Runloops?

For the above-mentioned performSelector and other time-based issues, you might try the following:

[[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0f]];

This will effectively activate the runloop for 1 second and force it to execute everything scheduled on it. However, I wouldn’t recommend using it unless you have no other choice. And let’s be honest… you always have another choice :).

The problem with running the runloop is that it automatically makes your tests longer. For the above example, that’s one extra second. And it’s a sacred TDD rule to have tests that are fast… really, really fast. The idea behind it is that developers should be running tests all the time during implementation. And if tests take a minute to complete, they’re just not going to do that.

Even though FBSnapshotTestCase is not exactly unit testing and running snapshot tests every 2 minutes during development is not that useful, it still helps to follow TDD rules and keep execution times to a minimum. Spoiler alert – 5-10 seconds should be more than enough for all your FBSnapshotTestCase suites.

Changing code to make it more… testable

As you can see, most of your UI classes can be snapshot tested without modification. But if some of them fall victim to the symptoms we talked about in the previous section, you will be faced with a decision:

  1. Change your production code to allow snapshot testing
  2. Skip testing for that particular item

The second option might be tempting, and depending on how important and risky that component is, it may be the best solution. But still, the first option is worth a try!

To change production… or not to change production

It’s an old philosophical question… Should you change production code in order to make it more testable, or should test code try to adapt to production code? Some developers agree with the former, some with the latter. Test-driven development, for one, has the core value of shaping your production code according to your tests. On the other hand, many programmers would argue that if a testing platform forces you to change your application, it’s obviously not doing its job.

So the truth is out there (X-files anyone?). If you are dumb enough to ask me, the answer depends on some additional factors.
First of all, I agree that the testing “harness” shouldn’t interfere with production code. That being said… there’s an exception! So:

The testing “harness” shouldn’t interfere with production code… unless it forces you to change it for the better!

And that’s exactly where TDD comes in. Yes, it forces you to change the way you write production code, BUT only to make you write better structured programs.

I like to consider non-testable or hard-to-test classes as code smells. If a class is hard to put into a test harness, it’s probably because it’s too tightly coupled to something or it tries to be more than it should (a Single Responsibility Principle violation). So it might be worth fixing that, instead of trying to make your tests work around it.

Granted, this holds true more for TDD and less for snapshot testing with FBSnapshotTestCase. If you’re not able to create the reference image you want, that doesn’t mean your class is not well structured.

Coping with unwieldy animations

This is going to sound terribly underwhelming, but a universal way of doing that is to… not have animations in the first place. I would like to bring a common pattern from UIKit to your attention – for example, the method for changing the value of a UISwitch:

- (void)setOn:(BOOL)on animated:(BOOL)animated;

You can specify whether you want the animation or not. Normally, the purpose of this is that if you change the value while your view is not on screen and the user is not going to see it, you might choose to have it not animated.

So being able to turn off animations can be a cool thing to have, even if you don’t use it right now. (And yes, I do realize writing code you’re not using violates some core principles of TDD, but hey, I also like to live life dangerously :) ).
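Adopting the same pattern in your own components might look like this (a sketch; the method name and the detailView property are assumptions):

- (void)setExpanded:(BOOL)expanded animated:(BOOL)animated {
    void (^changes)(void) = ^{
        self.detailView.alpha = expanded ? 1.0 : 0.0;
    };
    if (animated) {
        [UIView animateWithDuration:0.25 animations:changes];
    } else {
        changes(); // snapshot tests take this path for a deterministic end state
    }
}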

Adding FBSnapshotTestCase to a project

Get the framework

There are two ways to add FBSnapshotTestCase to your project. The easy way… and the way I did it. The easy way is CocoaPods. Yes, FBSnapshotTestCase supports CocoaPods, so chill. If you’re one of those guys that use CocoaPods, you’re all set. But the thing is, I don’t love myself enough to use it. What I did was compile FBSnapshotTestCase myself. In fact, that was kind of simple. The folks at Facebook already have a target in their Xcode project that generates a .framework file. So all I did was build it. The trick is that you might want to compile for multiple architectures. But even that is easy. If you are nerdy enough to be interested in that, you can read my Compiling libraries article.

Reference directory

One thing you need to do in order to set up FBSnapshotTestCase correctly is to set the FB_REFERENCE_IMAGE_DIR variable. Again, there are a couple of options. The lazy way is to set it directly in code somewhere. But let’s face it – that’s not how the cool kids do it. Instead (and this is what the authors recommend), they would set it as an environment variable. As most of you probably already know, that’s done in the “Edit Scheme” screen.

Scheme settings image
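For reference, the value suggested in the project’s README looks something like this (treat the exact path as an assumption and adapt it to your test target’s name):

FB_REFERENCE_IMAGE_DIR = $(SOURCE_ROOT)/$(PROJECT_NAME)Tests/ReferenceImages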

One important note here is that you need to make sure the environment variable is set for the scheme you actually use to run your tests. With that in place, here’s what a simple snapshot test class looks like:

@implementation LoginSnapshotTests

- (void)setUp {
    [super setUp];
    self.recordMode = NO;
}

- (void)tearDown {
    // Put teardown code here. This method is called after the invocation of each test method in the class.
    [super tearDown];
}

- (void)testLogin {
    UIStoryboard* loginStoryboard = [UIStoryboard storyboardWithName:@"Login" bundle:nil];
    LoginViewController* loginController = [loginStoryboard instantiateViewControllerWithIdentifier:@"myLoginVC"];
    UIView* loginView = loginController.view;
    loginController.usernameField.text = @"dev@monologue.com";
    FBSnapshotVerifyView(loginView, nil);
}

@end

I hope the setUp and tearDown methods need no introduction. They run before and after each test in order to prepare the environment. In XCTest, all methods returning void whose names start with the word “test” are considered tests, hence the name testLogin. What follows is a function body so simple even the most hardcore TDD fans wouldn’t complain (well, maybe we could move the UIStoryboard stuff into the setUp method).

As discussed above, you can use self.recordMode = (NO|YES); to toggle between generating reference images and actual testing.

First of all, we open the storyboard to create an instance of the login view controller. Nothing interesting there, but what happens afterwards is really important. We are (somewhat artificially) calling the getter for the view controller’s view property. This forces the instance to call the famous loadView and viewDidLoad methods and fully initialize its UI elements. If you fail to do so… well, let’s just say you’d be disappointed with your reference images.

With that out of the way, you can do whatever you like in order to make your view controller display some data. Even if you don’t, I guess it would still be a valid and useful test, but I like to fully supply my UI with data in order to increase test coverage (unless I have time to make a separate test for every situation).

Lastly, we do the actual comparison. Most of the time, it’s the same line:

FBSnapshotVerifyView(yourView, nil);
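The second argument is an optional identifier, which comes in handy when a single test verifies several states of the same view (the state names here are made up):

FBSnapshotVerifyView(loginView, @"empty");
loginController.usernameField.text = @"dev@monologue.com";
FBSnapshotVerifyView(loginView, @"username-filled");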

Common snapshot testing issues with FBSnapshotTestCase

Table view cells in a storyboard

Going back to our testLogin method, we can conclude that a typical snapshot test involves:

  • Creating a view using a storyboard, xib or good old fashioned code
  • Making sure subviews are created, positioned and laid out
  • Filling the view with contents
  • Making a photo (Cheeeeeeese!)

One case where this is not as clear is table view cells. In the post-storyboard era, many cells are created directly in their table view controller, making it unclear how to get to them programmatically.

But I’ve figured it out for you. There are several ways to get a reference to the cell you want, and some of them result in a view that’s not properly laid out. So to save you a little time, here’s the code that does it for me:

- (void)testStoryboardCell {
    UIStoryboard* myStoryboard = [UIStoryboard storyboardWithName:@"MyStoryboard" bundle:nil];
    SomeTableViewController* tableController = [myStoryboard instantiateViewControllerWithIdentifier:@"RandomTableViewController"];
    // cellForRowAtIndexPath: makes the controller create and lay out the cell.
    MyTableViewCell* cellISoDesire = (MyTableViewCell*)[tableController.tableView cellForRowAtIndexPath:[NSIndexPath indexPathForItem:0 inSection:0]];
    FBSnapshotVerifyView(cellISoDesire, nil);
}

So what I found to be a good way to test storyboard cells is to initialize the whole controller (I doubt you can get around that) and call cellForRowAtIndexPath: to make it create the cell itself.

Another option would be to make the table dequeue a cell, but that doesn’t work reliably. In particular, sometimes the layout is incorrect even if you call layoutIfNeeded. I assume there’s a way to force the view to apply its constraints, but I haven’t found it yet.

Getting back to my solution, the drawback is that the table view will start calling its data source methods, so you need to implement them in your test harness. Well, you might get around that if you are using static cells. But most of the time you’ll need some test data anyway.
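A minimal sketch of such a harness-side data source (all class and identifier names are assumptions):

@interface StubHistoryDataSource : NSObject <UITableViewDataSource>
@end

@implementation StubHistoryDataSource

- (NSInteger)tableView:(UITableView*)tableView numberOfRowsInSection:(NSInteger)section {
    return 1; // one hardcoded row is enough for a snapshot
}

- (UITableViewCell*)tableView:(UITableView*)tableView cellForRowAtIndexPath:(NSIndexPath*)indexPath {
    MyTableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:@"MyCell" forIndexPath:indexPath];
    cell.textLabel.text = @"dev@monologue.com"; // deterministic test data
    return cell;
}

@end

Assign an instance of it to the table view’s dataSource (keeping a strong reference in the test, since dataSource is weak), reload, and then grab the cell as shown above.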

Keeping reference images in sync

Creating and modifying reference images is dead simple with FBSnapshotTestCase. But you still need to remember to keep them in sync. If you change your UI even a little, you need new images. And sometimes this is subtle; sometimes you don’t even know the aesthetics changed. Actually, having a test fail when you change your app’s appearance without knowing is a really good thing. That’s what snapshot tests are for. But it can get annoying when something moves just one pixel and suddenly several tests fail. And if you rely on CI, your builds start failing.

So, apart from always remembering to run tests locally before committing, I don’t know what to recommend. It’s hard to admit, but even after being properly set up, snapshot testing with FBSnapshotTestCase still requires some maintenance.

Jenkins… Hudson… or whoever that guy was

A really cool thing to do with FBSnapshotTestCase is to add it to your CI jobs. Every time someone commits, the project is built and snapshot tests run. If they fail, another job starts that prints that “someone’s” resignation letter and denies him/her access to the building. Oh, and also publishes a tweet that they suck. Or is it just my company that does that?

There is one problem that might occur, though… Since snapshot testing with FBSnapshotTestCase is for the most part only available in the simulator (because your reference images typically reside on your Mac and the iOS device cannot access them unless you copy them into the bundle), your CI server needs to run the iOS Simulator. Typically this is OK, but I personally encountered a problem with our Jenkins.

It would run the tests fine for a while, but when you tried again a few hours later, the job would fail, saying that the simulator failed to respond in time.
The problem is that the Simulator is a GUI application, and services (daemons) have a little trouble starting those. Now, I really struggled fixing this, but in the end I found a solution. It turns out you have to change Jenkins from being a daemon to being a Launch Agent. The idea is that a Launch Agent has permission to start GUI apps.

In order to set up Jenkins as a Launch Agent you can follow this article. Basically, you need to move Jenkins’ plist from /Library/LaunchDaemons/ to /Library/LaunchAgents/. And I suspect this is not specific to Jenkins, so you can apply it to whichever CI you’re using.

Testing on multiple devices

What I really find useful about snapshot testing is running the cases for multiple screen sizes. I don’t know about you guys, but I personally use mostly a single device/simulator while developing, so I always risk having layout issues on the iPhone 6+ or 4s, for example. To tackle this, I’ve set up a build machine task that runs my snapshot testing with FBSnapshotTestCase on several simulators.

This is actually fairly easy using either xcodebuild or xctool:

xcodebuild -project MyProject.xcodeproj -scheme "SnapshotTests" -destination 'platform=iOS Simulator,name=iPhone 5s' -destination 'platform=iOS Simulator,name=iPhone 6 Plus' test

What I’m trying to say is… Yes, you can chain destinations to execute on several devices one after another.

Try it out, it’s free!

And just like that, you’ve learned everything I know about snapshot testing with FBSnapshotTestCase. I hope I’ve convinced you it’s not lame to test using images. And I hope that, just like every teenage girl and every hipster, you’ll start taking pictures of your UI and showing them to your friends. You can even apply filters, I guess.

Seriously though, in the past I’ve always considered snapshot testing terribly inefficient and useless. But with a simple framework like FBSnapshotTestCase, it has become a breeze and by far the easiest way to automate UI testing. So stop acting prejudiced and try it out!

UIAutomation in Xcode

Testing UI automatically

As you know, UI testing is tricky to do automatically, especially on mobile. On the desktop and in web development there’s Selenium, for instance, but smartphones, being embedded devices, are hard to automate. Even test-driven development leaves something to be desired when it comes to the user interface. Most people have to rely on manual testing. However, there is hope! There are frameworks, such as FoneMonkey, that allow programmers to write scripts for functionally testing their applications. But more importantly, though many people don’t know it, such a tool is already integrated into Xcode itself – you can find it in the Automation section of the Xcode profiling tool.

What is UIAutomation?

UIAutomation is part of the “Instruments” application that comes bundled with Xcode. It allows developers to use JavaScript to write scripts that simulate actions on the user interface. This allows a large portion of UI testing to become automatic. Using UIAutomation’s language, users can create touch events (both taps and complex multi-touch gestures), locate visual components such as views and buttons, examine the view hierarchy, create screenshots, check error conditions using predicates and log test results. While it does not provide the functionality of a unit test, it can definitely help you achieve a considerably higher degree of automation. As I mentioned earlier, there are other tools for automating user interface testing, but they cannot achieve the amount of integration and ease of use provided by UIAutomation. FoneMonkey, for instance, requires you to create a separate build target containing all of its libraries, whereas with UIAutomation you only hit the “Profile” button (instead of the “Run” button) and you’re all set. Now, in FoneMonkey’s defense, it is clear that Apple will never allow third parties the amount of access needed to create a fully integrated testing suite, so needless to say, the stock option provided by Xcode will always be the winner in this category.

How does UIAutomation work?

This section is dedicated to some core UIAutomation principles. Though you can skip it if you only want to see some examples, I would recommend reading it so that you know how all parts fit together.

The first thing we should cover is the way UIAutomation identifies components in the user interface. A major strength of this tool is that your script doesn’t normally consist of taps at specific coordinates. In fact, most of the time (when you’re not working with complex touch gestures) you don’t have to bother with coordinates at all. What you do is specify a view component and simulate a touch event on it. This makes your scripts very flexible in the sense that, if you change the layout of your screens or modify it to fit another resolution like the iPad’s, you don’t have to rewrite your scripts. As long as a given component is still present on the screen, the script will find it and keep going. So how does UIAutomation achieve that?

All components in a UIAutomation script are identified by their accessibility label. These can be set both in Interface Builder and programmatically. Normally, this label is used when providing accessibility features in an application, but here it doubles as the “name” of your view in the automation script’s scope.

Note: You don’t always need to have the accessibility label set. There are other ways to identify a view. The label just makes it more consistent.

Setting accessibility labels in Interface Builder

In Interface Builder, each view’s accessibility label can be easily set in the “identity inspector” – the third tab of the inspector pane on the right:

Accessibility options in Interface Builder

It is important to make sure the “Enabled” option is set. Afterwards, you can choose an appropriate name for your view. While you’re at it, you should also think of something that can actually be useful for accessibility.

Setting accessibility labels programmatically

If you’re not using Interface Builder, you can add accessibility labels to your views with the following code:

testedButton.isAccessibilityElement = YES;
testedButton.accessibilityLabel = @"Big red button";

That’s it.

How to start UIAutomation?

As I said, UIAutomation is part of Xcode’s Instruments tool. In order to start it, you need to run your application using the “Profile” option (accessed via Product -> Profile, in the toolbar, or by pressing CMD + I). This will rebuild your application and launch the Instruments app. You should be presented with something like this:

Figure 1: The UIAutomation “New” window

As you can see, the Instruments app provides a wide range of tools for tweaking your application. If you haven’t already, I suggest that you take the time to get to know (and love) them. But for now, let’s choose the “Automation” option and start writing scripts.

Figure 2: UIAutomation’s home screen

This is where the magic happens. By the time you see this screen, the application should have started running. UIAutomation can be used both on the simulator and on a real device, though the simulator has a few limitations (it cannot take screenshots), but more on that later. In the screenshot above we can see that Apple’s GenericKeychain example has been running for 7 seconds, and there is no script currently executing. It even tells us that an error has occurred while trying to run it. This is just Xcode’s fancy way of saying that there is no script to start yet.

As you can see, there is quite a lot going on in UIAutomation’s window. There are timers, timelines, recording buttons and a lot more. However, at least for now, all you’re going to need is the trace log, where you’ll be watching your script run (and fail miserably), and the source code editor, where you’ll be writing your JavaScript. If that scares you as much as it scared me, don’t worry – the JavaScript we will be using is fairly straightforward and self-explanatory.

How to create a script in UIAutomation?

In my opinion, this is where UIAutomation starts to fall short. Its user interface is just not intuitive enough. It supports running only a single script, open files are not indicated well enough in the view, and switching between the source editor, trace log and editor log is a mess. It’s not a big deal, but for something bundled with the Xcode IDE, I expected a lot more.

Anyway, let’s create a new script. In the middle of the left pane, you will see an “Add” button. Clicking it will show a drop-down menu allowing you to create, import or open a recent script. Figure 4 depicts just that:

Figure 4: The UIAutomation window. “1” creates a new script and “2” runs the selected one
Figure 5: The slightly unintuitive way to navigate between the script, editor and trace log perspectives

After you do that, you should be presented with a source code editor, with the following statement already inserted for you:

var target = UIATarget.localTarget();

Figure 4 also shows the way scripts are run. Press the “Play” button at the bottom of the source editor and UIAutomation will immediately start executing. It is important to note that your application will NOT be rebuilt or re-run – the script will start from whatever state you left your app in. So make sure you navigate the user interface to whatever screen your JavaScript expects.

How to write UIAutomation scripts?

The reality is that UIAutomation is not as well documented as other iOS frameworks and tools. Apple’s documentation only shows a few generic examples and does not go into great detail explaining UIAutomation’s features. However, using this article and the JavaScript API reference, you should be able to learn quickly. Here, we will go over the important principles of writing UIAutomation scripts, and after that you should be able to start creating your own by looking up the classes in the API reference.

Let’s start from the beginning. What is that strange piece of code UIAutomation put at the start of my newly created script? “UIATarget.localTarget();”, most commonly seen as “UIATarget.localTarget().frontMostApp().mainWindow()”, is the way you get a reference to the screen currently displayed on the device. It should be the view controller you plan on testing.

Accessing elements in the view hierarchy

The most important task for your script will be accessing different UI components. As mentioned earlier, UIAutomation is not about tapping on certain coordinates, but about identifying views and changing their properties. If you are in doubt where a certain component is located within the view structure, you can use the following code to print it in a convenient manner:

element.logElementTree();

This will print all components in a tree-like hierarchy inside the trace log and even create a screenshot of each view. This makes debugging your script very visual and user-friendly. Unfortunately, creating screenshots is not available when using the simulator.

You can access an element’s “children” using several methods provided by the element’s class. Here are some examples of such methods:

  • tableViews()
  • cells()
  • textFields()
  • secureTextFields()
  • buttons()
  • tabBar()
  • etc.

Since you have a reference to the currently displayed view controller, you can start going down the view hierarchy. You can also go upwards, accessing an element’s superview:

element.parent();

As you can see, most of these methods return an array containing all components that match the requested criteria (for instance, all buttons in the view). If you are unfamiliar with JavaScript, here’s how to access an element of such an array:

element.buttons()[2];

Now, you might say that just hardcoding that “2” is not a very flexible thing to do. What if a new button is added… or the others rearranged? Well, remember that thing about accessibility labels above? You can also refer to components by them:

element.buttons()["Login Button"];

Here, we specifically said that we wanted the button named “Login Button”. In my experience, the name does not necessarily have to be the accessibility label – it can also be the button’s title. In fact, if you don’t want to modify your existing source code, you might just use that. However, button titles are subject to localization and possibly branding, so you might be better off using the accessibility label (if it’s not localized as well).

I would like to emphasize something that caused me frustration a while ago. There is a difference between a text field and a secure text field. You cannot access a secure text field by calling “element.textFields()”; you have to use “element.secureTextFields()”.
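For instance, accessing a login form would look like this (a sketch, assuming a window reference obtained as shown earlier):

var window = UIATarget.localTarget().frontMostApp().mainWindow();
var usernameField = window.textFields()[0];       // a plain text field
var passwordField = window.secureTextFields()[0]; // NOT found in textFields()
usernameField.setValue("dev@monologue.com");
passwordField.setValue("secret");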

Generating touch events

Naturally, the most important job of a user interface automation framework is to generate input events. Fortunately, this is made easy with UIAutomation. I guess generating tap gestures is good enough for most people. Here’s how it’s done:

UIATarget.localTarget().frontMostApp().mainWindow().tabBar().buttons()[4].tap();

As you can see, it is just a matter of calling the “tap” method of an element. The code example above sends a single finger tap event to the fifth button in a tab bar.

UIAutomation can do a lot more than simple taps. It can perform complex multi-touch gestures. I’m not going to go into detail about them, but you can read about them in Apple’s documentation.
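Just to give you a taste, here is a one-finger drag (note that, unlike the element-based calls above, this one does use raw coordinates, which are made up for the example):

// Drag from the bottom of the screen towards the top over 1 second.
UIATarget.localTarget().dragFromToForDuration({x:160, y:400}, {x:160, y:100}, 1);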

Introducing delays between UIAutomation actions

First of all, you should know a little more about the way UIAutomation schedules actions. It is actually very simple – it tries to execute an action and, if your application is not in a state that can accept that sort of event, it waits for a specified amount of time. If it is unable to complete the action before this timeout expires, the test fails.

This timeout can be changed during runtime using the following code:

UIATarget.localTarget().pushTimeout(15);

And to revert that change, all you do is:

UIATarget.localTarget().popTimeout();

In the example, we changed the timeout between actions to 15 seconds and then reverted to the default value, which should be 5 seconds. Five seconds, in my experience, seems to be good enough for most cases, but I guess that if you are waiting for data coming from the network, you might want to consider increasing that value.
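In practice, you would wrap just the slow step (the cell name here is an assumption):

// Give a network-bound screen extra time, then restore the default timeout.
UIATarget.localTarget().pushTimeout(15);
UIATarget.localTarget().frontMostApp().mainWindow().tableViews()[0].cells()["Inbox"].tap();
UIATarget.localTarget().popTimeout();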

Logging information in UIAutomation’s trace log

Naturally, UIAutomation provides support for logging arbitrary information in the trace log. Most actions your scripts perform are already logged, but it is always nice to be able to provide additional information for debugging. Especially since UIAutomation does not allow you to run multiple scripts, you will need a way to log which test you are running in order to know what went wrong if the script fails. The UIALogger reference lists all supported logging methods. Overall, you will be using “logFail” and “logPass” to indicate the test result, “logStart” to mark the beginning of a new test, and “logMessage” (or any other method with a different log level) to write important data into the trace. Unfortunately, unlike its Objective-C counterpart NSLog, UIALogger does not provide a printf-like signature, but you can, at least, concatenate objects into the log message:

UIALogger.logPass("Hello " + "world");

I just had to include a “Hello world” reference. Also, some JavaScript environments provide a “sprintf”-like function, but it is not available here.
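Putting the logging methods together, a script might structure its tests like this (the test name and steps are made up):

UIALogger.logStart("testLogin");
UIALogger.logMessage("Filling in credentials...");
// ... taps and gestures go here ...
var loginSucceeded = true; // replace with a real check (see the section on verification below)
if (loginSucceeded) {
    UIALogger.logPass("testLogin");
} else {
    UIALogger.logFail("testLogin");
}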

Another useful trick is the ability to capture screenshots. This only works on iOS devices, not the simulator. Try this:

UIATarget.localTarget().captureScreenWithName("ScreenShot");

An image of the currently displayed screen should appear right in the trace log. Very useful if a test fails and you want to save whatever state the user interface was in when it happened.

Figure 6 shows what a typical trace log looks like.

Figure 6: A trace log showing several failed tests

Verifying test results

Since UIAutomation is a testing platform, it is obviously going to need a way to verify that the application being tested is acting the way it should. Unfortunately, there isn’t an integrated way to verify test results. For the most part you will have to come up with your own pass criteria and perform checks manually.

An example: did the application log in? Well, how do you define a successful login? The isLoggedIn method returns YES? It does, but you’re not debugging, and you don’t have access to the isLoggedIn method in your script. Hmmm…

In the example above, your verification will have to check that your user interface shows the screen that is displayed right after login. Something like “the client zone screen appears” or “the tab bar is now visible”. Choosing a criterion is not always easy. You have to choose something that will ALWAYS be true for every successful login, but false for every failed attempt. It also has to be something that will still work after some UI changes are made. For the most part, test result verification looks something like this:

var logoutButton = UIATarget.localTarget().frontMostApp().mainWindow().buttons()["Logout"];

if (logoutButton.isValid()) {
    UIALogger.logPass("Application was able to login");
}
else {
    UIALogger.logFail("Login failed");
}

Here, the script tries to get a reference to a button named “Logout” inside the screen’s view hierarchy. The test is considered to have passed if such a button exists.

To make your life easier, UIAutomation has support for predicates, using the same syntax as NSPredicate:

var textField = screen.textFields().firstWithPredicate("value == 'Username'");

Here, predicates are used in order to find the right text field. Of course, the example is overly simplistic, but using predicates you will be able to verify complex conditions.
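The same technique works on any element collection. For instance, to find a label whose text starts with a given prefix (the “Error” prefix is made up):

var errorLabel = UIATarget.localTarget().frontMostApp().mainWindow().staticTexts().firstWithPredicate("name beginswith 'Error'");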

Handling alerts

While testing on a real device, execution might be interrupted by unexpected events such as an incoming call, a local notification or even a low battery warning. Also, your application will probably be using some alerts of its own. UIAutomation provides functionality for handling these alerts. Intercepting an alert can be done in the following way:

UIATarget.onAlert = function onAlert(alert){
    var title = alert.name();
    UIALogger.logWarning("Alert with title '" + title + "' encountered!");
    return false;
}

Every time an alert is shown during the execution of your script, this function will be called. If it returns false, the alert will be dismissed automatically by tapping the cancel button. Alternatively, you can handle the alert yourself if you need some additional processing:

UIATarget.onAlert = function onAlert(alert) {
    var title = alert.name();
    if (title == "This error") {
        alert.buttons()["Ignore"].tap();
        return true;
    }
    return false;
}

This demonstrates that you don’t have to dismiss the alert outright – you can decide which option to choose and tap that button yourself.

Conclusion

UIAutomation is available with Xcode 4.5 and later and is a tool that definitely deserves your attention. Use the comment section to let me know – will you be using UIAutomation in the future, and do you plan on teaching your QA team to write automation scripts?

Thanks for reading!