Since we started gradually adopting BDD over a year and a half ago, testing has increasingly become a key part of the development process at ustwo.
During that period we have interviewed a fair number of candidates, but most of them fell rather short of the skills they would need on a daily basis.
So what’s testing like around here?
The biggest hurdle for most candidates is the technical one. Test automation requires writing a fair amount of code, and if not planned carefully it turns into spaghetti pretty quickly.
We do a lot of native mobile applications (mostly iOS and Android, but sometimes Windows as well) and the automation tools for mobile are nowhere near as mature and stable as their web counterparts. Sometimes this means you have to roll up your sleeves and contribute, which we have done for both Calabash Android and Calabash iOS.
Mobile apps also bring another interesting dimension to testing: hardware. Camera, GPS, accelerometer… if the application uses them, then we need to mock their responses or settings. For example, if an application uses the user’s location, we need to test how it behaves when the user has turned location settings off.
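To give a flavour of what that mocking looks like, here is a minimal sketch in plain Ruby. Everything here is illustrative (the class and method names are invented, not from a real project): the app’s location lookup sits behind a small provider object, so a test can swap in a fake that simulates the user having turned location services off.

```ruby
# Hypothetical sketch -- all names are illustrative. The real thing would
# drive the device/simulator, but the principle is the same: inject a fake
# for the hardware dependency and exercise both branches.

class FakeLocationProvider
  def initialize(enabled:, coords: nil)
    @enabled = enabled
    @coords  = coords
  end

  def location
    # Simulates the OS refusing to hand out a location.
    raise "location services disabled" unless @enabled
    @coords
  end
end

def nearest_store_message(provider)
  coords = provider.location
  "Nearest store to #{coords.join(',')}"
rescue RuntimeError
  "Please enable location services to find your nearest store"
end

puts nearest_store_message(FakeLocationProvider.new(enabled: false))
puts nearest_store_message(FakeLocationProvider.new(enabled: true, coords: [51.5, -0.1]))
```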
Testers are also likely to be testing on all the main platforms at some point or another, so they need to find their way around each platform’s languages (Java, Objective-C, HTML, JS, CSS), IDEs and editors (Xcode, Eclipse, IntelliJ), automation tools (Calabash, Robotium, Appium, Selenium), deployment mechanisms, signing processes, etc.
At ustwo we mostly use Git as our SCM (we have talked about branching strategies in the past), so testers constantly need to switch between branches, bisect and blame to find out when and where a bug was introduced.
Lastly, every app in the world these days needs data from external sources, either fully under our control (the client’s backend) or from third-party services (something like Stripe). To fully test how the application under test behaves, particularly around corner cases, we need full control over the server the app is pulling data from.
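As a concrete illustration, a controllable mock server can be surprisingly small. This is a stdlib-only Ruby sketch, not our actual implementation — the class name, port and API are invented — but it shows the idea: each scenario dials in what the next response looks like, whether that is a normal payload, a 500, broken JSON, or a delayed response.

```ruby
require 'socket'

# Hypothetical sketch of a scenario-controllable mock server. Names, port
# and API are illustrative, not from our real test harness.
class MockServer
  attr_accessor :status, :body, :delay

  def initialize(port)
    @status = 200
    @body   = '{"ok":true}'
    @delay  = 0
    @server = TCPServer.new('127.0.0.1', port)
    @thread = Thread.new { loop { handle(@server.accept) } }
  end

  def stop
    @server.close
    @thread.kill
  end

  private

  def handle(client)
    # Consume the request line and headers (enough for simple GETs).
    line = client.gets
    line = client.gets while line && line.strip != ''
    sleep @delay # "slow" mode: delay every response by a fixed amount
    client.write "HTTP/1.1 #{@status} Mock\r\n" \
                 "Content-Length: #{@body.bytesize}\r\n" \
                 "Connection: close\r\n\r\n#{@body}"
    client.close
  end
end

server = MockServer.new(8099)
server.status = 500       # force a server error for the next request
server.body   = '{"oops"' # ...or hand the app broken JSON
```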
Since the mock server is booted from the same Ruby code that executes Cucumber, it’s very easy to force specific server responses. These could be “normal” server responses, but also any kind of server error (400, 500), unexpected JSON, broken JSON, or running the server in “slow” mode (delaying all requests by a certain amount of time), etc.

For the same reason, we have full access to the log of requests to the server. At the moment we only use this as a “visual” cue (after a while you very quickly spot weird request patterns), but it wouldn’t be difficult to automate this process, i.e. write a script that takes the server logs generated during a scenario and compares them to an expected set of calls.

Since the server runs on the machine that executes the tests, we know that the only app hitting it is the app under test (much harder to guarantee on a staging server, if not impossible).
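That log-checking automation could start as something as simple as this sketch. We haven’t built it, and the names and log format here are invented for illustration: given the requests recorded during a scenario and the set of calls we expected, flag anything missing or unexpected.

```ruby
# Hypothetical sketch of comparing a scenario's request log against an
# expected set of calls. The log format ("VERB /path") is illustrative.

def compare_request_log(logged, expected)
  {
    missing:    expected - logged,  # expected calls the app never made
    unexpected: logged   - expected # calls we didn't plan for
  }
end

logged   = ['GET /session', 'GET /products', 'POST /analytics']
expected = ['GET /session', 'GET /products', 'GET /basket']

diff = compare_request_log(logged, expected)
# diff[:missing]    => ["GET /basket"]
# diff[:unexpected] => ["POST /analytics"]
```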
Ummm, like, ain’t gonna happen!
Does it sound like a lot? We know that the skills required to implement all of the above are much closer to those of a developer, so maybe we should get developers to implement their own UI test automation?
That is certainly a route we are exploring: just as developers write their own unit tests, they can also write their own BDD implementations, bringing the requirements for testers back to “normal”.
Still, even if developers implement their own TDD and BDD tests, we would prefer technical testers (devsters?), since they need to understand the whole enchilada: chiming in when deciding whether something is going to be BDD’ed or TDD’ed, reviewing the developer’s implementation, etc.
Believe it or not, the technical side of things is just half of the work. The other half is more “traditional” testing work, such as making sure we use testing time wisely, exploratory testing, etc.
There is also a very important task that some testers are not used to: leading scenario-writing sessions.
Say a team is developing an iOS and Android app. The team is made up of 4 devs (2 per platform), 2 testers (1 per platform), 1 interaction designer, 1 visual designer and 1 coach.
Since there are 2 devs per platform, only 2 stories should be opened at the beginning of the sprint. The first thing to do, then, is to write tests for those stories. The team gets together and reviews the material available to them (ACs, maybe some mockups or wireframes), some discussion starts, and typically the tester captures the conversation in the form of BDD tests. The tests are displayed on a screen, with the whole team looking at them and commenting as they see fit.
It’s in these sessions that the magic happens, and where we’ve seen the biggest benefits of BDD. A ton of simple miscommunication problems are solved on the spot, and because the majority of the team is present, we massively decrease the chances of knowledge gaps. We also reduce the need to use other communication channels (chat, email, Basecamp) to effectively repeat what’s been said in the test-writing session.
These sessions take about 30 minutes per story and cover mostly the happy path scenario. Once the session is over everyone in the team has a clear understanding of what the story is about and how to fulfil its business requirements.
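For illustration, a happy-path scenario coming out of one of these sessions might look something like this. It’s an invented example, not from a real project:

```gherkin
Feature: Store finder
  As a user, I want to find my nearest store
  so that I can visit it.

  Scenario: User finds their nearest store
    Given I have granted the app access to my location
    When I tap "Find my nearest store"
    Then I should see a list of stores ordered by distance
```

The edge cases — location services off, no stores nearby, flaky network — become the extra scenarios the testers add afterwards.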
Testers still have the job of going all OCD about edge cases and creating scenarios for those.
Please note that we haven’t mastered all of this ourselves and are learning as we go. Also, it’s up to each team to self-organise, spread these tasks evenly and assign them to the most suitable person. At the end of the day, the whole team is responsible for the quality of the product.
Get in touch
If any of the above sounds interesting or total bollocks we are always happy to talk geeky, just get in touch.
Always happy to ramble about software development over a beer!