21 December, 2010

The Sound of Testing

I believe in The Testing Zone - it is the closest I ever get to my testing dream world. For me, the enchanted doorway that lets me into the zone is music. But selecting the proper soundtrack for the test session is a delicate task.
 
Rapid, exploratory regression testing requires something upbeat. My preferred choice is The Sisters of Mercy in general and the track This Corrosion in particular. Rammstein usually does the trick too.
 
Exploring new functionality or previously uncharted territory needs something a bit less intense, but still catchy – Johnny Cash and Get Rhythm is always a good choice here.
 
Careful investigations of behaviours I do not yet understand beg for something slow and subtle, such as Ane Brun.
 
When selecting music it is also important not to neglect the E-factor. ‘E’ as in total embarrassment, that is. Some music really grabs me, and it might happen that I start singing out loud or – heaven forbid – make small dance moves. That kind of music is banned in the lab. Sorry Fischerspooner, home use only!
 
I have not started practising it yet, but I also think every new delivery deserves a theme song. For the next one I think I will pick Verdi’s Nabucco for that special grandiose feeling. It is bound to set the right mood!

03 November, 2010

Click OK to crash

I - of course - use a risk-based approach in my testing, meaning that checking the exact phrasing of error messages is very far down on my prioritised list of test tasks. As long as the message is relevant and understandable I am happy enough.

I have an old favourite error message that is hard to compete with. A couple of years ago I was using a free program that was really fantastic, but it would occasionally and unpredictably crash, in which case the simple yet suitable error message 'Click OK to crash' was shown. There was an OK button, the only thing you could do was to click it, and yes - the program crashed.

The error message completely failed to give me any clues as to why the program crashed, but it certainly told me very clearly what was going to happen, and it put a smile on my lips, even though I would lose my unsaved work. In my opinion simplicity in error messages is definitely preferable, unless you are actually going to put some useful information in there. Do not try to obscure the fact that you messed up by using tech lingo or providing a pointless novel.

I also remember painfully well working on my thesis late one night when I encountered the king of all error messages: Kernel panic. No ambiguity there - I panicked too.

If there is a point hiding in here it is this: error messages are important - spend some time on them. Think usability, not debugging.

When I dream

Trying to fall asleep last night I had all sorts of thoughts popping up in my head, and since I know not to trust my brain around bedtime I jotted down some notes and a little sketch to review in the morning.

The sketch was certainly more glorious in my head - there were no stick figures but real people, and it was all very colourful and alive. I could really feel the cape billowing in the wind. Anyway, it captures how I like to view The Software Tester. What is missing, though, is the tester's attribute: what should be held in the outstretched hand? Suggestions are welcome.



Oh...I've been a developer and I do project management. There.

28 October, 2010

Physicists - Testers in disguise?

My background is in science. I have spent 11 years (a third of my life, believe it or not) studying mathematics, statistics and most importantly - physics, experimental astroparticle physics to be specific.

I have been trained
  • to be sceptical
  • to question
  • to think analytically
  • to think logically
  • to be curious 
  • to try to understand how things work rather than accepting stated facts
  • to explore
All of these, I think, are very good qualities for a tester too.

My research consisted of searching for a signal in a data set made up mainly of background noise. Feel free to read my thesis. In order to do my research I had to write my own software. Since the results of using my software to process the data were going into my thesis I had to test that the software was behaving as I expected it to in an attempt (futile maybe) to minimize the risk of making a complete fool of myself.

I claim that testing in a wider meaning of the word comes naturally to experimental physicists, even when talking about software testing. The life of any experimental physicist consists of
  • data acquisition
  • data analysis, more often than not using some homemade software
  • publishing results from data analysis
Publishing (preferably interesting) results is the basis of your career: if you do not publish, you do not exist. Imagine what would happen (and does happen) if you publish results you later have to retract because your software is found to have severe defects. Physicists are aware of what is at stake - and unlike what is generally the case in the software industry, every mistake is going to hurt the physicist personally.

Hence physicists - and all other scientists with integrity - test their software tools meticulously to make sure they understand how they work and that they work as expected. It is not the kind of strict, structured testing that ISTQB would approve of, but the physicists have their hearts in the right place. They want things to work and be reliable, and is that not really just what we all want?

SWET1 - Swedish Workshop on Exploratory Testing

Högberga gård, October 16-17, 2010 

Participants: Michael Albrecht, Henrik Andersson, James Bach, Anders Claesson, Oscar Cosmo, Rikard Edgren, Henrik Emilsson, Ann Flismark, Johan Hoberg, Martin Jansson, Johan Jonasson, Petter Mattsson, Simon Morley, Torbjörn Ryber, Christin Wiedemann

Several very nice accounts of SWET1 have already been given, but now that two weeks have passed I feel ready to share my personal reflections on the weekend.
 
I had never participated in anything similar before, and was unsure of what to expect. I did have rather high expectations, but even so I was overwhelmed by the sheer intensity of the discussions. There was so much energy and so many ideas and thoughts flying around that by the end of the first evening I was suffering from a total intellectual meltdown and had to go to my room and install some mind map software to relax...

The whole following week my brain was infested by a swarm of ideas bouncing around inside my head, but by the second week it had started settling down and sinking in, and by now I feel fairly recovered.

Spending a day and a half talking about nothing but exploratory testing was of course very stimulating and inspiring. Everyone took part actively and contributed in a unique way. My peers provided me with ideas, hints, tips, tool suggestions and general encouragement that really gave me a push forwards as a tester.

The best thing though was the shared joy of testing.

Thank you everyone.


21 October, 2010

Spinning threads into yarn

Recently, my approach to testing has been heavily influenced by session-based test management. I have made a test plan consisting of a high-level list of test tasks. The testing has been exploratory, performed in sessions on a given topic, e.g. a function. I have two problems with this:
  1. As much as I like lists, they make bad test plans - at least for me. There are always too many tasks, so the list becomes too long, covering several pages in a document and making it hard to get an overview. It is also difficult to depict relationships. I have tried different groupings and headers, and managed to create nightmare layouts that are impossible to read. A list is also highly binary - either a task is done or it is not; there is no equivalent of "work in progress".
  2. I would rarely be able to finish a session without interruption. Something urgent would come up and I would have to abort the session, and when restarting the conditions might have changed. As I discussed in the post on October 17th, I also tended to feel obliged to complete the session before I took on a new task, even though more important matters might have surfaced after the session started. In this situation it was of course hard to keep track of the status of the tasks.
The appeal of thread-based test management is of course that I can perform test tasks in parallel - it is not necessary to say that a test task is done. Instead I can scrape the surface of everything once and sort of work my way down to the insignificant details from there.

I have resolved to use a different approach for the next test period. This is what I have done so far:
  • I have installed the open source mind map tool FreeMind
  • I have created a mind map (I call it fabric) with
    • Error reports
    • Product heuristics
    • Generic heuristics
  • Since the fabric only contains short thread names, I have introduced knots that are (k)notes in simple text format that I link, or tie, onto the threads. The notes contain additional information such as hints, tips and reminders.
  • I have compiled the stitch guide. The stitch guide provides guidelines on how I think my project should use thread-based test management. The guidelines are not rules, but suggestions intended to promote consistency.
  • I have a template for daily status reports. The report can contain anything I feel needs writing down during the day, but should at least contain the names of the threads that have been tested in some way (see the little sketch below the figure for one way of pulling those names straight out of the fabric). I am currently looking for a more fun name than "Daily status report".
  • The actual testing will of course be exploratory.
A fabric. Colours and icons are used to show priorities and status. The red arrow indicates that there is a knot tied onto the thread.
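Since FreeMind stores its mind maps as plain XML, the fabric can also be read programmatically. Below is a minimal Python sketch (my own illustration, not part of FreeMind or of thread-based test management itself) of how the thread names for the daily status report could be pulled straight out of the fabric. The file name fabric.mm and the convention of marking a touched thread with the built-in "flag" icon are assumptions made up for the example.

  # Sketch: list the threads in the fabric that have been touched.
  # Assumptions (mine): the fabric is saved as fabric.mm, and a thread that
  # has been tested carries the built-in "flag" icon in the mind map.
  import xml.etree.ElementTree as ET

  def threads_touched(fabric_file="fabric.mm", touched_icon="flag"):
      root = ET.parse(fabric_file).getroot()    # the <map> element
      central = root.find("node")               # the central fabric node
      touched = []
      for node in central.iter("node"):         # walk all threads and sub-threads
          icons = {icon.get("BUILTIN") for icon in node.findall("icon")}
          if touched_icon in icons:
              touched.append(node.get("TEXT"))
      return touched

  if __name__ == "__main__":
      print("Daily status report")
      for thread in threads_touched():
          print(" - " + thread)

Nothing in the approach depends on a script like this; it is just one way of letting the fabric double as raw material for the report instead of writing the thread names down twice.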
My plan is to use the first version of the fabric as my test plan. During the test period I will keep updating the fabric, and at the end of the test period the current status of the fabric will be my test report.

Note to the reader: This is my current interpretation of thread-based test management, and this constitutes my starting point. Hopefully it will evolve into something that I find useful, and turn out to be an improvement compared to today. If not, I will not hesitate to chuck it and try something new. I have big hopes though and cannot wait to get started!

17 October, 2010

Picking up a new thread

I have just spent the weekend at a very inspiring peer conference on exploratory testing (Swedish Workshop on Exploratory Testing, SWET1) in Stockholm.

There were many interesting presentations and discussions, but what is on my mind right now is James Bach's presentation on thread-based test management, http://www.satisfice.com/blog/archives/503. I have been trying to adopt Session-Based Test Management (SBTM) for a while, but never managed to do any proper time-boxing since I would typically be interrupted in the middle of a session and have to abort or restart.

Quite often I will start a test activity, be interrupted and not finish the session, start a new test activity, not complete that session either for whatever reason and so on. Working this way makes me stressed since
  • It feels like I never finish anything.
  • I do not have an overview of what I am doing and what the status of the different tasks is.
I have also come to realize that in some cases I feel a bit hemmed in by the actual session. Even if the conditions change or something urgent comes up I still feel obliged to finish the session before I start a new activity. In those cases when I work in a chaos of sorts I think this perceived need to "be loyal" to my session reduces my efficiency.

So, I'm going to have a go at thread-based test management instead, and I start by making a mind map!