Thursday, February 19, 2015

Regulations on medical software development kill more than they save


After two years as a software testing specialist in the medical device industry, I decided to quit my job. The reasons were many, but perhaps the biggest one is the insane situation into which regulations have taken the industry.

To summarize, I have come to believe that these regulations kill more people than they save. This is because of:
- the unbelievable opportunity cost
- the delay in time to market
- killing of innovation
- restricting of competition
and because of the fact that, regardless of all those regulations, you can still fake it all.

Actually, it is easier to fake it than to follow.

The main idea in the regulations is that everything needs to be planned in detail, executed accordingly, and documented to have been done this way. And by planned in detail, they mean that there needs to be a documented plan specifying every action that will be taken. If you do something that was not in the plan, you are in trouble. If you don't do something that was in the plan - boy, are you in trouble. The only plan that documents everything you will do and nothing you won't is a plan written afterwards. So the only way to create audit-proof plans is to write them after the work has already been done. Smart.

Everything that is done also needs to follow well-established processes, of course. And you need to prove that the processes have been followed. So the audit-proof way of writing processes is to fill them with steps that can easily be proven to have happened, and that don't generate too much documentation load when proving it. This means the processes are written for the auditors, not for the people who should follow them. So the manageable way to do the process dance is to have the written process generate the documentation, and an actual process generate the product. Efficient.

And when the auditors pop by, they always find issues (probably the easiest job in the world), which need to be rectified by adding something new to the process, which again grows the part written for the auditors. If you actually find something you should improve, you must not write it into the process, because then you would need to prove you are following it, which means arbitrary metrics and more documentation. And by adding things to the process, you also hang yourself: you can no longer improve or change it efficiently. So better not to add anything. A real atmosphere for continuous improvement.

You may at this point wonder what is so bad about writing documentation. I'll tell you what. First, anything you write into a document needs to be carefully considered for how the auditors might interpret it, which is a real burden. Second, every document needs to be reviewed and authorized after EVERY change you make. Which is horrible. Third, there is a 99% chance that no one will ever read those documents, which is not the most motivating thing either.

I could rant about so many things, but as this is a testing blog, I'll switch to that now.

Nothing, absolutely nothing, is as badly misunderstood by the auditors as testing. Testing, for them, means verification of requirements through pre-planned, pre-reviewed, extremely detailed test cases describing what to do and what the expected result is. The tests must be run exactly as written, and objective evidence needs to be provided for each result. So if you execute a test case and everything works perfectly, but some step cannot be followed exactly the way it is written, you need to rewrite, re-review, and rerun the test. The only way to survive this is to write the test case against the already-built system, writing down the step-by-step instructions as you execute them. And even then, the cases are really hard to write and really burdensome to maintain. The tests are not designed or executed to find problems, but to prove that those tests can be run. And because of the huge amount of work put into them, you generally want to keep the number of new tests as small as possible. The actual quality or coverage of the tests isn't that important either.

This has nothing to do with software testing, but it is what the auditors require. As a funny anecdote: while I was pushing for more high-level tests and more skills for the testers, we got a warning from the auditors stating that we needed more detailed test cases - detailed to the level that anyone could run them. What probably makes it even more painful is that, since there are a lot of scientists in the medical domain, there seems to be a general understanding that good testing means and requires exploration and investigation. But that is not at all what the auditors expect. And it is hard to keep pushing to do a good job in testing when what is expected of you is just a demonstration of bad checking.

Funnily enough, in all my work as a software tester, I have never tested as little as I have here. Or perhaps funny is not the right word.

Talk about a tester's hell.

I must raise my hat to my colleagues who keep trying to do the best possible work under these conditions - who try to do everything with the customer in mind while still pleasing the auditors. I could not, and I will now try to do my bit in changing the way auditing is done in the medical domain from the outside. I hope that I can return to the domain one day, either as a stronger person, or preferably to a domain where the regulations actually work.

If any auditor should by any chance happen to read this: let's talk, please. What we need in this domain is more communication and collaboration, and focus on what it is we are really trying to do: provide safe, working products to people who should be able to rely on them.

Thursday, June 19, 2014

Testing playbook #1



So it's summertime again, and my year-long spell as a scrum master is coming to an end. Starting in August I'll be a full-time tester again! (Or as much of a full-time tester as you can be when working in a scrum team..) I'll likely write up some experiences of being a scrum master later (summary: hard, abstract, frustrating, fun), but for now I thought I'd start writing about testing again. I'll begin by sharing some of the tactics I often deploy when approaching testing situations. So this should become a series, until I get bored with it.

So I'll start with my classic, my "master" tactic, which often acts as a framework for other, more subtle testing tactics. This is a rather common way for me to approach situations that are more on the exploratory and less on the confirmatory side, and/or situations where the thing under test (it may not always be the system) is not yet that familiar to me. It's rather simple:
1. Take a tour
2. Decide the target
3. Go with the happy flow
4. Divide and conquer
5. Give em hell

To open it up a bit: taking the tour means a shallowish walkthrough of the building blocks of the thing I am looking at. The aim is to get a general picture of the potential parts, interactions, scenarios, etc. that I could be testing. After doing this for a while, I then decide my target.

Deciding the target is often really essential, and it has a few important aspects. Firstly, it should be somehow relevant to the general testing mission. Spending a lot of time on an unimportant piece is usually not a good idea (although spending some time on one often is). Secondly, the size of the target should reflect the amount of time available for testing before the next break, in such a way that you have to push yourself to generate enough ideas for the whole time. So: a rather small piece. Having too many options to choose from makes my testing sessions a bit too vague, and actually makes it harder for me to come up with good test ideas and observations. So make it small enough. Thirdly, it usually helps if you are at least somehow curious about the piece - it makes me work so much better. Then you start with the happy flow.

Going with the happy flow means exercising your target in the easiest/most common way you can imagine. For some people, which test (happy, nasty, interesting, boring) they execute first may not make a big difference, but for me it does. In both testing and life, I have a strange habit of leaving the most interesting things for last. E.g. if I eat a bag of mixed candy, I want to be sure that the last piece in the bag is (one of) the best-tasting there is. Or the last piece of the steak I'm eating should be the juiciest, biggest piece. Why? Because otherwise I may lose interest. And coming back to testing: after I lose interest, I become a much worse observer. So I generally want to go with the happy flow first, and I usually hope it succeeds rather well (although on many past projects I think I've never even gotten past the happy flow, because even that has never worked).

Divide and conquer is something I do really often. Basically it means repeating the same thing over and over while modifying only one (or perhaps a couple of) variables at a time. Inputs and outputs are the classic example: change one input at a time and look at what happens to the output (while of course observing everything else too - we're testers, right?). Are the outputs behaving as expected? This is, btw, also my common way of trying to reproduce an intermittent issue: do the exact same thing over and over again, and change as little as you can. With non-reproducibles I usually get a bit obsessed with this, doing EVERYTHING exactly as it was when I first experienced the issue - even the position I am sitting in.. (Btw, inventing a non-reproducible issue and then starting to hunt for it is a powerful testing tactic too. More on that in a later post.)
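To make the one-variable-at-a-time idea concrete, here is a minimal Python sketch. The function under test, calculate_price, and its parameters are made up for illustration; the point is the structure: keep a known-good baseline from the happy flow, and vary exactly one input per run while watching the output.

```python
# Hypothetical target: any function whose output we can observe.
def calculate_price(quantity, unit_price, discount_percent):
    return quantity * unit_price * (1 - discount_percent / 100)

# A known-good baseline, taken from the happy flow.
baseline = {"quantity": 1, "unit_price": 10.0, "discount_percent": 0}

# Divide and conquer: vary exactly one input per run, keep the rest fixed.
variations = {
    "quantity": [0, 2, 1000, -1],
    "unit_price": [0.0, 0.01, 99999.99],
    "discount_percent": [50, 100, -5],
}

for name, values in variations.items():
    for value in values:
        args = {**baseline, name: value}
        result = calculate_price(**args)
        # Observe what a single change does to the output.
        print(f"{name}={value!r:<10} -> {result}")
```

The same loop shape works for reproducing intermittent issues: the baseline is "exactly what I did last time", and each run perturbs one thing.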

Finally we get to Give em hell, which naturally means giving the target everything you've got in order to break it. That could mean extreme flows, abysmal inputs, scarce memory conditions, switching the system time, breaking connections, etc. Basically, here you do things where you expect the system to fail, and see how gracefully that happens. The problems you find here may or may not be that relevant, but boy will they be interesting :)
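For the abysmal-inputs part, here is one way it can look in code. This is only a sketch; parse_quantity is a hypothetical stand-in for whatever the real target is. The idea is to throw a grab-bag of hostile inputs at it and watch how gracefully it fails.

```python
# Hypothetical target: should reject bad input with a clear error, not crash.
def parse_quantity(raw):
    return int(raw)

# A grab-bag of abysmal inputs.
hostile_inputs = [
    "",                          # empty
    " " * 10_000,                # a flood of whitespace
    "-1", "0", "9" * 40,         # boundary-ish numbers as text
    "NaN", "Infinity",
    "'; DROP TABLE orders;--",   # injection-shaped string
    "\x00\x01\x02",              # control characters
    "🦄" * 1000,                 # non-ASCII flood
    None,                        # wrong type entirely
]

for raw in hostile_inputs:
    label = repr(raw)[:30]
    try:
        print(f"{label:>30} -> {parse_quantity(raw)}")
    except Exception as exc:
        # A graceful failure is fine; we're watching *how* it fails.
        print(f"{label:>30} -> {type(exc).__name__}: {exc}")
```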

So that's my very common testing tactic. In football terms, it is my 4-4-2. Next time I'll share my common tactic for starting to test something totally new that I know very little about beforehand. Teaser: I do not start by reading the documentation.

Wednesday, September 4, 2013

Live and learn


In spring I wrote about a big change in my career: continuing to work as a tester but changing almost all other aspects of the working context, namely:

From:
- a customer acceptance testing role to developing our own products
- waterfallish to agilish project models
- a business-oriented & simple domain to a mission-critical & complex one
- 8000 km from the developers to a few meters from them
- consultant to employee
- non-technical to technical

So how has it been?

Comparing my new job with the jobs in the organizations I've previously worked for isn't really possible, or fair, due to the many contextual differences (trust me, I worked on a blog post comparing them for a long time before eventually scrapping it). But there are still huge changes that have happened to me as a tester, which I'll briefly discuss here.

Information overflow
The biggest change concerns getting information about the contents of the changes I am testing. I had gotten used to spending a lot of time digging through different communication layers to get some rational data about what, why, and how the changes were made. Now it's almost the opposite: very approachable developers eager to help me, and an extremely dedicated product owner sitting behind me. Mandatory descriptions of how changes are made and how they should (at least) be tested are documented and available. Full access to the code repository to see the specific code changes made per change. All kinds of documentation about the many aspects of the system we are working on. Now my problem is figuring out which sources of information to use and when, and understanding the information I get.

Testability
It was previously just a word for me. I had gotten used to having testability built in through the user interfaces, and having little or no need for any "test features" (now that I think about it, I would have had a need for those many times...). Now, after working on algorithm development where there is initially little or no control over the inputs, and little or no visibility into the outputs they produce, the situation is really different. I have to be active in asking for and suggesting ways these variables could be modified dynamically during testing, and in trying to convince (myself and others) that they are worth the development effort needed. Today, test interfaces are my trustworthy friends, and logs are my lovers.
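To illustrate what I mean by a test interface (an entirely hypothetical sketch, not our actual algorithm): give the algorithm a seam through which a test can observe the intermediate values, plus debug logging, so neither the inputs nor the outputs stay buried inside the pipeline.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("smoothing")

def smooth(samples, window=3, on_step=None):
    """Hypothetical signal-smoothing algorithm with two test seams:
    debug logging for visibility, and an on_step hook so a test can
    inspect each intermediate value without going through any UI."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        value = sum(chunk) / len(chunk)
        log.debug("i=%d chunk=%s -> %.3f", i, chunk, value)
        if on_step is not None:
            on_step(i, chunk, value)  # visibility seam for tests
        out.append(value)
    return out

# In a test, the hook gives direct visibility into the internals:
captured = []
smooth([1, 1, 10, 1, 1], on_step=lambda i, chunk, v: captured.append(v))
assert captured[2] == 4.0  # (1 + 1 + 10) / 3
```

Whether such a hook is worth the development effort is exactly the conversation I keep having; the sketch just shows how cheap the seam itself can be.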

Authority
As a consultant, I had started to think that I was a software testing expert. I had started to think I knew the secrets of the software testing business. I had started to think that I was to software testing what Neo was to the Matrix. Awakening from the daydream was pretty rough (actually, it also kind of felt like Neo's awakening in The Matrix looks :) ). So currently, I think, I'm looking at the world of testing with somewhat more open eyes and open ears (though I think it is already passing and my ego is starting to rise again - witness this blog post).


Agile testing
Hey, not that much has changed there. Of course there are differences due to the changes in context, but testing is still testing: exploring, learning, and experimenting. Huib Schoots said it beautifully, and I agree:

there is no Agile testing, but instead testing in agile context.

Live and learn.


Monday, June 10, 2013

All ketchup bottles are not red.


I often have trouble finding the ketchup bottle in our refrigerator. At some point I started to notice this, and eventually realized why: I was looking for something red, but the Heinz ketchup bottle isn't actually red - it's transparent. So after it has been used for a while (I have a three-year-old son, so it doesn't last very long), the bottle becomes mainly transparent in color. And when I start looking for the ketchup, I can't find it, because I'm searching for something red.

Today I had a short discussion with my wife that went about like this:
Wife: Do you know where's the ketchup?
Me: The bottle isn't red, try looking for a transparent bottle.
Wife: Oh there it is.

Get the lesson already? Well, I have another recent experience of a similar kind.

Last week I took part in the Helsinki Testing Day as a test lab assistant. I was responsible for my own stand, where I had created a few testing tasks for an open-source web shop. While trying to teach the people hovering around some testing-related lessons, I was also experimenting with the tasks. My devilish plan was that I had created tasks for testing the same areas of the app but with different stories behind them, as I wanted to see how that affects people.

One of the tasks was to test a product review function, in which I had noticed a small issue while testing it lightly. I had created two tasks:
1. a test case with execution steps for testing the function
2. a story that went: "We have received complaints about the product review section. It seems the same people are reviewing the same products many times, which seems a bit odd.. Could you please investigate what might cause this?"

The two people running the first task didn't really find anything to report. The three people running the second one found multiple issues in the function - many that I hadn't found myself - some of them pretty solid and some maybe less so. I think this happened (yes, I know the number of testers isn't statistically significant enough for this kind of conclusion, but why let that get in the way of a good story) because in the second case they thought they knew that there was a specific issue in it.

So, on to the lesson then: if you think you know that there is a problem of a certain sort, it's a lot easier to find that problem. If you are looking for a red ketchup bottle, it's a lot easier to find the red ketchup bottle. But remember: all ketchup bottles are not red.