Wednesday, November 1, 2017

It was a good day


Decided to write down the stuff I had done today (or at least the stuff I remembered doing). And as paper doesn't store so well, I thought to write it here as well. 

So here it is, in semi-random order:

  1. Reviewed code for a now-implemented change that had been planned a couple of days earlier. Mainly to understand how it had been done, but still dropped two comments: one on a missing log event, and another on the naming of a method. Also asked another team member to explain a thing I didn't totally understand (it was a lazy loading feature, so now I know).
  2. Coffee
  3. Did some happy path testing for the change; it looked good.
  4. Discussed the status and next steps of a cool new feature started yesterday. It seemed the devs were already far along with the implementation and not that much was missing.
  5. Got pretty excited about the feature
  6. Identified and discussed a possible fix with a dev for a thing I had heard about yesterday from a customer. Replied back to the customer
  7. Tested one new feature a bit and noticed a missing validation on it; discussed it a bit with the dev, and he wanted to fix it.
  8. Discussed a bit with two testers about something I had been introducing to them yesterday, and helped out with some test script running problems
  9. Lunch. Roasted beef, pretty good.
  10. Discussed status on three other things with three other devs, and also what could be next up on the todo list. They picked one, and agreed to have a little starting mob for it (a dev booked one for tomorrow)
  11. Coffee
  12. Helped out an integrator with problems they had using one of our API functions. They sent me some PHP code. Asked a dev to help me out, and based on that, sent them a suggestion on what to change. Got a response that it worked. (The dev had worked with PHP 10 years ago.)
  13. Sent mail to a person in the organization suggesting that we should use Flowdock more for cross-team communication, and stop using other tools for the same thing (Lync, Skype, Hangouts, Jira). Much frustration there.
  14. The issue in step 6 was fixed by a dev. Low risk, so he pushed it to production.
  15. Had a short discussion about test strategy on a project
  16. The two testers pairing on a thing reported a problem they had hit while testing. Realized it was a potentially nasty problem and discussed it with a dev.
  17. Noticed the dev and I were 5 minutes late for a meeting with another integrator. Went in, discussed a little about a new service of ours they are taking into use, and demoed one of our new features that might be interesting for them. They were interested
  18. A dev said that the cool feature from step 4 would be pilot ready. Exciting! Can't wait for us to tell our stakeholders about it :)
  19. Coffee + water (I always forget to drink water)
  20. Helped our customer support with three problem cases
  21. A dev said he had fixed the issue from step 16. Fast! Tried it and it seemed to work. Production tomorrow, likely? Another dev noticed another related problem in the scenario. Fix tomorrow, maybe
  22. Tested the case from number 1 a bit more. Learned nothing new, really.
  23. Time to head home. Plenty of exciting stuff to do tomorrow :)

Definitely not the most focused day of my life, but definitely a good one. 


Sunday, October 22, 2017

Back on the wagon



I reread some of my old blog posts and was a bit sad that there was nothing to read from the past three years. So I thought I need to force myself to start writing again. So here goes.

For almost three years now I've been working in a really cool team, with a lot of freedom (and some responsibilities) to do a lot of different things. And quite a lot has happened in our team during those three years. We have moved from
  • one-month sprints TO a sort of one-piece-flow-based kanban with a weekly cadence 
  • one big team TO (semi) self-organizing work groups
  • a lot of solo work TO working in mobbing style
  • PM-led reviews TO team-led show sessions
  • an unappreciated monolithic architecture TO an appreciated monolith + a bunch of other services
  • nebula TO AWS
  • + many many more things TO other kinds of things
And I'm thinking of writing a bit on all of these. But many things have happened in me as well. 

First of all, I have stopped clinging to the thought that hands-on (testing) work is the only important thing for me to work on, and given in to the thought that the most important thing for me to work on should be the most important thing that needs to be done. Be it communicating, coordinating, process thinking, planning, testing, analysis, coding, teaching, documenting, making coffee, or whatever is needed. Testing is great and a great "tool", but not the solution for every problem. Neither is programming. 

Also, of course, my thinking about testing has changed, and changes all the time. I gave a talk a year ago where I discussed these kinds of shifts in my testing skill needs over the past 10+ years:

Used to be important to... → Has moved into...

  • Figure out what has been built → Build together
  • Test hard things → Improve testability
  • Good bug reports → Good discussions
  • Act like a customer → Test with customers
  • Write test cases → Use data to help you while testing
  • Test requirements → Test ideas
  • Make test plans → Do continuous exploratory testing
  • Estimate testing efforts → Enable fast feedback
  • Block premature releases → Enable premature releases
  • Manual regression tests → Automatic regression tests
  • Do your best in a hard work environment → Do your best to improve the work environment

So a few more things to write about there. 

Also, I just gave a talk on 10 tools that I use to aid me in my line of work. That was a very fast-paced talk, and I think all of those tools should get a post of their own.

And I also want to start writing about the small big things that happen every day, while mobbing or testing or planning or whatever. The things that make working as a product developer so super interesting.

But now I'll just publish this, cause otherwise I will never get into doing any of it.


Thursday, February 19, 2015

Regulations on medical software development kill more than save


After two years as a sw testing specialist in the medical device industry, I decided to quit my job. The reasons were many, but perhaps the biggest one is the insane situation that regulations have driven the industry into.

As a summary, I have come to believe that these regulations kill more people than they save. This is because of:
- the unbelievable opportunity cost
- the delay in time to market
- the killing of innovation
- the restriction of competition
and because, regardless of all those regulations, you can still fake it all.

Actually it is easier to fake than follow.

The main idea in the regulations is that everything needs to be planned in detail, executed accordingly, and documented to have been done this way. And by planned in detail, they mean that there needs to be a documented plan in which all the actions to be taken are specified. If you do something that was not in the plan, you are in trouble. If you don't do something that was in the plan - boy, are you in trouble. The only way to create a well-documented plan that has everything you will do and nothing you will not is a plan that is written afterwards. So the only way to do audit-proof plans is to create them after the stuff has already been done. Smart.

Everything that is done also needs to follow well-established processes, of course. And you need to prove that the processes have been followed, too. So the audit-proof way of creating processes is to add stuff into them that can easily be proven to have happened, and that doesn't generate too much documentation load when proving it. This means that the processes are written for the auditors, and not for the people who should follow them. So the manageable way to do the process dance is to have the written process generating the documentation, and an actual process generating the product. Efficient.

And when the auditors pop by, they always find issues (probably the easiest job in the world, finding them), which need to be rectified by including something new in the process, which again grows the part done for the auditors. If you actually find something you should improve on, you must not write it into a process, because then you need to prove that you are following it, which means arbitrary metrics and more documentation. And by including stuff in the process, you also hang yourself on not being able to improve and change it efficiently, so better not to include it. A real atmosphere for continuous improvement.

You may at this point be thinking: what is so bad about writing documentation? I'll tell you what. First, anything you write into a document needs to be carefully considered for how the auditors would interpret it, which is really burdensome. Secondly, all documents need to be reviewed and authorized after EVERY change you make. Which is horrible. Thirdly, there is a 99% chance that no one will ever read those documents, which is not the most motivating thing either.

I could rant about so many things, but as this is a testing blog I'll switch to that now.

Nothing, absolutely nothing, is as badly misunderstood by the auditors as testing is. Testing for them means verification of requirements via preplanned, pre-reviewed, extremely detailed test cases with descriptions of what to do and what the expected result is. The tests must be run exactly as detailed, and objective evidence needs to be provided for each result. So if you go and execute a test case, and everything works perfectly but some step cannot be followed exactly the way it is written, you need to rewrite, re-review, and rerun the test. The only way to survive this is to write the test case while executing it on the already-built system, writing the step-by-step instructions as you go. And even then, it is really hard to write, and really burdensome to maintain. The tests are not designed or executed to find problems, but to prove that those tests can be run. And because of the huge amount of work put into them, you generally like to keep the number of new tests as minimal as possible. And the actual quality or coverage of the tests is not that important either.

This has nothing to do with sw testing, but this is what the auditors require. As a funny anecdote, while I was trying to push for higher-level tests and more skills for the testers, we got a warning from the auditors stating that we need more detailed test cases, detailed to the level that anyone could run them. What probably makes it even more painful is that, as there are a lot of scientists in the medical domain, there seems to be a general understanding that good testing means and requires exploration and investigation. But that is not at all what the auditors expect. And it is hard to keep on pushing and pushing to do a good job in testing, while what is expected from you is just a demonstration of bad checking.

Funnily enough, nowhere else in my work as a sw tester have I tested so little as I did here. Or perhaps funny is not the right word.

Talk about a tester's hell.

I must raise my hat to my fellow colleagues who keep on trying to do the best possible work under these conditions. Who try to do everything with the customer in mind, while trying to please the auditors. I could not, and will now try to do my bit in changing the way auditing is done in the medical domain from the outside. I hope that I can return to the domain again, either as a stronger person, or preferably to a domain where the regulations actually work.

If any auditor by any chance happens to read this: let's talk, please. What we need in this domain is more communication and collaboration, and focus on what it is we are really trying to do: provide safe, working products to people who should be able to rely on them.

Thursday, June 19, 2014

Testing playbook #1



So it's summertime again, and my year-long spell as a scrum master is coming to an end. So starting in August I'll be a full-time tester again! (Or as much of a full-time tester as you can be when working in a scrum team..) I'll likely write up some experiences on being a scrum master later (summary: hard, abstract, frustrating, fun), but thought I'd now start writing about testing again. I'll start by sharing some of the tactics I often deploy when approaching testing situations. So this should become a series, until I get bored of it.

So I'll start with my classic, my "master" tactic, which often acts as a framework for other, more subtle testing tactics. This is a rather common way for me to approach situations that are more on the exploratory and less on the confirmatory side, and/or situations where the thing (it may not always be the system) under test is not yet that familiar to me. It's rather simple:
1. Take a tour
2. Decide the target
3. Go with the happy flow
4. Divide and conquer
5. Give em hell

To open it up a bit, taking the tour means a shallowish walk-through of the building blocks of the thing I am looking at. The aim is to get a general picture of the potential parts, interactions, scenarios, etc. that I could be testing. After doing this for a while, I then decide my target.

Deciding the target is often really essential, and it has a few important aspects. Firstly, it should be somehow relevant to the general testing mission. Spending a lot of time on an unimportant piece is usually not a good idea (although spending some time on such pieces often may be). Secondly, the size of the target should reflect the amount of time available for testing before the next break, in such a way that you have to push yourself to generate enough ideas for the whole time. So, a rather small piece. Having too many options to choose from often makes my testing sessions a bit too vague, and actually makes it harder for me to come up with good test ideas and observations. So make it small enough. Thirdly, it is usually good if you are at least somewhat curious about the piece. That makes me work so much better. Then you start with the happy flow.

Going with the happy flow means exercising your target in the easiest/most common way that you can imagine. For some people, which test (happy, nasty, interesting, boring) gets executed first may not make a big difference, but for me it does. In both testing and life, I have a strange habit of usually leaving the most interesting things for last. E.g. if I eat a bag of candy with many different candies, I want to be sure that the last thing in the bag is (one of) the best-tasting candies there are. Or the last piece of the steak I'm eating should be the juiciest, biggest piece. Why? Because otherwise I may lose interest. And coming back to the testing bit: after I lose interest, I become a much worse observer. So, generally I want to go with the happy flow first, and I usually hope it succeeds rather well (although in many past projects in my life, I think I have never even been able to get past the happy flow, because even it has never worked).

Divide and conquer is something I do really often. Basically it means repeating the same thing over and over, while modifying only one (or perhaps a couple of) variables at a time. Inputs and outputs are a classical example of this: change one input at a time, and take a look at what happens to the output (while of course observing everything else too - we're testers anyway, right?). Are the outputs behaving as expected? There's a minimal sketch of this loop below. This btw is also my common way of trying to reproduce an intermittent issue. Try to do the exact same thing over and over again, and change as little as you can. With the non-reproducibles I usually get a bit obsessed with this, as in doing EVERYTHING exactly as it was when I first experienced the issue, like the position I am sitting in.. (btw, inventing a non-reproducible issue and then setting out to find it is a powerful testing tactic too. More on that in a later post)
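
To make the one-variable-at-a-time loop concrete, here is a minimal Python sketch. The system under test, calculate_price, and its parameters are made up for illustration; the structure is the point: hold a known-good baseline, vary exactly one input per run, and watch what the output does.

    # Hypothetical system under test: any function (or API call) whose
    # output you can observe. Replace with the real thing.
    def calculate_price(quantity, discount, country):
        vat = 1.24 if country == "FI" else 1.0
        return round(quantity * 9.99 * (1 - discount) * vat, 2)

    # A happy-flow baseline that is known to work.
    baseline = {"quantity": 1, "discount": 0.0, "country": "FI"}
    print("baseline ->", calculate_price(**baseline))

    # For each variable, a few alternative values to try, one at a time.
    variations = {
        "quantity": [0, 2, 999999, -1],
        "discount": [0.5, 1.0, -0.1],
        "country": ["SE", "", None],
    }

    for name, values in variations.items():
        for value in values:
            args = dict(baseline)  # start from the known-good state...
            args[name] = value     # ...and change exactly one variable
            try:
                print(f"{name}={value!r} ->", calculate_price(**args))
            except Exception as e:  # a crash is an observation too
                print(f"{name}={value!r} -> {type(e).__name__}: {e}")

The same loop works just as well against an HTTP API or a command line tool; the essential part is that every run differs from the baseline by exactly one thing, so any change in the output points straight at its cause.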

Finally we get to the Give em hell part, which naturally means giving the target everything you've got in order to try to break it. It could mean extreme flows, abysmal inputs, scarce memory conditions, switching the system time, breaking connections, etc. (a sketch of the abysmal-inputs flavour follows below). Basically here you do things where you expect the system to fail, and see how gracefully that happens. The problems you find here may or may not be that relevant, but boy will they be interesting :)
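
As a small illustration of the abysmal inputs part, here is a Python sketch that throws a pile of hostile values at the same hypothetical calculate_price from the sketch above. The list is just my own grab bag of values that something, somewhere, has choked on.

    # Assumes calculate_price from the divide and conquer sketch above.
    evil_inputs = [
        "", " ", "\t\n", "a" * 1_000_000,  # empty, whitespace, huge
        "'; DROP TABLE users; --",         # injection-looking strings
        "<script>alert(1)</script>",
        "Ω≈ç√∫ 🙂",                        # unicode and emoji
        -1, 0, 2**63, float("nan"), float("inf"),
        None, [], {},                      # wrong types entirely
    ]

    for value in evil_inputs:
        label = repr(value)[:40]  # keep the log line readable
        try:
            print(label, "->", calculate_price(quantity=value, discount=0.0, country="FI"))
        except Exception as e:
            print(label, "->", type(e).__name__)

Whether a TypeError here counts as a bug depends on what sits between the user and this function; the interesting part is seeing which failures are graceful and which are not.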

So that's my very common testing tactic. In football terms, it's my 4-4-2. Next time I'll share my common tactic for starting to test something totally new that I know very little about beforehand. Teaser: I do not start by reading the documentation.