Sunday, March 18, 2018

Retro for the past week

Keeping up with the deal with my colleague Mili for a total of 52 posts during 2018, this is my sixth one (Mili blogs here). I am not really in the mood to finish any of my draft versions, and I don't have anything solid in mind that I would want to write about. So I thought to do a little retro of my past week, with the classic glad - sad - mad pattern. (There's a nice machine to help come up with other retro ideas here, btw.)

Here goes:

  • personal retro week 11
    • Sad
      • Had to spend a lot of time solving issues and questions coming from support and stakeholders
      • A lot of people away from office this week
      • Lunch at the new restaurant sucked
      • A few things on "my own backlog" were stuck the whole week
      • Lost the query for a report I did a couple of months ago that was requested again; need to redo it
      • Could not do the mob programming sessions I had booked
      • Another problem with Git (root cause: I had the wrong branch as base)
    • Mad
    • Glad
      • Was able to squeeze in a few hours doing some programming on improving an existing feature
      • Tried one new restaurant for lunch
      • We were able to get quite a lot further with the goals in the workgroup I'm in
      • Was able to test and provide feedback on some things quite quickly after the devs finished them
      • Sent out a team stakeholder satisfaction survey, and got high grades on it
      • Came up with a maybe-funny idea for a presentation I will give in two weeks
      • Survived one very hard meeting, and got good advice after it from my team members
      • Got help on a git issue once again
      • Tried KanbanFlow as a kanban board for a workgroup I am in; like it so far
      • Got help from a team member to understand the reason why another one was maybe upset
      • Nice interactions with our support team
      • (Copying the mind map and pasting it as text comes out as a really nice bulleted list. And the productivity beta version works nicely.)

Then a few possible action items for next week:
  • Arrange a meeting to show how I regularly work with support (with some example case), and invite the whole team (or maybe two meetings with people split in two) as optional, in case someone would like to help there a bit
  • Push some of the things from my "own" backlog for the team to look at
  • Book new mob sessions
  • Start saving the queries I use more often to the useful queries file I have in git 
  • Choose some implementation task for next week too
  • Book a retro for our work group by the end of next week.

That was fun! I feel a lot better about the past week, and I'm looking forward to the new one :)

Sunday, March 4, 2018

Testers doing test automation - is that the most important thing for you to do right now?

I've been thinking quite a lot about testers moving to do test automation. Lately because of these three things:

1. European Testing Conference, a great testing conference I attended a couple of weeks ago. It is very cool for many reasons: the way the speakers get compensated, the focus on the conferring side of conferences making it very easy for people to discuss stuff, the way the talks are chosen, and the fact that a lot of developers join too. So anyway, when I was talking with several of the attendees, it was a bit strange how it was easier for me to talk about product development as a whole with the developers, whereas with the testers the talk more naturally moved to automation and tools. Also in the open space, I think the majority of the topics pitched were automation or tool related, and quite few were about the process or the customer-facing side of product development.

2. In my company there are a lot of different products and product teams, and in an attempt to share some knowledge between the teams there are different guilds: a devops guild, an architecture guild, an API guild, and a testing guild. Usually when the testing guild meets, it is mainly testers from different teams participating; someone introduces a topic, with a little bit of follow-up discussion. And three of the last four topics introduced have been quite automation/tool centric. (The non-automation-centric one was my topic.)

3. This week I read a LinkedIn post from an old colleague titled Runnable Specifications are here, never speak about test cases anymore. In it he made several good arguments against step-by-step test case execution by a tester, and offered the role of executable specification writer in return. So kind of shifting left and doing automation.

So I don't have anything against test automation (well, I do make a case against stupid automation, but anything stupid is stupid so that doesn't really count), but I do have a little trouble understanding why so many testers think that's what they should be doing now. I mean, I do not try to argue that anybody should write and maintain test cases and then "execute" them manually. But that the only alternative to this is for the tester to automate those test cases? I don't buy it.

First of all, I'd say that most commonly there are a lot fewer "testers" than there are "programmers" on a product development team. And I'd say that commonly the people identified as "testers" are not as good at programming as the ones who identify as "programmers". So I think the heavy lifting of test automation (which is also programming) should be done by people who know programming. Because they can, you know, program. And if they do test automation, they might make the app more testable in the first place. They also know how stuff works, so they can automate the tests faster. And if the same people work on the code and the test automation, it ain't as likely to create bottlenecks and missing automation. And if it is already ok that programmers handle the unit and often also the integration level test writing - what is so different on the end-to-end level? Testing expertise comes in handy here of course, as testers might be better at providing stronger assertions, better test data, better test scenarios, etc. So teaming up might be a really nice idea (it almost always is ;) ). But who-does-what should be based on motivation and skills - not based on some label.
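To make "stronger assertions" a bit more concrete, here is a tiny illustrative sketch. The discount function and its rules are invented for this example, not taken from any real product:

```python
# A hypothetical function under test, invented for illustration.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A weak assertion only checks that something came back:
assert apply_discount(100.0, 10) is not None

# A tester's instinct adds stronger checks: exact values, boundaries,
# and invalid input.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(100.0, 0) == 100.0
assert apply_discount(100.0, 100) == 0.0
try:
    apply_discount(100.0, 150)
    assert False, "expected ValueError for out-of-range percent"
except ValueError:
    pass
```

The point is not the code itself but the pairing: a programmer can wire the test up fast, and a tester can make it actually bite.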

Secondly, thinking of the biggest problems in any of the products I have ever worked on: those would not have been prevented with more test automation. Many things might have been easier, sure. And many bugs would have been caught earlier, I believe. But the biggest problems? No. And this leads to a crazy statement.

There is more to product development than programming and testing.

I don't want to diss programming. It is an art. An art I ain't really good at (at least yet). But there's other stuff. There are alternatives to brainless test execution by a person, or to making the brainless test execution happen automatically. I'll mention a few.

Product management. This is what makes or breaks the product. A good idea done shitty might still give people value. But a shitty idea done in the most beautiful way, with beautiful automated tests, is still a shitty idea. It is like an ass made of silver - it's shiny but it's useless. So this is a place where we need more emphasis! More brains engaged in making sure we pick the right stuff to do, that we focus on impacts, that we focus on not doing too much, that we pick the right level to prototype, that we do not overcommit too early, that we do stuff that is of value. And this is not the responsibility of a "product manager". It is the responsibility of everyone in the product team! And this requires work.

Stakeholder involvement and communication. If I had to pick one thing that has caused the most problems in all the stuff I have been working on, it would be lack of communication. Within the team, yes, but especially between the team and its various stakeholders. We need people who not only consider what stakeholders need and want, but who actually do the work of discussing, asking, asking again, demoing, listening, telling, and managing expectations with the stakeholders. Sales, marketing, support, customers, end users, integration partners, and whoever else has an interest in your product. They need to be heard, and they need to know what's happening. And the team needs to do this. And this requires work.

Planning. We need people to make sure we are planning ahead - not too much! But not too little. Making sure we have an idea of the vision, that we think we know the next couple of steps, and a bit of the risks and alternatives. And that the team always has a shared understanding of what we are going to do. And creating this ain't easy. It requires work.

Process. It is easy to settle, continuous improvement is not. It is easy to agree to stupid procedures given, it is not easy to explain why you need to do differently. It is easy to believe that everyone is happy, it is hard to know if they are. It is easy to create silos, it is hard to break them. And it is easy to fallback to routine, it is hard to really start doing something in a different way. This requires work.

Monitoring and analysis. What is your definition of done? Acceptance tests pass? Documented and released to prod? That is not done, that is the start. Actually following what is happening in production, analyzing usage, digging for problems, thinking of improvements and making sure they get done is a super important part of product development. And it requires work.

Exploratory testing. Being there constantly looking for problems, looking for alternative ideas and things we did not think of, providing constant feedback to the team, and learning the whole product. It. Requires. Work.   

And a lot of other stuff.

Again, I don't want to say that (test) automation is not important. Hell, I would like to do more programming myself too, because it is fun! And I guess it is easier to add to employment ads and CVs. But there are also other things that a product development team needs to do.

So the next time you start writing those automated tests, I want you to think - is this the most important thing to do right now?

Sunday, February 18, 2018

Mob programming - the heaven of a tester. Part2: What happens in a mob

This post continues from Mob programming - the heaven of a tester. Part1: the beginning.

I will next describe the formula of a rather common and rather good mob development session. It consists of four parts: the initiation, the planning, the implementation, and the finishing.


1. Initiation

We started arranging the mobbing sessions by booking a few 2-hour slots for the week ahead, where our aim would be to do some work in a mob. This was especially important in the beginning, as the mobs did not seem to happen ad hoc. Even if pretty much everyone always liked and wanted the mobs, people often did not arrange them by themselves. So this kind of forced us to do the stuff we like to do :)

Now, after starting to make mobbing a habit and starting to understand where it is especially effective, we have gotten better at initiating mob sessions ad hoc. So these days it is the most common way to start implementing a solution for problems that seem hard to tackle, for something that is totally new, or for anything that appears to be big. We have also started using the mob as a way to end arguments in a nice way. Like after arguing/discussing something too long in a meeting or in a Flowdock thread, someone just suggests to mob it and then we go.

2. Planning

We most commonly start a session by drawing a picture of the thing we are trying to accomplish. If working in the same location we use a whiteboard; if remote, we use some simple paint tool. (Lately a couple of our devs have been trying out an iPad for easier drawing and it looks really nifty, but we don't have much experience with that yet.) So someone grabs the pen and starts drawing the parts of what we are about to change: a simple drawing showing the user interactions, and the main components and their interactions in the scenario we are thinking of. Then we switch to a pen of another color and start drawing the new parts needed. And finally we decide which of those parts are kind of separate tasks, and decide on the initial order in which to implement them.

Example result from whiteboard planning

Here it is often important to try to notice when you are starting to overplan on the whiteboard. The aim should not be to fully think out all the details, but to have a goal to start moving towards. The details will get solved while implementing, and the goal might as well change while at it. You can keep in mind the phrase from Woody Zuill: "it's in the doing of the work that we discover the work we must do".

3. Implementation

In the beginning of our mobbing days we used to switch the person on the keyboard quite often, like every 7 minutes. But these days we may often just pick one person who does all the writing during the whole session. I don't see a problem with this, as long as that person does not start writing something that has not already been discussed.

On the keyboard we usually (should run the existing tests, which we often forget, and instead move directly into) start by writing the tests, which is a smooth continuation of the planning. We usually like to start with the user-facing parts, as there we really need to think of the inputs and outputs of the whole flow. This is hard, but usually leads to a smaller first increment than what you would get by e.g. starting the implementation from the db model.

Then after the tests are done (try not to write too many at first - remember overplanning!), the writing of the actual code may be the easiest part of the session, partly because the hard business problems have already been discussed thoroughly, and partly because there are so many capable coders present who know all the tricks. So we do not spend THAT much time in Google...
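As a sketch of that test-first flow: the feature here (an order summary shown to the user) and all the names are hypothetical, invented purely to illustrate the order of work.

```python
# The mob writes the user-facing test first: it pins down the inputs
# and outputs of the whole flow before any implementation exists.
def test_summary_lists_items_and_total():
    out = format_order_summary([("coffee", 2.50), ("bun", 1.25)])
    assert "coffee: 2.50" in out
    assert "Total: 3.75" in out

# Only then comes the implementation, which is often the easy part
# because the behaviour was already discussed at the whiteboard.
def format_order_summary(items):
    lines = [f"{name}: {price:.2f}" for name, price in items]
    total = sum(price for _, price in items)
    lines.append(f"Total: {total:.2f}")
    return "\n".join(lines)

test_summary_lists_items_and_total()
```

Starting from the user-facing output like this tends to keep the first increment small, compared to starting from the db model.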

4. Finishing a session

Our mob sessions usually last about 2 hours, and we would not do more than two of these a day. Mobbing is effective but also quite mentally consuming, so doing too much of it may take its toll.

So after the 2 hours are up, we (commit and push if we haven't before, which too often might be the case, and) agree on how to continue with the remaining tasks. If everything seems to be clear, one person might just pick it up from there onwards. We might also agree that someone continues on their own on some stuff and then books a mob for the next, more challenging things. Or we may just agree to continue with the same people the next day.


There it is, the formula of a normal mob. "Why we like them so much and find them so effective" and "what is the tester's role in a mob" I will go through in the next post in the series of Mob programming, the heaven of a tester.

Now I should go and book a few mobs. I recommend you do the same!

Friday, February 2, 2018

Mob programming - the heaven of a tester. Part1: the beginning

Have you heard of mob programming? Probably you have. If you haven't, I stroooongly recommend you take a look, or a listen, to Woody Zuill describing the process.

My team has been doing mob development (we call it that rather than mob programming) for two years now, and it is superb. We don't do it all the time, but when we do, it is inspiring to be part of a mob. And I think it is very efficient too! Also, in my role I have several reasons to argue why every tester needs a mob (and every mob a tester), and I want to tell about what usually happens in our mobs, but those are future blog posts. This time I will tell the story of how and why we started mobbing.

I first heard of mob programming from Woody Zuill at the Tampere Goes Agile conference in 2014. It sounded kind of crazy, five people working on one machine, but at the same time many of the things I heard really resonated. So I was intrigued, but at that time I felt this was beyond my reach and thus didn't actively pursue it. But the seed was planted.

Fast forward a few years: in our rapidly growing team we were thinking of ways to continue working as a team, and how to break some silos that had formed. So we came up with this idea of work groups - kind of dynamically splitting into groups of 3-6 people who would take full responsibility for planning, implementing and releasing specific things. And here I thought that mob programming could in some way fit nicely into this.

But how to get there? I was the most experienced one, with some reading and participation in a single Agile Finland workshop, and I wasn't too comfortable leading with just that. But then at another meetup, I think it was Tech Excellence, I got to talk about this with Llewellyn Falco, who was immediately ready to come by and facilitate a 2-hour try-out session. And my great colleagues were all up for that.

So we did it, and people liked it. It was fun! Nobody was thinking of going full-fledged extreme mob programming, but everyone was curious to learn more. Luckily, we happened to have a team gathering coming up, so we decided to do this there for two days straight - and we did. For two days we broke into groups of 4 people & 1 laptop to work on some features: first discuss why to do it, and then go straight into doing it.

After the two intensive days we had a little retro and made a pretty much unanimous decision to continue doing this. Every week, at least once, each group would do a mob. That decision has now held for two years - some weeks we do a lot of mobbing, some weeks little. But we do it. And it is the heaven of a tester.

I am not saying that this is for everything, or everybody. But I believe everybody should at least take a shot at it. And as a first step, I am more than happy to recommend contacting Llewellyn, who is a brilliant coach for this.

Just do it.

Wednesday, January 17, 2018

Answers to common questions asked of a "tester"

Here's another chat I have had many times. It starts with the golden and notorious question of "how much should you test", and then drifts a bit from there.


How much should you test?

"Well, you know, I once worked in a team doing a medical device where releasing frequently was not really possible due to the many constraints. Even doing a small release would cost a lot. And it being a class C (death or serious injury may occur) device, any issue found in a released product would have been a pretty big deal. So this meant that we took unbugginess very seriously, and thus that I had a lot of time to spend testing the device.

And boy did I.

Sometimes I spent almost the entire week just in front of the machine testing: thinking of ways it could fail, unrolling all the test techniques I got in me, learning the ins and outs of the product. And then one day when the new release was done... several problems were found. So taking into account that we really bled our hearts out for this and problems were still found, I think you shouldn't really test too much, as you won't catch all the problems anyway.

And then again, sometimes, especially these days, we release stuff having spent close to no time testing, and no problems are found. But here I often feel that without the testing you miss out on the chance to learn, to get ideas, to make suggestions, and to give feedback to the developers and the business. So you shouldn't really test too little either, because it ain't only about catching bugs.

Then again, if you consider testing "done" after you have deployed to production, think again! In production you have a beautiful chance to investigate if and how the new stuff is being used, and whether it is working as expected. And be ready to react if/when something unexpected happens. Testing in production is often at least as (if not more) important as the testing you do before the release. And if you can't see or affect what happens in production, then there's something missing from the implementation: proper logging, pilots, toggles, communication with customers, etc."
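As a minimal sketch of the kind of production visibility mentioned above, here is a feature toggle combined with logging. The toggle name, the pricing function, and the 5% discount are all invented for illustration; in a real system the toggle would come from config or a toggle service:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pricing")

# Hypothetical toggle store; a real one would be config-driven.
feature_toggles = {"new_pricing": True}

def price(base_price: float) -> float:
    if feature_toggles.get("new_pricing"):
        # Log enough to see in production whether the new path is
        # actually used and how it behaves, so you can react if
        # something unexpected happens.
        log.info("new_pricing path used, base=%.2f", base_price)
        return round(base_price * 0.95, 2)
    return base_price
```

With the toggle you can pilot the new behaviour for a subset of traffic and flip it off without a release; with the log line you can actually follow what it does out there.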

So what about the significance of test automation? That must play a role too on how much you should test? And what about risk? And what ab.. 

"Yea, yea, I know. Of course it depends on the context, on the level of automated tests, and on the risk of the change. It always depends. Everything always depends. But you get involved, you do what is needed at the given time, you make mistakes, you try to do better next time. And you will figure it out."

How about being involved early then? Like testing requirements?

"Well... I don't really like to talk about "testing requirements". Sounds like reading through Excels trying to find sentences that don't have the word shall... But of course you want to be involved in all parts of the product development. Starting from discussing why we should do this thing and what impacts we would want to achieve. Then discussing and working together on what we do, and how we do it. Pair or mob on the code, do code reviews, talk with the users, the whole lot. I think doing stuff together from the start is in the long run so much more effective. And so much more fun!"

Aren't you afraid of losing objectivity then, when testing something so familiar?

"Not really. I am more afraid of testing something I do not understand. Or of testing irrelevant things because of not knowing the implementation. I am also a lot more afraid of solo work than of not being objective. And anyway, I think you can still be objective while testing, by switching your approach while exploring - e.g. by running through different scenarios. And it is really great to do some mob testing here too."

What do you call yourself these days? A tester? A QA? A developer?

No one answers better than the prisoner: :)