Full disclosure - I’m the Testing Team lead for Rocky Linux as a community volunteer.
I experimented with Playbooks months ago, looking to solve an unrelated problem for which Playbooks were not a fit. However, when it became apparent that there were problems with how we (Rocky) were tracking issues while testing new releases, I realized that Playbooks might be a good fit (huzzah for using lessons learned from past failures!).
Rocky Linux released 8.7 on Nov 14, and that was our first attempt at using Playbooks; we rolled a lot of lessons learned into our 9.1 release on Nov 26. I’m not saying we did anything “right” nor specifically “wrong,” but there’s a lot that I want to do better next time, and I would really appreciate feedback on how we could improve. (Also, I’m not saying anyone /should/ join the Rocky Mattermost, but it is open, so if someone wants to look explicitly at what we did and what issues we ran into, I am @stack there, and the ~rocky-release-v91 (archived) channel should be viewable by all, as we try to be as transparent and open as we can be.)
We try to be extremely thorough in our testing of every release. As such, we had 255 tasks in our checklist for 9.1, and during this release we identified a bunch more that we want to add next time. Why so big? Well, keep reading. I probably need to re-architect the Playbook workflow (and we are slowly but surely automating more and more, which will whittle the list down quite a bit).
Here are some of our “pain points” that I’d like advice on, please. Again, for clarity, I’m 100% open to the “that’s a terrible workflow - you should really change everything” advice, so long as one can help me understand the change.
1. There are some items where we don’t care who or how many people verify them; we just need someone to verify each one. Those are easy tasks, because the first person to “claim” one as they start can check off the item when they are done. The harder part is that we have a bunch of tasks that we /really/ want multiple people signing off on.
I thought a lot about this and tried a few things, such as “(Primary) Task A” and “(Secondary) Task A”, but wowza did that get messy. (Again, a lot of checklist items.) It got even messier when the same checklist is needed for each of the four architectures! So the way I proposed we solve this was to create a master template checklist, then ask each of the community members to clone the checklist and rename it with their name and the architecture they were testing on. This “solved” part of the issue, but it created other clutter.
Thus, in solving the primary/secondary issue, I ended up with a lot more checklists. Worse, many of the checklists could not be completed by everyone, so people had to “skip” items in their copies, which made even more clutter to scroll through (because items can’t be deleted, and skipped items are cloned as skipped items in the sub-checklists, which I really don’t like). It also meant that I might have two “identical” checklists, each with different tasks completed by a different person as the “primary,” and then a “secondary” checker came through and was able to complete all of them. This made it much harder to track who did what, and when.
Personally, in my ideal situation - I’d like a way to have each task checked / signed off on by two (or more!) separate people. I just don’t know how to do that without it getting really messy.
2. As mentioned in #1, the skipped tasks got SUPER messy and cluttered, making it hard for people to find information. The use of filters REALLY helped me out (so huzzah for the filters!), but it would be awesome if I, as the owner of the playbook, could delete tasks. I just couldn’t figure out how, and I do NOT want just anyone to be able to delete items at random.
3. A further issue with #1 made the clutter/mess worse. Release Engineering cuts RC1, and the Testing team goes through the checklist of items. Many tests pass, but we also find a problem. Release Engineering cuts RC2 to fix the issue. Not every test needs to be rechecked, but many do. I personally didn’t handle this well. We tried keeping everything only in chat, but things got lost easily (another issue, listed below), so I switched to creating additional checklists. But again, if I cloned a checklist, I had to skip a bunch of items (yet again, more clutter), or I had to create additional checklists with specific items that were essentially duplicates of previous checks, which added confusion. This was just a mess in general. I don’t yet know how I’m going to deal with this in the future, as I feel like everything I did was wrong when dealing with partial lists / re-testing of specific items. I’d LOVE feedback on how others handle this situation, especially since the Testing team did its job well: we found a lot of issues before the release, and thus we had FOUR Release Candidates in various stages of items that needed to be checked!
4. We had an issue where someone claimed a task, verified it, and checked it off. But then someone else (who was just a community member trying to help, not be malicious, but who was neither the owner nor a team leader) removed that person, unchecked the task, redid the work, and claimed credit. This was not a primary/secondary check, and maybe if we had a better way of doing primary/secondary/multiple verification this wouldn’t have been an issue, but two things:
A. It really hurts that I, as the owner and team lead, missed the first person’s contribution. No one wants to be in the position of trying to contribute and feeling like their time/effort was wasted. That is awful for everyone involved. (We’ve since reconciled.)
B. I love that anyone can claim a task but I REALLY wish there was some way that only that person or the Playbook owner could remove/reassign a task. That would have prevented this situation.
Maybe there’s a better way? Maybe there’s a history function I don’t know about, which would let me separate those who did work (and whom I want to thank and give credit to) from those who just chatted in the channel? (Many people simply joined and lurked without participating in any way, which is why I didn’t use the channel member list.) I dunno. But I want to make sure this situation doesn’t happen again, and I would love feedback on how to better track contributions so we don’t repeat this.
5. Last (for now!) but not least. One of the things we identified is that Rocky needs multiple Playbooks: one for Release Engineering, one for Testing, and one for Documentation (also responsible for PR notices, social media posts, etc.). But we don’t need them all at once, either. Is there a way to have sub-playbooks that only kick off when needed? Or would a primary playbook be better, where each playbook is kicked off when it is needed? If we use multiple playbooks, can they all be in one channel, or would multiple playbook runs in one channel be way too much? Thoughts on how to handle multiple teams (each with their own needs) working on the same “release”?
Sorry this is so big but I really would like to do this better next time and would really appreciate feedback from those already doing Playbooks for large teams / releases.
And I want to leave off with a HUGE shout-out to everyone on the Mattermost team and the Playbooks team. Using Mattermost as a community tool to organize is amazing, and I feel Playbooks are a much better leap forward than the previous methods we tried. There are things I’m obviously trying to improve on, but this time the tools made things MUCH smoother.