
(This is a continuation of the “Behind the Scenes with the TeamPulse Team – Part 1” post.)

The Backlog

Backlog Management was a major feature in the 2011 R2 release. This is yet another example of the team releasing new features to our dogfood server as quickly as possible in order to start realizing the value those features provided. In this case, the team immediately started using the backlog view to prioritize the combined set of ALL stories and bugs, using custom views to help narrow down the large amounts of data we worked with on a daily basis, and using tagging to help categorize items outside of our area tree.

[Image: backlog items with custom tags]

Yes, that’s right – the team has special tags for my requests, as well as requests that come from Vassil, the Co-CEO of Telerik ;-)

The backlog was very important to the team for a variety of reasons. First, it allowed the product owner to focus on overall prioritization – this means prioritizing bugs along with stories, risks and impediments. The “stuff” highest on the list became the next block of work that would be scheduled. The product owner, Steve, made use of drag-and-drop prioritization ALONG with MoSCoW (Must Have, Should Have, Could Have, Won’t Have) priority classifications to help figure out what to do next. Of course, the team’s backlog changed almost daily – bugs continued to be entered via our forums and from internal testing, and stories were re-prioritized based on sprint review sessions, resulting in a very fluid and constantly changing target (which is why we absolutely LOVE agile). All of the extra filtering functionality, along with tagging, came from internal teams at Telerik, especially the TeamPulse team.

I really wish prioritization were as easy as one feature having more value than another, but in reality this is still something we wrestle with on the team. For the most part, the way the team viewed priority also took into consideration the “cost” (i.e., the estimate) of a feature. A feature may seem very important; however, if it is so large that nothing else can get done in an iteration, its priority may go down as a result. The team found that only by looking at value AND cost/risk could you truly prioritize the backlog.
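The “value AND cost” idea can be sketched in code. This is purely an illustrative model – the class names, the 1–10 value scale, and the value-per-cost ratio are my own assumptions, not TeamPulse’s actual prioritization logic:

```python
from dataclasses import dataclass

# MoSCoW classes, ordered from highest to lowest priority.
MOSCOW_RANK = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

@dataclass
class BacklogItem:
    title: str
    moscow: str    # MoSCoW classification, e.g. "Must"
    value: int     # relative business value, e.g. 1-10
    estimate: int  # relative cost, e.g. story points

def sort_backlog(items):
    """Order by MoSCoW class first, then by value-per-cost within a class,
    so a very large item can drop below a cheaper one of similar value."""
    return sorted(
        items,
        key=lambda i: (MOSCOW_RANK[i.moscow], -(i.value / max(i.estimate, 1))),
    )

items = [
    BacklogItem("huge must-have", "Must", value=9, estimate=40),
    BacklogItem("small must-have", "Must", value=8, estimate=2),
    BacklogItem("nice-to-have", "Could", value=10, estimate=1),
]
ordered = sort_backlog(items)
```

In this sketch the small must-have sorts above the huge one (value-per-cost of 4.0 versus 0.225), which mirrors the point above: sheer importance isn’t enough once cost is on the table.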

One feature in particular that was used a lot by Steve was Backlog Views. The idea behind backlog views came from the desire to save the settings of the backlog grid (columns, sorting, filtering) so that you can quickly come back to that view at some other time. The TeamPulse team decided that it would be nice to make these views public – so that when Steve created a view he could share it with the rest of the team. In fact, this is exactly what Steve did. During the R3 planning exercise (which, by the way, was very iterative and was done over the course of a number of weeks), a number of epics were identified (more on epics and stories later) that fit into the theme of R3. Steve tagged all of these epics with the “R3 Epics” tag and created a view that let everyone involved in R3 planning quickly see what the working set looked like. We then used drag-and-drop prioritization and high-level estimation to figure out sequencing and overall priority. In fact, according to Steve, “it’s the only screen I really use.”

Steve and I weren’t the only people who used the backlog screen. During sprint planning, the team used the backlog quite a bit – to decompose stories into sub-stories, fine-tune prioritization, and enter estimates. This area within TeamPulse really became a pivot point for the entire team.

Managing Bugs

Yes – the TeamPulse team has bugs, just like every other software development team in the known universe. Bugs are first-class citizens in our world, and as a result the team embraced a “two-phase” bug management technique – bugs were first entered into the system, then triaged to assess and assign severity. In fact, the triage functionality in TeamPulse came directly from this need. In essence, anyone can record a bug, resulting in the system raising a flag that indicates that “something needs to be triaged.”

[Image: the triage flag showing bugs awaiting triage]

In this case (above image), four bugs need to be triaged. At this point, the team attempts to assess each bug – they do this by trying to reproduce it, assigning a severity (we have a set of rules that guide us in this step, so we all share the same definition of what a “critical” bug really is), assigning it to the correct area, associating it with the correct user story (or stories), and prioritizing it properly. Bug triage happens as often as possible. The team initially had a rule that as soon as the triage flag hit 10, they would begin triaging. In reality, even 10 was too high – so this process became much more regular and is now part of the daily cycle, especially for Petio, the TeamPulse team member who drives the quality and testing workflows on the team. By the way, in order for bug triaging to be ultra-accurate, the TeamPulse team found that a developer should always be involved. Rob and Yordan, the dev team leads in Winnipeg and Sofia, were usually heavily involved with triaging the bugs.
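The two-phase flow described above can be sketched as a tiny state model. All names here (`Bug`, `BugTracker`, the fields) are hypothetical – a minimal illustration of the record-then-triage idea, not TeamPulse’s actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bug:
    title: str
    severity: Optional[str] = None  # assigned during triage, e.g. "critical"
    area: Optional[str] = None      # area-tree node, assigned during triage
    triaged: bool = False

class BugTracker:
    def __init__(self):
        self.bugs = []

    def record(self, title):
        """Phase 1: anyone can record a bug; it immediately raises the triage flag."""
        bug = Bug(title)
        self.bugs.append(bug)
        return bug

    def needs_triage(self):
        """The 'flag': how many bugs are still waiting to be assessed."""
        return sum(1 for b in self.bugs if not b.triaged)

    def triage(self, bug, severity, area):
        """Phase 2: assess the bug - reproduce it, then assign severity and area."""
        bug.severity = severity
        bug.area = area
        bug.triaged = True

tracker = BugTracker()
crash = tracker.record("crash on save")
tracker.record("typo in dialog")
tracker.triage(crash, severity="critical", area="Persistence")
```

The team’s old “triage at 10” rule would simply be a threshold check on `needs_triage()` – and, as noted above, in practice the threshold trended toward zero.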

[Image]

How we Write User Stories

I’ve written many times about user stories and personas (here, here and here) – and we try very hard to follow our own best practices on our own projects. In many ways, user stories are truly placeholders for conversation – they MUST NOT BE USED in the traditional sense of specifications. A user story without conversation is just a document that can be misinterpreted. It’s the conversation that drives understanding and clarity – not documentation. That doesn’t mean we don’t document our user stories; however, the documentation is really there either to record our conversations or to add needed levels of detail that we can’t afford to forget.

Here are some examples of real epics and stories in our system. When we started to think about the Feedback Portal we just announced, we started with a high-level story – an epic – that looked like this:

[Image: the Feedback Portal epic]

Note that we’ve tagged this story as an epic, we make use of personas in our system (this helps us stay focused on need and value), and we’ve used Quick Linking to decompose the epic into some possible child stories (I say possible only because they might not all get implemented). The team uses quick linking a lot – there’s no context switching, yet you’re able to rough out entire epic/story structures, complete with full traceability, without leaving the story itself. This proved to be a very natural and VERY fast way of stubbing out these conversations.

At the story level, we have a bit more detail (again, more a documentation of our conversations than a specification):

[Image: story detail]

… and we also have acceptance criteria:

[Image: acceptance criteria]

The story is, of course, naturally linked back to the epic, because the story was sketched out from within the epic using quick linking:

[Image: the story’s link back to its parent epic]

Just a quick note on how we schedule epics and stories. In general, epics are assigned to the release; in fact, the child stories are also initially assigned to the release iteration. During sprint planning, we’ll grab stories (not the epic) from the release iteration and assign them to the sprint – just in time.
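The just-in-time move can be sketched as two buckets and a transfer. Again, the `Iteration` class and the example item names are made up for illustration – they are not TeamPulse’s scheduling API:

```python
class Iteration:
    """A scheduling bucket: either a release iteration or a sprint."""
    def __init__(self, name):
        self.name = name
        self.items = []

def pull_into_sprint(story, release, sprint):
    """Just in time: during sprint planning a child story (not its epic)
    moves from the release iteration into the sprint."""
    release.items.remove(story)
    sprint.items.append(story)

release = Iteration("R3")
sprint = Iteration("Sprint 1")

# Epics and their child stories all start in the release iteration...
release.items.append("Feedback Portal (epic)")
release.items.append("Vote on a suggestion (story)")

# ...then only the story is pulled into the sprint during planning.
pull_into_sprint("Vote on a suggestion (story)", release, sprint)
```

The point the sketch captures is that the epic itself never moves – only its decomposed stories get committed to a sprint, and only when they are actually about to be worked on.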

DogFood

Dogfooding refers to the practice of using your own software as you develop it. The TeamPulse team is far from the first team in the world to do this; however, it’s likely one of the team’s most important practices. Dogfooding happens on a few different levels. First, we need to build the software – the team works during the sprint to build the latest batch of features. At the end of the sprint we have the typical sprint review and retrospective. The very next day, the latest build (remember, the team strives for “done done” every sprint) is released to the TeamPulse dogfood server. This server is only used by the TeamPulse team, so if something goes wrong, we won’t impact any other team at Telerik. That said, Telerik maintains a TeamPulse server for all other teams within the organization; this server is updated once every one to two sprints. In addition, we invite members from teams across Telerik to our sprint review meetings, and we absolutely love it when they ask, “when can we have that?”

I can’t say enough about dogfooding. This is a practice we’ve invested heavily in over the last release, and it has produced some amazing results. First, it helps to solidify the “done done” mentality, since our team doesn’t want to use half-baked features. Second, we’re our own worst critic. As we use the application, we find a host of ways we can improve it. The feedback we get from our internal teams works its way back into our production cycle very quickly, where we actively track the enhancements as new stories. Sure, there have been situations where we’ve updated our dogfood servers and gone, “oh boy, that wasn’t very good” – and when that happens, it’s all hands on deck to get it fixed, and we celebrate the fact that we found the problem early and have a chance to fix it prior to release. For the most part, however, the entire team really looks forward to “dogfood update day.” Dogfooding has helped us keep an even stronger eye on quality. We’re eating our own new features as soon as they come out, so we can quickly determine if something tastes bad and do something about it before we feed it to our customers.

One final note about how we use TeamPulse internally on our dogfood servers: we all have different platforms. A lot of our team runs iOS. A lot of our team runs Windows on Apple hardware. We truly have a mixture of browsers (from Firefox to IE to Chrome) in our development environments, and as a result we really experienced TeamPulse from many different perspectives – not just leaving platform testing up to the QA process.

Pair Programming (um.. not really)

I really wouldn’t classify how the team works as pair programming, but rather as “really collaborative” programming. Developers don’t sit in cubes or behind doors – they continually collaborate with each other, using the highest-bandwidth medium available to them, to tackle each other’s problems. For the most part, the pairing is done during design – not necessarily during coding. This doesn’t mean we couldn’t improve in this area, but for the most part the team works quite well this way – especially given their close physical proximity to one another in each of the team locations.

[Image]

One other practice that the team promotes is “cycling” – where feature development is cycled across the developers of the team. Here’s an example: email notifications started with a developer in Winnipeg. By the end of the release, two other Winnipeg developers and one Bulgarian developer had worked on the notification features you see in the application today. Having all of your eggs in one basket is dangerous, and cycling helps ensure that we spread implementation knowledge across the team and across geographies.

Next up…

In part 3, I’ll talk about our sprint review and retrospective processes, how we handle quality, and more about our Microsoft TFS environment and usage.

Stay tuned…


About the Author

Joel Semeniuk

is a Microsoft Regional Director and a Microsoft ALM MVP. You can follow him on Twitter at @JoelSemeniuk.

 
