SOFTDEV: Transitioning from waterfall methodologies to agile…

So you’ve read the literature and you’ve decided your team needs to move to a more agile and iterative process. Great. Good for you. Now what?

If you’re a small business, the paradigm shift can be relatively easy so long as senior management is on board and the team is willing to try new things. Your first job is to sell this change to everyone internally. Agile development is centered around self-driven teams. After you convince the team to give it a go, I recommend you start with either the sales team or the people who complain the most about your current deliverables (for example, customer support or systems engineering). In a small business, it might be best to go directly to the customer and explain this improvement you are making to get more transparency and customer focus, even if it is an internal process change – your customers will be intrigued. Critical to your shift are iteration and fully completing each task, including testing.

If you’re in a larger business, you likely have a set methodology you are already following, and moving to agile development, Scrum, Kanban or Scrumban is going to be a process of changing program management’s mindset to iterate and deal with uncertainty. You are not going to plan and estimate the entire project up-front; you are going to iterate towards a point where the product is defined. Manufacturing companies struggle with this most because they have fixed systems requiring resource planning beyond just human resources, and if they are already manufacturing several products, they must plan for alpha, beta and general availability to coincide with inventory levels and possibly retooling factories. In this environment, where your software team is an essential part of the creation (but not the entire product), you can still move to a Scrum model, but it has to adapt within the larger costs of the project.

Moreover, software projects need significant resources assigned continuously, unlike most hardware development projects. A manufacturing company finishes the development of a new product and moves on to another product. Software has continuous develop/release cycles and requires a dedicated core team, with supplemental resources during a major product revision.

Square peg, round hole?

As a software manager at a semiconductor company, I faced this problem of fitting software-development Scrum into a program-management waterfall methodology that had evolved to limit mask-set creation (effectively the “tooling” for a new chip in the same manufacturing technology), because those mask sets cost millions to create and sample, and must be handed to a fab that adheres to complex scheduling and inventory processes. At most the company wanted an Alpha and an R1 mask set, and ideally the R1 mask set directly. The same was true for the evaluation boards on which customers would evaluate the new processor. The chips, the software and the evaluation boards all needed to be created before certifications and general availability. It is easy to see why a waterfall methodology is used in this environment. The company is working towards better pre-silicon testing, and with better simulators and emulators it may be poised for an agile shift, but I didn’t want to wait for that to become reality.

So how do you fit a Scrum software development framework into a program-management-driven organization insisting on waterfall methodology? Well, for runtime software, we prioritized the drivers to be written or modified based on the hardware IP block changes within the silicon, and built our scrum board backlog using those priorities. We automated the duplication of drivers that did not require significant change, and created a process that could scrape hardware designs directly for things like registers, pin signals and clocking data, so we could concentrate each sprint on iterating with the updates to the hardware designs. By sprinting this way, we had time to create the primary elements for simulator/emulator testing and ensure that alpha software was available by the time alpha hardware/board samples arrived. In later sprints, we added the examples and demos needed to show off the silicon’s capabilities. You could say we were doing a pseudo-agile process, but I will tell you that iterating in this manner allowed us to reduce driver costs and support more silicon products being developed in parallel, while achieving more alignment with the hardware team because they were engaged and gave focus to each sprint.
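To make the scraping idea concrete, here is a minimal sketch in Python. The export format, the `UART0` block name and the field names are all hypothetical – real hardware design exports (IP-XACT XML, for instance) are far richer – but the shape of the automation is the same: parse the design data once, then generate the repetitive driver boilerplate from it.

```python
import csv
import io

# Hypothetical register export: name, byte offset (hex), reset value.
SAMPLE_EXPORT = """\
name,offset,reset
CTRL,0x00,0x0001
STATUS,0x04,0x0000
CLKDIV,0x08,0x0010
"""

def scrape_registers(text):
    """Parse a register export into (name, offset, reset) tuples."""
    rows = csv.DictReader(io.StringIO(text))
    return [(r["name"], int(r["offset"], 16), int(r["reset"], 16)) for r in rows]

def emit_header(regs, block="UART0"):
    """Emit C-style #define lines a driver could include."""
    lines = [f"#define {block}_{name}_OFFSET 0x{off:02X}  /* reset=0x{rst:04X} */"
             for name, off, rst in regs]
    return "\n".join(lines)

regs = scrape_registers(SAMPLE_EXPORT)
print(emit_header(regs))
```

Regenerating this output each sprint, straight from the latest hardware drop, is what freed the team to spend its time on the drivers that genuinely changed.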

Scrum had even more impact when we decided to create new development and configuration tools to support the silicon. We divided a relatively large tools team into multiple scrum teams, each with a mandate to build tools that were cloud accessible, with desktop versions as well and data interchange between the online and desktop tools. These new scrum teams were smaller but iterated quickly, and they included virtual sprint demos. These tools teams were producing a strictly software product, not necessarily aligned to hardware schedules, and could use all the mechanisms of Scrum and even some of the capabilities of Kanban. But we still needed to fit into the waterfall methodology of the program management team.

To fit the phase gates of the typical waterfall system (concept, feasibility, planning, development, test, release, post-release), we mapped the concept phase onto our storyboarding and initial-backlog timeframe, and we went through the approvals from Concept to move to the Feasibility phase. In the past, when entering Feasibility, software teams would review a completed Product Requirements Document (PRD) and do the planning and scheduling up-front; detailed designs and estimates were then locked into a plan of record and committed to a schedule typically 9-18 months out, with a 3-6 month Alpha/Beta cycle. Working with our software program manager, we developed a plan to run a series of sprints BEFORE we reached the phase gate from planning to development. This met some resistance from management at first, but we argued that by the time we’d iterated through several sprints we’d know what we wanted, and we could try several interfaces and mechanisms with transparency through demos after each sprint.

One of the most difficult changes for the organization was that we built our “requirements” along with each backlog item. In essence, the collection of backlog items that made it into sprints became the detailed plan and revised the requirements, and that iterative development of requirements was very difficult for our program management team. We built user personas and stories up-front during the concept phase as we were building our backlog items, and we grouped backlog items into epics that in concept represented our requirements, but even these evolved as we did our initial sprints and involved more people in our sprint demos.

Be transparent

My weekly report invited EVERYONE in the company to join the sprint demos, with advertised date/time information. Naturally, it would have been a problem if everyone had attended, but the message was transparency, and it worked. There were sometimes 30-50 people attending these Virtual Scrum Demos, including management, sales, field application engineers (who gave customer perspective), applications engineers (who support customers) and even hardware engineers, along with the entire scrum team, the scrum master and the product owner. We often had three continents represented on these calls. These Virtual Scrum Demos were hugely popular because, for the first time in the company, attendees were involved in the development process. By the time we had iterated 6-8 times, we had a very good prototype of what we wanted to release and had clarified the backlog items that needed to be completed. We then prioritized the remaining backlog items into the remaining sprints, mapped those sprints to silicon and development-board deliverables for release testing, and went to the planning/development phase gate.
Not surprisingly, as we finished the remaining sprints, attendance at the sprint demos tapered off. We could tell it was time to release by the attendance roll call and by the requests from our sales field application engineers: “Can I give it to customers? Can I use it? When will it support xyz silicon?” It was very easy to know we would have a successful software release.

One area we were still addressing when I left the organization was testing. While developers in the scrum team would test their sprint work and do integration testing before each sprint ended, we did not have release testing integrated. In larger companies used to waterfall methodologies, test engineering is a separate role. In our case, the test engineers for software were located half-way around the world from the developers. We were in the process of addressing that by adding them to the scrum team locally. They would join the daily stand-ups and work on test plans and automated testing in parallel with the development team. Our biggest challenge in evolving to Scrum was the test engineers’ frustration over us completely revamping the UI and functionality during early sprints. I think this is especially true of new product development and less so for revisions to an existing product (since backwards compatibility is always an objective). Having local, integrated test engineers in the process should make a world of difference.

To conclude, we adapted Scrum software development to both the runtime software and the software-tools creation process, and we fit this agile development into a program-management-driven waterfall model – and it was very successful. Was it pure Scrum? No. Did it adhere to the principles of Scrum? Yes. More importantly to me, it gave the team transparency within the company, which made the software much better for our customers.


itsjustsoftware is my blog at blog.hemstreetsoftware.com. Please comment on this or any post; I would love your input. Also, if you haven’t already, please subscribe to this blog on the left panel. If you are going through software development changes on your team, let me help you. Use the Contact action on hemstreetsoftware.com to get in touch.

SOFTDEV: Measure what you develop; make it part of your process

Every software methodology tries to incorporate estimates, actuals and variance reports. By the time you get to Scrum or other very agile methodologies, you should be measuring your team’s velocity. You might be looking at the backlog count and estimates, the tasks per sprint, the time for a sprint and the number of sprints … but you are measuring.

In my career, one of the biggest lessons has been to measure everything. If you don’t measure your project, (a) you can only guess at completion dates; (b) you will not improve; (c) you will have issues with management and/or customers. I’m not talking about measuring the number of lines of code or anything that crude – I’m talking about measuring what’s important to your organization.

I have heard it argued that the difference between a software development team and a software engineering team is that engineers track and measure their results scientifically. You might argue that the creative side of software design is more art than science, but software development can be measured, and you (and your team) will be better off for doing that measurement and for reviewing the data to improve the next time through a software cycle. Measurement is good. Even art is comparatively measured, so be creative and choose the metrics that fit your team, but take the time to measure.

For improving the development of requirements, I suggest you measure the number of tasks per requirement (as defined by an epic or story). During your sprint/release retrospective, review the metrics and decide whether requirements are clear enough or need to be broken into separate pieces. This will improve the product owner’s ability to define requirements and will clarify the outstanding backlog items related to each requirement. You can also review the outstanding requirements during the sprint retrospective, or at the end of the sprint demo meeting, to determine whether they are still needed and their relative order of importance. Reducing requirements, and measuring that over the course of the sprints to release, gives you a good measure of your focus and of how well you understand your audience.
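As a sketch of the tasks-per-requirement metric, here are a few lines of Python. The epic IDs, task names and the split threshold are all invented for illustration; the point is simply to count tasks per epic and flag outliers for the retrospective:

```python
from collections import Counter

# Hypothetical backlog: each task is tagged with the epic (requirement) it serves.
backlog = [
    ("EPIC-12", "add CSV export"), ("EPIC-12", "export unit tests"),
    ("EPIC-12", "export docs"), ("EPIC-12", "streaming export"),
    ("EPIC-12", "export progress UI"), ("EPIC-12", "export error handling"),
    ("EPIC-07", "login button"), ("EPIC-07", "session timeout"),
]

SPLIT_THRESHOLD = 5  # arbitrary cutoff for this sketch

tasks_per_epic = Counter(epic for epic, _ in backlog)
for epic, count in tasks_per_epic.items():
    flag = "consider splitting" if count > SPLIT_THRESHOLD else "ok"
    print(f"{epic}: {count} tasks -> {flag}")
```

An epic accumulating far more tasks than its peers is usually a requirement that was never really one requirement.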

If you start with 20 requirements for a product and only deliver on 10 – or, more typically, start with 10 and grow to 20 – perhaps your method for collecting those requirements should change. One way to do that is with better Voice of the Customer requirements gathering, or by appointing several of the technical sales team to represent your primary audience in coordination with your product owner, so you avoid feature creep. Beware, however, that during development, insights gleaned from a sprint demo’s results can lead to innovative ideas – you don’t want your process to stymie innovation – and revising a requirement during iteration is one of the best characteristics of Scrum!

I like to measure the backlog items outstanding vs the sprint items accomplished. Scrum is all about velocity. Completing more tasks of the same difficulty as compared to earlier sprints is a great measure of your velocity. Of course, this can have a subjective component unless time estimates vs actuals on tasks are tracked and measured.
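A minimal sketch of the velocity measurement, with invented sprint names and story-point values:

```python
# Story points completed per task, grouped by sprint (data made up for illustration).
sprints = {
    "sprint-1": [3, 5, 2],
    "sprint-2": [5, 3, 3, 2],
    "sprint-3": [8, 5, 3],
}

def velocity(points_per_task):
    """Velocity for one sprint: total points completed."""
    return sum(points_per_task)

for name, tasks in sprints.items():
    print(f"{name}: {velocity(tasks)} points over {len(tasks)} tasks")

avg = sum(velocity(t) for t in sprints.values()) / len(sprints)
print(f"average velocity: {avg:.1f} points/sprint")
```

The trend across sprints matters more than any single number; a rising velocity at constant task difficulty is the signal you want.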

For improving sprint planning, your retrospectives should always review task estimates vs actuals. As you plan a sprint, you should add in-context estimates (estimates based on what has been done before and what dependencies exist for the task in this sprint). Another related metric is the delta between the initial estimate on the backlog item and the in-context estimate made during sprint planning. The accuracy of those estimates is essential to improving sprint plans, and coaching scrum team members to estimate well improves their skills as software engineers.
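Both deltas are easy to compute once you record three numbers per task. A sketch with hypothetical tasks and hours:

```python
# Per task: (name, backlog estimate, in-context sprint-planning estimate, actual),
# all in hours. Names and numbers are invented for illustration.
tasks = [
    ("parse-config",  8,  6,  7),
    ("driver-port",  16, 20, 26),
    ("demo-screen",   4,  4,  3),
]

for name, backlog_est, in_ctx_est, actual in tasks:
    replan_delta = in_ctx_est - backlog_est   # how much sprint planning revised
    accuracy = actual / in_ctx_est            # > 1.0 means under-estimated
    print(f"{name}: replanned {replan_delta:+d}h, actual/estimate = {accuracy:.2f}")
```

Reviewing the ratio in retrospectives is what closes the loop: a team that consistently sees 1.3 on a class of task learns to pad that class, not everything.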

There is also a whole category of testing metrics that are necessary – test failures, escape rates, blocking issues, static analysis results, code reviews, mapping code areas to test failures (to find weak areas in the code), repeat failures (the number of develop/test cycles for a task), build failures, etc. All of these are essential to good quality code and can be an objective way to look at the effectiveness of your team.
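For example, an escape rate is just the fraction of defects that slipped past your testing and were found after release (the counts below are invented):

```python
# Defect counts for one release cycle (hypothetical numbers).
found_in_test = 42
found_after_release = 6

# Escape rate: share of all known defects that "escaped" to the field.
escape_rate = found_after_release / (found_in_test + found_after_release)
print(f"escape rate: {escape_rate:.1%}")  # prints "escape rate: 12.5%"
```

Tracked release over release, a falling escape rate is objective evidence that your test investment is paying off.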

Warning: metrics should always be used to improve your team and product releases, not as a weapon in performance discussions. If you start evaluating your team’s performance, salary or bonus on the process metrics, you will almost always see your team “play” the metrics, which will have a terrible effect on your team and your product release. Rather, use the metrics above to establish a path for each member’s success, and remember that your primary goal should be to produce high-quality, innovative products that thrill customers. If you know your team well enough, you will assign the right backlog items to the right people and, over time, raise the performance of every member.

While this post does not include a comprehensive list of metrics for software development teams, I have listed the ones I’ve found to be most effective.

I’d love to hear from you on your metrics used to build great products – please feel free to comment on this post and don’t forget to subscribe.

 

SOFTDEV: Top 10 software practices

SOFTDEV: Software Development Methodologies

One of the categories in this blog is Software Development Methodologies. From a single-person project to hundreds of developers, there are some key similarities and some significant differences. Also, the evolution of the art of software development and the science of software engineering has changed the processes we use. Finally, the tools we work with have evolved and improved significantly.

I’ve been programming since 1974 (see BIO – Programming in School). Sure, my initial projects were in high school and structured around basic programming fundamentals, but even in my teacher’s classroom there was structure and a set of expectations. My teacher insisted on commented code or you got a zero grade, and she encouraged us to write the comments before the code … and remember, each line of code was a keypunch card back then.
We’ve come a long way since the 1970’s, even if you are a project team of one person, but we have not outlived the usefulness of good software practices:

  1. Understand your user’s requirements;
  2. Design your code before you write your code;
  3. Comment your code well;
  4. Make your code readable to others.

I’ll add to those basic principles some practices that bring us into this century and more into a team development perspective:

  1. Build tests as you build your software;
  2. Use a version control system (CVS, Subversion, git, etc.);
  3. Define a build system that ties build versions to the version control system;
  4. Iterate fast, and demonstrate working, tested code, even if it’s not the entire feature or product;
  5. Always make your project measurable;
  6. Communicate your completion estimates weekly to management, to customers or the intended users.
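As an illustration of practice 3, here is one hedged way to stamp builds with the version control revision, sketched in Python and assuming git; the base version string and fallback are arbitrary choices for this example:

```python
import subprocess

def build_version(base="1.4.0"):
    """Stamp a build with the VCS revision so any binary traces back to source.

    Falls back gracefully when git is unavailable or we are not in a checkout.
    """
    try:
        rev = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"],
            stderr=subprocess.DEVNULL, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        rev = "unknown"
    return f"{base}+{rev}"

print(build_version())
```

The exact mechanism matters less than the invariant: every build artifact carries an identifier that maps to exactly one state of the repository.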

There is a lot written about software methodologies – Waterfall, Agile, Scrum and Kanban among others – and they encompass the majority of the principles listed above, along with the concept of self-organized teams that get things done on time, on budget, with happy customers and employees.

I want to explore some of the more challenging aspects of software development methodologies from my own experience and talk about the challenge of transitioning from one process to another for continuous improvement.

BIO: Programming in School

I was a very fortunate student, attending a great high school with the privilege of access to an IBM 1130 computing system on the second floor.

IBM 1130 photo
Photo by Martin Skøtt
Once you showed the teacher you were competent, she gave you after-hours access to the computer room, and a bunch of us nerds hung out and learned to program – first in assembler, then in Fortran. If you were really good, you got to switch the language pack in a large DASD (which was later replaced by floppy disks). Only the teacher bootstrapped the computer with the paper tape; as I recall, she was worried we’d tear it. I learned to type on a keypunch in that classroom, and to program. I made a personal swimming-log program (I was a competitive swimmer), and I made a cribbage game I could play against the computer using the switches on the console. Okay, they were rudimentary, but I was a programmer before push-button telephones were common.

Even doing a project on your own, you establish programming styles and expectations so others can read and understand your code.

Even when I got to the University of Toronto, most of my class projects were done on keypunch cards. Only in my upperclassman years did I use a PDP-11 with VT100 terminals, and 3270 green-screen terminals with mainframes. In 1980 I got my hands on an Apple II, and from that summer on I’ve had personal computers in my dorm, home and office. I think I had the first PC in our dorm in 1980, and I did all my senior papers on that Apple II (yes, with the shift-key modification).

VT100 Terminal photo
Photo by vegms

IBM 3270 photo
Photo by vaxomatic

If you’ve ever read Outliers by Malcolm Gladwell, you know about the 10,000-hour rule. Despite newer studies to the contrary, I still believe that rule to be true. I started college with at least 500 hours more than every other computer science student. By the time I graduated, with two summer internships, I think I must have been close to 8,000 hours. I was promoted to project lead on a 50-person project just two years out of school, and by then I easily had 10,000 hours. I won’t go as far as to say I’m a guru in anything, but I am certainly accomplished, in part because some educator negotiated to get a computer put into our high school and a high-school freshman nerd found his groove.

Hello world!

I’m creating this site to provide a forum for discussing software subjects from my perspective. I have four main categories or subjects:

  1. Product Management
  2. Internet of Things
  3. Software Development
  4. Interests

Check out my About page and the post category “My Background” for more about my philosophy and thinking…

 

Cloud, Mobile, PC, or embedded Projects – it's just software!