Posts Tagged SoftwareDevelopment

Who’s At Fault, Developer or Tester?

A few articles I read recently, and one in particular, motivated this post. When a defect is identified in production, it is not uncommon for somebody to question why it happened (of course, if nobody questions it, that is a bigger issue). It is also not uncommon to hear developers pointing fingers at testers. In turn, a tester may say: it's your code and you messed it up, my dear developer!

No software can be 100% bug-free; we all know that. But the kind of issues we are talking about here are the avoidable ones. The kind you look at and say: ah, as a developer I should have added proper null checks, or I shouldn't have made that assumption about a scenario. As a tester you may think: I should have used a different set of inputs, or I should have tested the boundary conditions better.

Several years ago I was shocked when a developer friend of mine said, 'Why should I test? That's the job of the QA group.' That was his response to my innocuous inquiry: 'You just committed some code to the repository; are you done with unit testing?' Based on my own experience, the situation has improved since those days. There is more awareness of unit testing and of the developer's responsibility for it.

Having said that, there is still a belief that QA is primarily responsible for quality, or that testers can somehow achieve quality by magic. Don't get me wrong: significant benefit can be achieved with testing. However, it is a false expectation that if you don't do your job well in terms of design, implementation, code reviews, etc., quality can still be bolted on downstream by testing (and more testing). On the flip side, it is not fair to a developer if a tester says -- the primary artifact is the code, and it is your problem, not mine.

I'm fortunate enough to work with some wonderful testers. Sometimes I wonder whether it is some kind of "genetic trait" that lets them come up with scenarios I would otherwise have missed. At the same time, I have seen mediocre testers who are laid back, shrugging off any kind of responsibility. To be fair, we have such people in every function of software development.

The following points may make some sense in this context:

  1. Competence: Without competent people, any set of constraints or processes will fail. You may not have superstars on your team, but you need a majority of people who work with little or no supervision and know their job responsibilities well. This may sound too basic to be on this list, but in reality competence is one of the core issues of software development. For testers, it is very important to provide training and enough support to help them keep pace with the new technologies used in development.
  2. Work as a team: A whole-team approach is more desirable. Everyone involved has a single target: better quality. Find issues earlier in the development process and address them effectively. If an issue arises later, it is everyone's responsibility. In their book Agile Testing, Lisa Crispin and Janet Gregory put it aptly:

    When the whole development team takes responsibility for testing and quality, you have a large variety of skill sets and experience levels taking on whatever testing issues might arrive. Test automation isn't a big problem for a group of skilled programmers. When testing is a team priority, and anyone can sign up for testing tasks, the team designs testable code.

  3. Management failure: If there is frequent bickering over who's at fault, one possible reason is that the managers are losing their grip on setting constraints. As Jurgen Appelo pointed out recently:

    The good-to-great companies built a consistent system with clear constraints, but they also gave people freedom and responsibility within the framework of that system. They hired self-disciplined people who didn't need to be managed, and then managed the system, not the people.

  4. Root-cause Analysis: In my opinion this is a very important step, yet it is frequently discarded; in many places it is not even considered. Regardless of who made the mistake or which part of the process has leaks, the focus must be on learning from those failures. Only when you understand what went wrong can you plan how to rectify it. I have written earlier about root cause analysis. Here is another nice article on Five Whys that I stumbled on recently. Do not treat the symptom; treat the underlying cause, which will take care of the symptom.
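To make the Five Whys idea concrete, here is a toy Python sketch; the defect, the `answer_why` lookup table, and every string in it are invented for illustration, not taken from any real analysis:

```python
def five_whys(symptom, answer_why, max_depth=5):
    """Walk the Five Whys chain: keep asking 'why?' until no deeper
    cause is known (or we hit max_depth), and return the whole chain.
    The last element is the best candidate for the root cause."""
    chain = [symptom]
    for _ in range(max_depth):
        deeper = answer_why(chain[-1])
        if deeper is None:
            break
        chain.append(deeper)
    return chain

# Hypothetical analysis of a production defect:
whys = {
    "checkout page crashed": "order total was None",
    "order total was None": "discount service returned no response",
    "discount service returned no response": "timeout was never handled",
}
chain = five_whys("checkout page crashed", whys.get)
```

The last element of the returned chain, the unhandled timeout, is the candidate root cause; fixing that, rather than patching the crash, also takes care of the symptom.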

Please feel free to provide your inputs (in the comments section) as to what worked for your organization, how you perform root cause analysis, and anything else that adds value to this discussion.

photo credit: purpleslog


Joel vs UncleBob: Relevance of TDD and SOLID principles

This is really a clash of two titans, both of whom I greatly admire -- Joel Spolsky (Joel on Software) and Robert Martin (Uncle Bob). I have followed both of them for many years -- their writing, their Twitter feeds, podcasts, and so on. Uncle Bob was not happy about the discussion on Stack Overflow podcast #38, and he responded in kind.

My thoughts:

  • Unit testing is extremely important. Every minute a developer spends writing a test is worth hundreds of minutes he would otherwise spend later in the maintenance phase. Having said that, if the question is whether 100% code coverage is too much, perhaps it is; determine what gives the best bang for your buck and go with that. If the question is whether tests are useful at all: absolutely. Take the same scenario cited in the podcast -- you have a large code base and you need to modify it or add new features. Once you make the change:
    • How do I know I am done? I run the unit tests I wrote for the new functionality and see them pass.
    • More importantly, how do I know whether I broke any existing functionality? I run all the tests I have (assuming decent code coverage) and make sure they all pass.
    • Unit tests, along with acceptance tests, if run often, provide extremely useful information about any deviations from the requirements. Rapid feedback -- isn't that one of the core principles of agile?
  • You don't have to follow TDD to a tee, or any methodology for that matter. But the spirit of unit testing, in my opinion, is extremely important for knowing the health of your code.
  • SOLID principles, as Uncle Bob said, are engineering principles. These principles guide you in building the software that is flexible, maintainable and reusable. This is even more important in agile environments where you develop software in tiny increments.
  • Enough emphasis must be given to code hygiene. Without it, it is just a matter of time before the system gets polluted, and I am not sure how you can keep the customer happy for long. I agree that the focus should be on the customer; don't architect or refactor just for the sake of it. Focusing on the customer means designing a flexible, extensible architecture (using design principles) so that you spend less time in constant-maintenance mode.
  • Another common argument against writing tests is time to market. The goal is to get the software out into the market as quickly as possible; fair enough, but at what cost? As software craftsmen we need to pledge not to ship shit (must read).
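The two questions in that list can be made concrete with Python's built-in unittest module; `apply_discount` and its clamping change are invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical function just modified: percent is now clamped to 0-100.
    percent = max(0, min(100, percent))
    return price * (100 - percent) / 100

class DiscountTests(unittest.TestCase):
    def test_existing_behavior_still_works(self):
        # Pre-existing test: rerunning it tells me I broke nothing.
        self.assertEqual(apply_discount(200, 25), 150)

    def test_new_clamping_behavior(self):
        # New test for the change: passing it tells me I am done.
        self.assertEqual(apply_discount(200, 150), 0)
        self.assertEqual(apply_discount(200, -10), 200)

# Run the whole suite after every change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A green run of the full suite answers both "am I done?" and "did I break anything?"; hooking that same run into a continuous integration build gives the rapid feedback mentioned above.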

I repeat, I have great regard and respect for both individuals, and I hope Joel will elaborate on some of these points in his future podcasts and clear the air.


Eliminate Waste – The Toyota Way

I'm currently reading The Toyota Way and love it so far. A more detailed book review will follow in a separate post, but here I would like to concentrate on waste (with respect to software development) and eliminating it. The book describes the Toyota Production System (TPS) and much more, and a lot of it can easily be applied to software development. The core principle is the same regardless of the industry --
The first question in TPS always is what does the customer want from this process? (Both the internal customer at the next steps in the production line and the final, external customer).
For every step of the process, the focus is very much on what additional value is being provided to meet customer expectations. Here are the wastes described in the book, along with some of my thoughts on how each applies to software development:
Overproduction
Working on projects/features/tasks that add no value to the larger goals of the organization is waste. Scope management is extremely important while the project is in progress. It is not uncommon for wants to be inserted into the project scope that are separate from the needs. These wants add significant effort in design, code, test, etc. Such efforts, unnecessary to begin with, open the door to more design and coding, which means more scope for defects and maintenance.
Waiting (time on hand)
Lack of coordination among different groups causes team members to wait for the next action. We have all seen developers waiting for completed requirements, or testers waiting for the next build so they can start testing. Sometimes even a minor change takes forever to apply because of significant overhead in the change control process. Acknowledging these issues is itself a battle half-won. For example, developers waiting on a central build to learn about issues with their code is counter-productive; they need to know sooner, and setting up a continuous integration system can alleviate that concern.
Unnecessary transport or conveyance
Based on my understanding, this point relates to the overhead involved in performing development activities. For example, it may be company policy to produce UML diagrams upfront. Architects spending significant time on this just for documentation's sake adds little value to the evolving design. The focus, as discussed earlier, should be on value and value alone. Identify the overhead and eliminate it. Find different (lighter) ways to communicate. (Among many options, modeling in color using post-it notes is a useful design technique.)
Over processing or incorrect processing
At the macro level, the company's vision is not translated into tangible, value-added goals by middle management, and individual architects/developers spend time on tangential items that add no value to the customer. Over-processing has some overlap with the overproduction mentioned above. Incorrect processing is more common in a heavyweight process environment with long feedback loops. (On a lighter note, there is a cartoon describing what the customer actually wanted versus what was actually built.)
Excess inventory
Each group working independently, developing its own components for way too long, and trying to integrate only during the final phases of the project. This usually results in rework, or work in the wrong direction. Incorporating a rapid feedback mechanism is the key. Again, a close-knit cross-functional team works much better at minimizing this waste.
Unnecessary Movement
Unnecessary distractions fall into this category. As is often quoted from Peopleware, developers do their best work when they are in the "flow", or "in their zone". Each distraction wastes significant development time. Needless to say, developers should not be pulled into unnecessary meetings; as much as possible they should concentrate on what they do best.
Defects
This is a well-known waste in software development. A defect by itself is not such a bad thing, but the concentration should be on finding it as early in the process as possible. Patching fixes is not the way to build quality; quality must be built into the process. Test automation is an absolute must (my experiences in this regard are documented here). It is equally important to learn from earlier defects; I recently posted my thoughts on root cause analysis.
Unused employee creativity
This is applicable to employees in any industry. It is extremely important to instill a sense of ownership in the team. In the author's own words -- losing time, ideas, skills, improvements, and learning opportunities by not engaging or listening to your employees is not acceptable.


Root Cause Analysis: Found a Bug, Now What?

A bug is identified, now what? Fix it. Of course, that is the obvious answer. This post is about a better approach to addressing a bug, with long-term gains in mind. Fixing the bug is a given; what you do as a follow-up act is what matters. A bug can be reported at any stage of the development process; we will consider one reported after the product has been released to the client.

Assessing the root cause of the issue is the key. The metrics available from this exercise are so valuable that the time spent is one of the best investments an organization can make. Wikipedia defines root cause analysis as:

Root cause analysis (RCA) is a class of problem solving methods aimed at identifying the root causes of problems or events. The practice of RCA is predicated on the belief that problems are best solved by attempting to correct or eliminate root causes, as opposed to merely addressing the immediately obvious symptoms. By directing corrective measures at root causes, it is hoped that the likelihood of problem recurrence will be minimized.

How can we apply this concept to software development? Extend the life-cycle of a bug. Once a bug is fixed, don't just close it in the tracking system; move it to something like a root-cause-analysis stage, and only after completing the analysis close the bug. Okay, so what does the analysis consist of? It depends on which phase of the SDLC the bug is most associated with. More often than not, a bug is associated with more than one phase -- for example, an issue missed during the design/implementation phase and also missed during the testing phase.
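The extended life-cycle can be sketched as a small state machine; the state names below are my own illustration, not any specific tracker's workflow. The point is that the only path to Closed runs through the root-cause-analysis stage:

```python
from enum import Enum

class BugState(Enum):
    OPEN = "open"
    FIXED = "fixed"
    RCA = "root cause analysis"
    CLOSED = "closed"

# Allowed transitions: a fixed bug must pass through RCA before closing.
ALLOWED = {
    BugState.OPEN: {BugState.FIXED},
    BugState.FIXED: {BugState.RCA},
    BugState.RCA: {BugState.CLOSED},
    BugState.CLOSED: set(),
}

def move(bug_state, target):
    """Advance a bug to `target`, refusing transitions that skip RCA."""
    if target not in ALLOWED[bug_state]:
        raise ValueError(f"cannot move {bug_state.value} -> {target.value}")
    return target
```

With this in place, `move(BugState.FIXED, BugState.CLOSED)` raises a ValueError, forcing the analysis step before closure.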

Requirements Phase

In general, a bug or defect is something that does not comply with the agreed-upon requirements. So if an issue is reported but the requirements describe the behavior as something different, user misunderstanding of the requirements could be the cause.

  • User misunderstanding: This is a relatively easy one, but it still requires some action from the teams involved. Leadership may have to verify whether the user training material needs an update based on this experience.

Design/Implementation Phase

Whether developers or their managers like it or not, in many organizations more "blame" is attributed to them when a bug arises. That is debatable; what is more important is to proactively address as many issues as possible in the given time frame. Developers need to make good use of the tools at their disposal: perform static analysis, do code reviews, and maintain good code coverage with unit tests (more on each of these in separate posts). One very important aspect, unfortunately not given the utmost priority, is automated functional/integration tests. Many tools are available, including free open-source options, so tools being too expensive to afford is no longer an excuse. My thoughts on some of my experiences evaluating these tools are published here.

Coming back to RCA, when a defect surfaces the teams must identify which areas can further be strengthened. For example,

  • Code Review: Perhaps better code reviews could have caught this issue long ago. Static analysis can be automated with tools like PMD, FindBugs, etc. If you already have these processes in place, then add a rule in PMD (or a similar tool) related to the issue in question.
  • Unit Test: If you determine that a JUnit (or other xUnit framework) test case could have caught the issue, the next action is to write a unit test and add it to your test suite. The assumption is that you run the unit tests often, in the spirit of Continuous Integration. If you already have a unit test that should have caught the issue but did not, fix it.
  • Functional Tests (automated): With the advent of open-source tools, developers can do themselves a favor by writing an automated test or ten. Once they get into that mode, tests can be written considerably faster and save significant time later in the SDLC. The concept is very similar: if you determine that an automated functional test could have identified the issue sooner, by all means add one.
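As a sketch of the unit-test bullet above (the `tokenize` function and its crash-on-empty-input bug are made up for the example), the follow-up to a fixed defect is a regression test that would have caught it, kept in the suite permanently:

```python
def tokenize(line):
    # Fixed version: the old code called line.split(",") directly,
    # which raised AttributeError when line was None.
    if not line:
        return []
    return [field.strip() for field in line.split(",")]

# Regression test added during root cause analysis: it fails against
# the old code, so the defect cannot silently return.
def test_tokenize_handles_empty_input():
    assert tokenize(None) == []
    assert tokenize("") == []
    assert tokenize("a, b") == ["a", "b"]

test_tokenize_handles_empty_input()
```

Running this test with the rest of the suite on every build is what turns a one-off fix into a lasting improvement.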

Testing Phase

Ideally, the testing group should be the first line of defense when a bug arises. Some introspection from the testers on how they could have identified the issue during the testing phase would certainly help.

  • Regression Suite: Regardless of the processes followed, testing teams usually have a set of test cases and test plans they use to certify that a build is good for release. When a defect is reported, the regression suite is what you examine first, making any necessary updates.
  • Acceptance tests: These are ideal to automate, as explained above; perform actions similar to those described for functional tests. All that is required from leadership is to identify whether this is the area that has to be strengthened for maximum results.

Is it a Reactive Approach?

It appears that this approach is reactive in nature: you perform certain actions only after a bug is reported. Wouldn't you like to find issues sooner rather than later? An analysis of this sort, if done correctly, can reveal patterns, trends, and metrics. For example, while adding a test scenario you may identify a related scenario or another variation of the issue, and address it by adding more tests. You may also notice a particular pattern emerging -- say, more issues being reported in a specific module, or certain kinds of design issues surfacing more often.
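The pattern-spotting described above can start very simply, for example by tallying completed RCA records by module (all the sample data below is invented):

```python
from collections import Counter

# Each completed root-cause analysis notes where the defect traces back to.
rca_records = [
    {"module": "billing", "phase": "design"},
    {"module": "billing", "phase": "testing"},
    {"module": "reports", "phase": "implementation"},
    {"module": "billing", "phase": "implementation"},
]

by_module = Counter(record["module"] for record in rca_records)
hotspot, count = by_module.most_common(1)[0]
# A module dominating the tally is a signal to strengthen its reviews and tests.
```

Even this crude tally turns individual fixes into a trend you can act on proactively.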

As explained in Wikipedia:

RCA, initially, is a reactive method of problem solving. This means that the analysis is done after an event has occurred. By gaining expertise in RCA it becomes a pro-active method. This means that RCA is able to forecast the possibility of an event even before it could occur.

It would be interesting to know how many organizations take this kind of long-term approach in their development process. The emphasis should be on improving the processes rather than on "scapegoating" individuals with short-term band-aid approaches.


Regarding ‘Collective Stupidity’

I stumbled on this post by Bruce Eckel. The question Bruce posed was:

Why does a company full of smart people make stupid decisions? How do we keep it from happening?

My thoughts:

  • It all starts with the hiring process. We cannot treat a software developer as a resource, an entity that can be replaced while expecting the same output and quality as the earlier 'resource'. Innovation is the key to success; the real need is to find an employee not just with the 'qualifications' on paper but one who is eager and enthusiastic about solving problems and thinking beyond the defined set of rules. This is easier said than done, but organizations should keep refining their hiring processes toward this goal.
  • There is a lot of emphasis placed on the number of years of experience. I'm sure many of us have seen professionals with 10+ years of experience whose code and technical decisions do not reflect it (more on this later). Years of experience by itself cannot be a criterion; somebody jokingly said that 10 years of experience can be the same one-year job done 10 times. Seriously, if you keep doing the same stuff day in and day out, you are not learning much in terms of new skills.
  • Technical people should make technical decisions. It would be interesting to see statistics on this, if available anywhere, but I believe there are many mid-level managers who make decisions without understanding their implications.
  • Bruce suggests that maybe having small teams is the answer. I see where he is coming from; on any given day a small team is easier to manage and give direction. However, what I think many companies lack is good mid-level managers who can effectively translate the broader company vision into more granular team goals.
  • A collective vision and common goals are important for a company to succeed. There are many smart people in any average company. However, if different groups within the company push and work towards their own agendas, the bigger picture is lost.
  • If key members of the organization at different levels commit to Uncle Bob's pledge, I'm sure collective stupidity can be minimized.


"I am a professional -- a craftsman!"
"No matter what pressures are on me."
"No matter how I've had to bend the rules."
"No matter what shortcuts I've had to take."
"No matter what the gods, or managers, have done or may do."
"I WILL DO THE BEST WORK I CAN POSSIBLY DO."
"Anything short of my best is shit."
"I _ WILL _ NOT _ SHIP _ SHIT."