Monday, 25 March 2013

6 Testing Mistakes to Avoid Like the Plague



 “It’s okay to make mistakes,” they’ll tell you. “As long as you learn from your mistakes.” Not the worst advice you could get, but a wiser person would tell you that it’s far better to learn from someone else’s mistakes. Wouldn’t you agree?
If so, you’re in luck, because organizations all over the world continue to make the same software testing mistakes over and over again – all so that you don’t have to. While there are far too many to list here, we wanted to share a few of the more common gaffes. Here they are, in no particular order:
Mistake #1: Testing too late
How do you know if you’ve waited too long to test your apps? One sign would be if users are publicly complaining via social media about defects they found on their own. When it comes to quality, the sooner you can involve the test team, the better off you are going to be. As you’ll see throughout this post, many of the greatest testing blunders occur for this very reason. Teams that see testing as the “last line of defense” not only misunderstand the role of QA, but they put themselves and their business in dangerous territory. Brian Marick explains the advantages of testing early:
Tests designed before coding begins can improve quality. They inform the developer of the kinds of tests that will be run, including the special cases that will be checked. The developer can use that information while thinking about the design, during design inspections, and in his own developer testing. Early test design can do more than prevent coding bugs.
Mistake #2: Testing with amateurs
In an ideal world, every testing project would have two types of experts: a testing expert and a domain expert. The reasons for having the testing expert on hand should be obvious (to readers of this blog anyway). The great testers understand, regardless of the product, where flaws and vulnerabilities are likely to be found. They are, as James Bach has said, professional skeptics. Domain experts on the other hand might not know much about testing per se, but they will immediately pick up on inconsistencies and shortcomings in the product from a feature/functionality perspective. Unfortunately, many companies are making the mistake – the BIG mistake – of having neither a testing expert nor a domain expert on their team. Don’t be one of them.
Mistake #3: Testing without a scope
“Just look for bugs and tell me what you find.” Famous last words! While this type of “improv” testing can often yield quality results, it should never be standard operating procedure. Unfortunately, inside many companies it is. A note to testers: if this is the direction you’re asked to go in, here is some good advice from Jon Bach:
There are *always* requirements, but they aren’t always written. They are both implicit and explicit.  Some are hidden, some may be obvious.  Some take time to emerge, others hit you right away.  You find them and they find you. You’ll know you found one when you sense a user has an unmet need.
Techniques for testing without requirements:
  • Ask stupid questions
  • Read the FAQ
  • Look at competing products
  • Read out-of-date documents
  • Invite a developer to the whiteboard, then diagram your assumptions of the architecture or some states
Testing is an active search, not just for bugs, but for requirements.  Approach your requirements-gathering the same way you approach testing.  You have to go find the bugs – they often won’t reveal themselves to you on their own.
Mistake #4: Testing “one and done”
A lot of time and energy goes into a software launch. So it’s only natural to relax a bit when the application is released to users. It may seem as though all is said and done from a testing perspective, but it ain’t. As we’ve seen time and again, many of the more serious bugs and issues crop up well after a product launches, often through no fault of the engineering team. Maybe the application isn’t syncing with a recently updated third-party app. Maybe a browser setting changed. The list is endless. Whatever the circumstance, companies would do well to avoid the mistake of failing to run regression tests.
Mistake #5: Testing in a controlled environment
These days, users are likely to consume your application under all sorts of conditions: an endless combination of devices, operating systems, browsers, carriers and locations. Why then do so many companies only test their applications within the confines of a highly controlled test lab? Why are they not moving a portion of their testing into the real world? There was a time when companies could get away with 100% on-site testing – and many did. Unfortunately, many continue this practice (old habits die hard, after all), resulting in production bugs that could easily have been found by testers in the wild.
Mistake #6: Testing too fast/slow
The Agile testers can’t seem to catch their breath, while the waterfall testers look like they’re about to die of boredom. Our last testing mistake has to do with the pace of testing – specifically, why many organizations can’t seem to keep it consistent. Much of the problem (again) boils down to the misperception of QA. When testing is seen as the last step in a long process (usually with a looming deadline) it’s natural for testers to feel rushed. When the role of QA is properly defined and agreed upon, testing will never be rushed and it will never slow to a crawl. Instead, it will become an integral part of the organization, not merely a department being waited on.


Wednesday, 29 February 2012

Ad-hoc Testing: An Important Process of Software Testing


Most software professionals dislike the term “ad-hoc testing” because it implies a lack of testing process or purpose, but it plays a very important role in the software testing life cycle. Ad-hoc testing falls under black-box testing and is the least formal method of testing. During ad-hoc testing, testers do not execute predefined test cases and are not bound to the functionality assigned to them; they rely instead on intuition and experience. The tester has to find bugs without formal planning or documentation, guided solely by his or her intuition. Carried out by a skilled tester, it can often find problems that are not caught in the regular testing cycle. Sometimes, when testing occurs very late in the development cycle, this will be the only kind of testing that can be performed.

When I was a tester, I was told to focus on the functionality assigned to me and to make sure the customer would not find any issues while using it (a role known as the “functionality owner”). But I wanted to do something different, because I had grown bored performing the same activities for every product release cycle. So, just after two rounds of System Integration Testing, I started testing the entire product instead of only the functionality assigned to me, using my knowledge of the product and no test cases. The result was excellent: I found good issues that had been in the product from the very beginning. When we analysed those issues, we found that the relevant information was missing from the requirements document.

Based on my findings, I approached my Test Manager, along with my test lead, and tried to convince him that if the entire testing team performed one round of ad-hoc testing just after finishing the System Integration Testing cycle, there was a chance of catching more issues that we might otherwise miss during the regular testing cycle. It was very hard to convince him, as the Testing Approach / Strategy document mentioned nothing about an ad-hoc testing cycle. We had already completed two rounds of System Integration Testing and were waiting for the new release / build to perform regression testing, followed by acceptance testing by the Product Management team. The Test Manager did want a round of ad-hoc testing based on the issues I had found, so he took those issues to the Steering Committee meeting, tried to convince the stakeholders, and requested one week for ad-hoc testing. We got approval from the Steering Committee, and the entire testing team was asked to perform one round of ad-hoc testing based only on our testing skills, knowledge of the product and intuition. After one week of ad-hoc testing, the result was:
  1. Showstopper and high-priority defects amounting to 20% of the total found in SIT Cycles 1 and 2.
  2. Medium-priority defects amounting to 10% of the total found in SIT Cycles 1 and 2.

One of the best uses of ad-hoc testing is for discovery. Reading the requirements, specifications or use cases rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the “look and feel” of a program. Ad-hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal.

Finding new tests in this way can also be a sign that you should perform root cause analysis. Ask yourself or your test team, “What other tests of this class should we be running?” Defects found while doing ad-hoc testing are often examples of entire classes of forgotten test cases. Another use for ad-hoc testing is to determine the priorities for your other testing activities.

Ad-hoc testing can also be used effectively to increase your code coverage. Adding new tests to formal test designs often requires a lot of effort in producing the designs, implementing the tests, and finally determining the improved coverage. A more streamlined approach uses iterative ad-hoc tests to determine quickly whether you are adding to your coverage. If a session adds the coverage you’re seeking, you’ll probably want to add those scenarios to your existing test cases.

A good ad-hoc tester (ad-hoc testing is performed most effectively and efficiently by experienced testers) needs to understand the design goals and requirements for these low-level functions. What choices did the development team make, and what were the weaknesses of those choices? As testers, we are less concerned with the choices that were made correctly during development. Ad-hoc testing can be done as pure black-box testing, but that means checking for all the major design patterns that might have been used; knowing which choices were actually made allows you to narrow the testing down to far fewer cases.

An important element of any IT strategy is to ensure the deployment of defect-free systems. Among other benefits, ad-hoc testing helps significantly minimize the total cost of ownership (TCO) of applications. However, organizations quickly discover that despite their best intentions and efforts, their QA team hits a ceiling from a defect-leakage standpoint. It seems as if an invisible hurdle is preventing the QA team from achieving its true potential: deploying defect-free systems.

Nowadays, ad-hoc testing finds a place throughout the software testing cycle. Early in the project, it gives breadth to testers’ understanding of the application and helps them write more effective, higher-quality test cases, thus aiding in discovery. In the middle of a project, the data obtained helps set priorities and release schedules for the software program or application. As the project nears the ship date, or acceptance testing by business users or the product management team, just after all of the testing cycles in your Test Approach / Strategy document are complete, ad-hoc testing can be used to examine the quality of the application.

A primary goal of ad-hoc testing is to uncover new defects in the product or its specification. In the hands of a skilled tester, it can be highly effective at discovering such problems. Regression tests and ad-hoc tests complement each other in remarkable ways. When you find a defect with a regression test, you’ll often need to use ad-hoc methods to analyze and isolate the defect, and to narrow down the steps to reproduce it. You may also want to explore “around” the defect to determine whether there are related problems. Conversely, when you find a defect with an ad-hoc test, you’ll probably document it in your defect tracking system, thereby turning it into a regression test for verification after the defect is fixed.
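For example, here is a minimal JUnit sketch (the leap-year method and the defect itself are hypothetical) showing how a bug found during an ad-hoc session can be captured as a permanent regression test:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class LeapYearRegressionTest {

        // Minimal version of the (hypothetical) production method that,
        // before the fix, mishandled century years such as 2000.
        static boolean isLeapYear(int year) {
            return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        }

        // Captured from an ad-hoc session: the tester noticed that the year
        // 2000 was reported as a non-leap year. The test documents the defect
        // and re-verifies the fix on every future regression run.
        @Test
        public void year2000IsALeapYear() {
            assertTrue(isLeapYear(2000));
        }
    }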

The major benefits of ad-hoc testing are:
  1. No planning or documentation is needed. It can be applied successfully at the beginning of the project, in the middle, or just before the release of the product.
  2. Important bugs are found, which helps you discover missing test cases; ad-hoc testing can find holes in your original test strategy that can be plugged later.
  3. It gives a better understanding of how a software application or feature behaves than can be gained from reviewing specification documents or use cases alone.
  4. It gives a better understanding of testing priorities; for example, if an area holds up well under ad-hoc testing, you can decide that formal testing of that area can be deferred to a later stage.
  5. It is easy to start and implement.
  6. It saves a lot of precious time. Sometimes a program’s design is changed only after valuable time has been spent preparing and explaining tests; with ad-hoc testing, that time is not spent on planning and documents.
      The success of ad-hoc testing depends upon the capability of the tester who carries out the test. The tester has to find bugs without formal planning or documentation, relying solely on his or her intuition.


In my testing career, wherever I have worked, I have always given importance to ad-hoc testing. When a new version of an existing product is being tested, I put a round of ad-hoc testing just after the System Integration Testing cycle completes. For a brand-new product (the first version), I have also used it before the regular testing cycle, and again after the regular testing cycle finishes, before the product goes to release or to acceptance testing by the product management team. In my Test Strategy / Approach document I always list two types of testing, formal and less formal, for example:



Formal Test                                     Less Formal Test
Testing Type                        Duration    Testing Type      Duration
Build Acceptance Test               1 day
System Integration Testing Cycle 1  2 weeks*
Initial Regression Testing          2 days*
System Integration Testing Cycle 2  2 weeks*    Ad-hoc Testing    1 week
Final Regression Testing            1 week      Bug Hunts         1 day
Performance Testing                 8 weeks*    Alpha testing     1 week
Acceptance Testing by Users         2 weeks*
Regression Testing                  1 week
Smoke / Release Testing             1 day
* Duration depends upon the size of the project and may vary; typical durations are shown here.

      
      Last but not least, if you are planning to establish the TMMi framework within your organization, you should implement a round of ad-hoc testing as part of PA (process area) 2.1 of Level 2, “Test Policy and Strategy”, and PA 2.2, “Test Planning”. I am sure that if you implement ad-hoc testing at Level 2, you will find it much easier to progress through Level 3 (“Defined”) to Level 5 (“Optimization”).
      
      The same article has been published at: 
      http://qa.siliconindia.com/qa-expert/Adhoc-Testing-An-important-Process-of-Software-Testing-eid-134.html


Friday, 17 February 2012

Defect Driven Development Continued...


Defect Driven Development (DDD) sounds similar to TDD (Test Driven Development), which is a core Agile practice, but in fact DDD does not fall under the Agile umbrella. From my experience, I would say that the main principle of DDD falls under Level 5 of TMMi, namely “Defect Prevention” and “Quality Control”. DDD actually comes into the picture when the entire functionality in the requirements has been implemented and has passed two rounds of SIT (System Integration Testing), and you are in the process of creating the Final Candidate (FC) build / release, followed by inviting business users for their acceptance testing. About 3-4 weeks before testing an FC build / release, you “triage” the open defects with the help of the Business, Development and Test Managers and prioritise the defects that need to be fixed before the FC build / release is prepared.

Even if TDD eliminates 75% of the defects – and how likely is that – the situation is qualitatively the same. You still need to expose the product to testers and field users – “Defect-Driven Development.”

Defect Driven Development (DDD) is also a software-development methodology based on using a project’s defect tracking system to track not just defects, but also software features. The central concept in DDD is that, from a user’s perspective, the biggest defect in a piece of software is a necessary feature that is missing. Whether a feature works as it should is critically important, but it is actually a secondary concern in the project lifecycle.
The primary advantage of DDD is that it addresses the need for both project governance and transparency, which are generally quite difficult to achieve without rigid project lifecycle methodologies and considerable overhead, usually at the cost of productivity. Because DDD uses a project's ubiquitous defect tracking system that developers already use, but in a novel way, it is a lightweight methodology that can be easily adopted without the need to introduce significant new processes or tools.
I have personally used this methodology at my previous company, Cincom, and found that it really helps speed up the final release cycle while maintaining quality and meeting 100% of the customer’s expectations. I formulated this approach because the engineering teams worked under very demanding circumstances, including wide geographic distribution (UK, US, France) and native-language differences, and we needed better ways to track project progress and feature implementation before the final release cycle. While writing the use cases, my product management team created a virtual persona, “Mrs. Smith” (who did the work manually before she started working with a software product), and the same persona was used during defect reviews: how will Mrs. Smith use this functionality, and what will be in her mind while using it?
We should think of defects as good things that help us achieve what users really want and need.

Assumptions and Prerequisites

There are a few assumptions and prerequisites needed to make DDD work. On the whole, the list is quite short, and covered by most defect tracking systems in widespread use:

  • Project uses a defect-tracking system.
  • Developers are familiar with and committed to using the defect-tracking system.
  • The defect-tracking system allows a defect to be assigned to an individual, who can then track his or her assigned defects easily.
  • The defect-tracking system allows defects to be designated as dependent on or blocking another defect, and can automatically help users traverse dependency graphs of defects.
  • Proactive notification (for example, by email) of changes to defect status or the addition of new defects.

How It Works


  1. Set up a Defect Triage Team. The members should be the Development Manager, Test Manager and Product Manager / Business Development Manager, plus BAs (if any).
  2. Create a list of all open defects, with priority and defect ageing, by running a query (see the sketch after this list).
  3. Circulate the defect list to the Defect Triage Team.
  4. Set up a meeting. If the entire team is at one location, book a meeting room with a system connected to the internet and the defect tracking tool installed; otherwise, set up a conference call with the Defect Triage Team.
  5. Start by reviewing the defects whose priority is “Critical”.
  6. Discuss every defect from the customer’s perspective, analyse what would happen if Mrs. Smith came across the issue, and then decide:
     a. Whether the defect is a real defect or an enhancement.
     b. If it is a defect, whether the priority is correct and the customer will really suffer, or whether it should be given a lower priority and deferred to the next release.
     c. If deferred, update the defect with the status “Deferred to next release”.
     d. If the defect really is critical, assign it to the Development Manager. Once the meeting is over, the Development Manager will discuss those defects with the development engineers, identify the ETA, assign each defect to a development engineer and record the ETA.
     e. If it is an enhancement, assign the defect to the Product Manager / Business Development Manager for further action. If the Product Manager / Business Development Manager wants the enhancement implemented, create a document for it that includes estimates from the Development Manager and Test Manager. If the enhancement can be implemented without impacting the release date, start actioning it and put the information into the Steering Committee pack / deck for detailed discussion at the next Steering Committee meeting.
        i. If, after estimation, you find that it does impact the release date, take it to the CCB (Change Control Board) for further action, followed by discussion at the Steering Committee meeting. If the stakeholders agree to the changed release date, proceed; otherwise, defer it to the next release.
     f. Repeat step 6 for the High, Medium and Low priority defects.
     g. By the end of the Defect Triage meeting, you will see a major change in the status of the open defects.
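By way of illustration for step 2, here is a minimal, self-contained Java sketch (the Defect class and the sample data are hypothetical; a real tracker would supply these rows through its own query language or API) that produces the triage list: open defects sorted by priority, oldest first within each priority:

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class TriageList {

        // Hypothetical in-memory record standing in for a row in the tracker.
        static class Defect {
            final int id;
            final String priority;   // "Critical", "High", "Medium", "Low"
            final String status;     // "Open", "Closed", ...
            final LocalDate opened;

            Defect(int id, String priority, String status, LocalDate opened) {
                this.id = id; this.priority = priority;
                this.status = status; this.opened = opened;
            }

            long ageInDays() {
                return ChronoUnit.DAYS.between(opened, LocalDate.now());
            }
        }

        private static final List<String> ORDER =
            List.of("Critical", "High", "Medium", "Low");

        public static void main(String[] args) {
            List<Defect> defects = new ArrayList<>(List.of(
                new Defect(101, "Critical", "Open",   LocalDate.of(2012, 1, 5)),
                new Defect(102, "Medium",   "Open",   LocalDate.of(2012, 1, 20)),
                new Defect(103, "High",     "Closed", LocalDate.of(2012, 1, 2)),
                new Defect(104, "Critical", "Open",   LocalDate.of(2012, 2, 1))));

            // Keep only open defects; list Critical first, oldest first within
            // a priority, which is the order the triage team reviews them in.
            defects.stream()
                   .filter(d -> d.status.equals("Open"))
                   .sorted(Comparator
                       .comparingInt((Defect d) -> ORDER.indexOf(d.priority))
                       .thenComparing(d -> d.opened))
                   .forEach(d -> System.out.printf("#%d  %-8s  %d days old%n",
                       d.id, d.priority, d.ageInDays()));
        }
    }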

Make sure that, while triaging defects, the team is aware of the product release date and that all actions are taken accordingly. It can be useful to work to a release date a week earlier than the real one as a contingency during the defect triage meeting, to mitigate any last-moment change.

Effects

Overall defect counts start high and go down over the course of the project.

Summary

At this point, the general concept of DDD should be clear. In summary:
  • DDD is not really very different from the process most projects already follow. The difference is primarily in the timing of, and perspective on, using the defect tracking system: the defect tracking system is used from the very beginning of the development process to track features.
  • Clear owners are defined for each deliverable, whether that's an overall feature, code, designs, icons, docs, or something else relating to the project.
  • Because defects are dependent on other defects, owners have a natural reason to talk to colleagues and resolve issues rather than go off on their own.
  • Defects can be refactored into sub-defects as needed in a right-sized manner for the project. Sometimes, a feature can be wholly implemented by one person, so only one defect is needed. But in the cases where multiple people are involved, that can be expressed with clear ownership.
  • The information in the defect tracking system can be mined to create a dashboard of overall project status.
  • Defects filed by customers, QA, and others can be related back to the parent feature. This means that both unimplemented features and defects appear in the same dashboard. This is critical for getting an overall picture of the completeness of the project.
  • Defects can be assigned and reassigned as needed to gather more information, or to transition work from one person to another, even across widely separated time zones and organizational boundaries. The overall project status remains the same, however.


Consistency of implementation

Because features and defects are in the same place, the developer’s primary task list is the set of defects assigned to him. This puts him on the same page as his manager, architect and tech lead; there is no sudden changeover between feature implementation and defect fixing, as they are aspects of the same thing. And because the features in the tracking system really describe constraints and deliverables, engineers are still free to innovate as before. The defect tracking system gives the developer a place to capture his thinking, ask questions, and communicate about what he’s doing beyond a very loose notion of "done" or "not done".

Better engineering and QA relationship

In traditional engineering organizations, the engineering/QA relationship is adversarial in nature, or tends to become that way, because the roles are distinct and responsibility is tightly bounded. Part of this comes from the fact that developers may feel caged by having to work on defects instead of new features, and QA engineers may feel disenfranchised when voicing user concerns that are ignored.
DDD has the potential to remove the adversarial component from this relationship in two ways. First, defects are part of the standard development process from the beginning, rather than coming in near the end. Developers have less reason to react negatively to issues filed there because they have been socialized to it from the beginning of the project, and they are accustomed to the idea of both missing features and defects as contributing to user dissatisfaction.
Second, QA engineers can be much more involved in the ongoing feature work because it's well-documented and there is a means by which to be heard. They can comment, add new features and requirements to the set, and even work on developing features rather than feeling powerless until a feature has been implemented. They can feel that their voice has been heard throughout the process, and so there is less reason for them to have to say, "I told you so" when something is implemented or designed poorly.
By making features transparent to developers, QA, and management, and furthermore by giving a concrete means to discuss features and design openly before completion, DDD can offer a formal channel for user advocacy to flow into the design process, not only from QA but from anyone with access to the defect tracking system.

Advantages

There are numerous advantages to the DDD approach:
  • Development transparency from the beginning of the development process. As features are completed, the defects are closed. As defects are filed, they can be related back to the feature using the dependency mechanism, and the feature can be reopened as a visible indicator of status.
  • Issue and task tracking are automated. The defect tracking systems have excellent methods for assigning defects to developers in an easy-to-track way and notifying interested parties of changes in state.
  • Full organizational involvement. Engineering managers are good at making sure defects get closed; the key to this is giving them something they can track. Furthermore, docs and QA need a way to track their deliverables as they relate to project features. Because anyone can either add themselves as interested parties to a defect, or can file their own defects with dependencies, this becomes much easier.
  • Cross-organizational coordination. When working across teams and geographies with large numbers of features and designs, it becomes imperative to track design decisions and status in a more formal way. Emails are easily lost, and if the right aliases are not used (and archived), it is all too easy for design decisions to remain detached from other organizations.
  • Better design tracking. All features have a unique defect number, and all discussion (or summary of discussion) can be added to one place for all to see. Design discussion will still occur face-to-face and over email, but relevant decisions can be recorded and then easily tracked in the defect.
  • Easy prioritization during triage. All defect tracking systems have the concept of a priority, which can be used to indicate the importance of a feature in the final product.
  • DDD is very effective when used in iterative development. For example, the umbrella defect for each iteration can contain all the tasks which need to be done before that iteration is complete. When a feature task is complete, QA can add blocking defects on the iteration, so that the release iteration will be stable and closed, thus providing a usable set of functionality that has been tested.

Friday, 10 February 2012

Defect Driven Development

I have been getting questions and queries about "Defect Driven Development", and some testing professionals are showing interest in the topic, so I have decided to write up a document on it.

But for the time being, here is some information:


"Defect Driven Development (DDD) sounds similar to TDD (Test Driven Development) which is a major function of Agile. But in fact DDD is not fall under Agile Rule. As per my experience I can say that the main principal of DDD falls under Level 5 of TMMi, which is “Defect Prevention” and “Quality Control”. DD actually comes in the picture when the entire functionality as per requirement has been implemented with 2 rounds of SIT (system Integration testing) and you are in the process of creating Final Candidate Build / Release followed by inviting business users for their Acceptance testing. Just 3-4 weeks before testing a FC Build / Release, you do “Triage” of open defects with the help of Business, Development and Test Managers and prioritise the defects which needs to be fixed before FC Build / Release gets prepared."

Guys... bear with me for a little while. I will publish the complete document on this topic very soon.

Thursday, 9 February 2012

Test automation strategy: Getting started


You’re the QA director or CIO at a growing software organization. You’re sold on the idea that test automation is necessary in order to keep your team on track, provide quick feedback and manage technical debt. But like many software organizations, you have none at all. Where do you get started?
The whole-team approach
The history of test automation is littered with failed “test automation projects.” Test automation sounded like a good idea, so the company bought a vendor GUI test automation tool, gave it to the test team, and waited for the magic to happen. Here’s the news: automating tests is software development. It must be done with the same care and thought that goes into writing production code. And it needs a diversity of people and skills: testers to know the right things to test, customers to provide examples of desired behaviour, programmers to design maintainable tests and more.
Test automation succeeds when the whole development team treats it as an integral part of software development. It’s not any harder or easier than writing production code. Learning to do it well takes time, effort and lots of small experiments.
Change is hard, and people are motivated to change when they feel pain. Start by sharing the pain of manual regression testing. Your product can’t be released to production without regression testing, right? Ask the testers to work with customers to identify the most critical areas of the application. They can then write manual regression testing scripts to ensure those areas work.
Automating the manual regression test scripts provides major motivation for automating tests!
Overcome barriers to automation
Your cross-functional team has all the skills needed to overcome barriers to test automation. Get together for some time-boxed brainstorming meetings to identify the impediments standing in the way of automating tests. In my experience, the best approach is to identify the biggest obstacle, then try a small experiment to get around it.
For example, when my team wanted to try test-driven development (TDD), which involves automating unit-level tests, we couldn’t figure out how to do that with our nasty legacy code. First we tried brown bag sessions at lunch where the whole team tried writing unit tests. Then we brought in an expert to do training on it. That helped, but finally we just had to budget time to get traction on it.
Should you tackle automating unit tests with test-driven development? Would GUI tests be the easy win? It’s helpful to outline a strategy for automating tests.
Plan your automation strategy
There are so many tests to automate and so little time; it pays to focus on the automation that will provide you with the best Return on Investment (ROI). Mike Cohn’s Test Automation Pyramid provides a good guideline.

Unit and component tests verify our code design and architecture, and provide the best return on investment. They are quick to write, quick to run and provide a good safety net as part of a continuous integration process. They form the solid base of just about any test automation strategy. If your team has no test automation, learning to automate unit tests is the obvious place to start.
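To make that concrete, a unit test can be as small as the following JUnit sketch (the ShoppingCart class is a hypothetical stand-in for your own production code, inlined here so the example is self-contained):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ShoppingCartTest {

        // Minimal class under test, inlined to keep the sketch self-contained.
        static class ShoppingCart {
            private double total = 0.0;
            void add(int quantity, double unitPrice) { total += quantity * unitPrice; }
            double total() { return total; }
        }

        // Fast, isolated check of one piece of behaviour: the total reflects
        // every item added to the cart.
        @Test
        public void totalReflectsAddedItems() {
            ShoppingCart cart = new ShoppingCart();
            cart.add(2, 0.50);   // two items at 0.50 each
            cart.add(1, 1.20);
            assertEquals(2.20, cart.total(), 0.001);
        }
    }

Tests like this run in milliseconds, which is what makes them viable as a continuous integration safety net.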
The next-best return on investment is the middle layer of the triangle, at the API or service level. These tests generally take the place of the user interface. They pass test inputs to production code, obtain results, and compare actual and expected results. Writing and maintaining them takes more time than unit tests, and they generally run more slowly, but they provide crucial feedback.
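As a sketch of this middle layer (PriceService and its volume-discount rule are hypothetical), a service-level test feeds inputs directly to the production logic and compares actual against expected results, with no GUI involved:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceServiceTest {

        // Hypothetical service under test; in a real system this logic might
        // sit behind an HTTP or messaging API rather than a local class.
        static class PriceService {
            double quote(String sku, int quantity) {
                double unit = "WIDGET".equals(sku) ? 3.00 : 5.00;
                double discount = (quantity >= 10) ? 0.9 : 1.0;  // 10% volume discount
                return unit * quantity * discount;
            }
        }

        @Test
        public void volumeDiscountAppliesAtTenUnits() {
            PriceService service = new PriceService();
            assertEquals(27.00, service.quote("WIDGET", 10), 0.001);  // 10 * 3.00 * 0.9
        }
    }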
Today’s test libraries and frameworks allow us to automate and maintain GUI tests more cheaply than the old-school record/playback tools. Even so, they run more slowly than unit or API-level tests, and may require more frequent updating.
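For instance, a minimal GUI smoke test with Selenium WebDriver might look like the sketch below; the URL, element names and "Welcome" text are hypothetical placeholders for your own application:

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import static org.junit.Assert.assertTrue;

    public class LoginSmokeTest {

        @Test
        public void userCanLogIn() {
            WebDriver driver = new FirefoxDriver();  // drives a real browser
            try {
                driver.get("https://example.com/login");          // hypothetical URL
                driver.findElement(By.name("username")).sendKeys("testuser");
                driver.findElement(By.name("password")).sendKeys("secret");
                driver.findElement(By.id("loginButton")).click();
                assertTrue(driver.getPageSource().contains("Welcome"));
            } finally {
                driver.quit();  // always release the browser, even on failure
            }
        }
    }

Driving a real browser is exactly why these tests are slower and more fragile than the layers below, so reserve them for the most critical user journeys.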
In my experience, a two-pronged approach to start automating tests on a legacy system works well. While the team masters TDD and grows its unit test library, you can get some quick wins with simple GUI tests covering the most critical functionality.
The test automation pyramid represents our long-term goal, but you won’t get there overnight. Most teams start with an “upside-down” pyramid, more GUI tests than anything else. You can “flip” the pyramid over time.
Choosing tools
We don’t have a wide array of tools at the unit test level; you just use the flavor of xUnit that goes with your production programming language, such as JUnit for Java or NUnit for .Net.
For API and GUI level tests, start by experimenting with how you’d like to specify your tests. Does a given/when/then format work well for your business experts? Perhaps your business domain lends itself to a tabular test format. Or you might have a domain where specifying inputs and expected outputs in a spreadsheet works better. Some businesses prefer working with a time-based scenario.
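One lightweight way to trial the given/when/then format before committing to a tool is simply to structure plain JUnit tests around it, as in this sketch (the Account class is hypothetical):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AccountWithdrawalTest {

        // Minimal class under test, inlined to keep the sketch self-contained.
        static class Account {
            private double balance;
            Account(double opening) { balance = opening; }
            void withdraw(double amount) { balance -= amount; }
            double balance() { return balance; }
        }

        @Test
        public void withdrawalReducesBalance() {
            // Given an account with a balance of 100.00
            Account account = new Account(100.00);

            // When the holder withdraws 30.00
            account.withdraw(30.00);

            // Then the balance is 70.00
            assertEquals(70.00, account.balance(), 0.001);
        }
    }

If your business experts read tests in this shape comfortably, a framework that makes the format first-class may be a good fit.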
Once you’ve found a format to try, look for a test library or framework that supports that. If your team has enough bandwidth, you might be able to “grow your own.” Use retrospectives to evaluate whether your experiment is having good results. If not, start a different experiment. This is a big investment. The right choices now will mean big returns in the long term.
Collaborating
Automating tests is coding. Automated test code deserves the same respect, care and feeding as production code. It makes sense for the people writing production code to also write the test automation code.
Testers are expert at knowing the right tests to automate. Other team members contribute expertise that helps us get timely feedback, with data and scenarios that represent production. It just makes sense to have everyone on the team work together to automate tests.
Get started
Pick one problem to solve, and do a small experiment to overcome it. If automating unit tests seems too daunting, try “defect-driven development”. For each defect that your team fixes, first write a unit test to reproduce the problem, and then correct the code. Once the test passes, check in the test and the code.
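A minimal sketch of that rhythm, assuming a hypothetical defect report that withdrawing an account’s entire balance was wrongly rejected: the test is written first to reproduce the report, and it passes once the guard is corrected.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class WithdrawalDefectTest {

        // Production code after the fix, inlined for the example. The
        // hypothetical defect: the guard was written as 'amount >= balance',
        // so withdrawing the full balance was refused.
        static class Account {
            private double balance;
            Account(double opening) { balance = opening; }
            boolean withdraw(double amount) {
                if (amount > balance) return false;  // corrected guard
                balance -= amount;
                return true;
            }
        }

        // Written first to reproduce the defect report; checked in together
        // with the fix once it passes.
        @Test
        public void canWithdrawFullBalance() {
            Account account = new Account(50.00);
            assertTrue(account.withdraw(50.00));
        }
    }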
This is a journey, not a destination. Just take that first step.
What are some of the obstacles to beginning test automation in your organization, and what approaches have you tried to address them? Email comments to sanjay_kumar@hotmail.com