Wednesday, 3 February 2016

Test Automation Frameworks

"When developing our test strategy, we must minimize the impact caused by changes in the applications we are testing, and changes in the tools we use to test them."
--Carl J. Nagle
1.1 Thinking Past "The Project"
In today’s environment of plummeting cycle times, test automation becomes an increasingly critical and strategic necessity. Assuming the level of testing in the past was sufficient (which is rarely the case), how do we possibly keep up with this new explosive pace of web-enabled deployment while retaining satisfactory test coverage and reducing risk? The answer is either more people for manual testing, or a greater level of test automation. After all, a reduction in project cycle times generally correlates to a reduction of time for test.
With the onset of, and demand for, rapidly developed and deployed web clients, test automation is even more crucial. Add to this the cold, hard reality that we are often facing more than one active project at a time. For example, perhaps the team is finishing up Version 1.0, adding the needed new features to Version 1.1, and prototyping some new technologies for Version 2.0!
Worse still, maybe our test team is actually a pool of testers supporting many diverse applications completely unrelated to each other. If each project implements a unique test strategy, then testers moving among different projects can be more of a hindrance than a help. The time needed for the tester to become productive in the new environment just may not be there. And it will surely detract from the productivity of those bringing the new tester up to speed.
To handle this chaos we have to think past the project. We cannot afford to engineer or reengineer automation frameworks for each and every new application that comes along. We must strive to develop a single framework that will grow and continuously improve with each application and every diverse project that challenges us. We will see the advantages and disadvantages of these different approaches in Section 1.2.

1.1.1 Problems with Test Automation
Historically, test automation has not met with the level of success it could have. Time and again test automation efforts are born, stumble, and die. Most often this is the result of misconceptions about the effort and resources necessary to implement a successful, long-lasting automation framework. Why is this, we might ask? Well, there are several reasons.
Foremost among this list is that automation tool vendors do not provide completely forthright demonstrations when showcasing the "simplicity" of their tools. We have seen the vendors’ sample applications. We have seen the tools play nice with those applications. And we try to get the tools to play nice with our applications just as fluently. Invariably, project after project, we do not achieve the same level of success.
This usually boils down to the fact that our applications most often contain elements that are not compatible with the tools we use to test them. Consequently, we must often mastermind technically creative solutions to make these automation tools work with our applications. Yet, this is rarely ever mentioned in the literature or the sales pitch.
The commercial automation tools have been chiefly marketed as solutions for testing an application. They should instead be seen as the cornerstone of an enterprise-wide test automation framework. And, while virtually all of the automation tools contain some scripting language allowing us to get past each tool’s failings, testers have typically neither held the development experience nor received the training necessary to exploit these programming environments.

"For the most part, testers have been testers, not programmers. Consequently, the ‘simple’ commercial solutions have been far too complex to implement and maintain; and they become shelfware."

Most unfortunate of all, otherwise fully capable testers are seldom given the time required to gain the appropriate software development skills. For the most part, testers have been testers, not programmers. Consequently, the "simple" commercial solutions have been far too complex to implement and maintain; and they become shelfware.
Test automation must be approached as a full-blown software development effort in its own right. Without this, it is most likely destined to failure in the long term.

Case Study: Costly Automation Failures
In 1996, one large corporation set out evaluating the various commercial automation tools that were available at that time. They brought in eager technical sales staff from the various vendors, watched demonstrations, and performed some fairly thorough internal evaluations of each tool.
By 1998, they had chosen one particular vendor and placed an initial order for over $250,000 worth of product licensing, maintenance contracts, and onsite training. The tools and training were distributed throughout the company into various test departments--each working on their own projects.
None of these test projects had anything in common. The applications were vastly different. The projects each had individual schedules and deadlines to meet. Yet, every one of these departments began separately coding functionally identical common libraries. They made routines for setting up the Windows test environment. They each made routines for accessing the Windows programming interface. They made file-handling routines, string utilities, database access routines--the list of code duplication was disheartening!
For their test designs, they each captured application specific interactive tests using the capture\replay tools. Some groups went the next step and modularized key reusable sections, creating reusable libraries of application-specific test functions or scenarios. This was to reduce the amount of code duplication and maintenance that so profusely occurs in pure captured test scripts. For some of the projects, this might have been appropriate if done with sufficient planning and an appropriate automation framework. But this was seldom the case.
With all these modularized libraries testers could create functional automated tests in the automation tool’s proprietary scripting language via a combination of interactive test capture, manual editing, and manual scripting.
One problem was, as separate test teams they did not think past their own individual projects. And although they were each setting up something of a reusable framework, each was completely unique--even where the common library functions were the same! This meant duplicate development, duplicate debugging, and duplicate maintenance. Understandably, each separate project still had looming deadlines, and each was forced to limit their automation efforts in order to get real testing done.
As changes to the various applications began breaking automated tests, script maintenance and debugging became a significant challenge. Additionally, upgrades in the automation tools themselves caused significant and unexpected script failures. In some cases, it was necessary to revert (downgrade) to older versions of the automation tools. Resource allocation for continued test development and test code maintenance became a difficult issue.
Eventually, most of these automation projects were put on hold. By the end of 1999--less than two years from the inception of this large-scale automation effort--over 75% of the test automation tools were back on the shelves waiting for a new chance to try again at some later date.

1.1.2 Some Test Strategy Guidelines
Past failings like these have been lessons for the entire testing community. The need for reusable test strategies is no different from the reusability concerns of any good application development project. As we set out on our task of automating test, we must keep these past lessons at the forefront.
In order to make the most of our test strategy, we need to make it reusable and manageable. To that end, there are some essential guiding principles we should follow when developing our overall test strategy:
  • Test automation is a fulltime effort, not a sideline.
  • The test design and the test framework are totally separate entities.
  • The test framework should be application-independent.
  • The test framework must be easy to expand, maintain, and perpetuate.
  • The test strategy/design vocabulary should be framework independent.
  • The test strategy/design should remove most testers from the complexities of the test framework.
These ideals are not earth-shattering. Nor are they particularly new. Yet these principles are seldom fully understood and implemented.
So what do they mean?

1.1.3 Test automation is a fulltime effort, not a sideline.
While not necessarily a typical design criterion, it bears repeating. The test framework design and the coding of that design together require significant front-loaded time and effort. These are not things that someone can do with a little extra time here or there, or between projects. The test framework must be well thought out. It must be documented. It should be reviewed. It should be tested. It is a full software development project like any other. This bears repeating--again.
Will our test framework development have all of these wonderful documentation, design, review, and test processes? Does our application development team?
We should continuously push for both endeavors to implement all these critical practices.

1.1.4 The test design and the test framework are totally separate entities.
The test design details how the particular functions and features of our application will be tested. It will tell us what to do, how and when to do it, what data to use as input, and what results we expect to find. All of this is specific to the particular application or item being tested. Little of this requires any knowledge or care of whether the application will be tested automatically or manually. It is, essentially, the "how to" of what needs to be tested in the application.
On the other hand, the test framework, or specifically, the test automation framework is an execution environment for automated tests. It is the overall system in which our tests will be automated. The development of this framework requires completely different technical skills than those needed for test design.

1.1.5 The test framework should be application-independent.
Although applications are relatively unique, the components that comprise them, in general, are not. Thus, we should focus our automation framework to deal with the common components that make up our unique applications. By doing this, we can remove all application-specific context from our framework and reuse virtually everything we develop for every application that comes through the automated test process.

"We should focus our automation framework to deal with the common components that make up our unique applications."

Nearly all applications come with some form of menu system. They also have buttons to push, boxes to check, lists to view, and so on. In a typical automation tool script there is, generally, a very small number of component functions for each type of component. These functions work with the component objects independent of the applications that contain them.
Traditional, captured automation scripts are filled with thousands of calls to these component functions. So the tools already exist to achieve application independence. The problem is, most of these scripts construct the function calls using application-specific, hard-coded values. This immediately reduces their effectiveness as application-independent constructs. Furthermore, the functions by themselves are prone to failure unless a very specific application state or synchronization exists at the time they are executed. There is little error correction or prevention built into these functions.
To deal with this in traditional scripts we must place additional code before and\or after the command, or a set of commands, to insure the proper application state and synchronization is maintained. We need to make sure our window has the current focus. We need to make sure the component we want to select, or press, or edit exists and is in the proper state. Only then can we perform the desired operation and separately verify the result of our actions.
For maximum robustness, we would have to code these state and synchronization tests for every component function call in our scripts. Realistically, we could never afford to do this. It would make the scripts huge, nearly unreadable, and difficult to maintain. Yet, where we forego this extra effort, we increase the possibility of script failure.
What we must do is develop a truly application-independent framework for these component functions. This will allow us to implement that extra effort just once, and execute it for every call to any component function. This framework should handle all the details of insuring we have the correct window, verifying the element of interest is in the proper state, doing something with that element, and logging the success or failure of the entire activity.
We do this by using variables, and providing application-specific data to our application-independent framework. In essence, we will provide our completed test designs as executable input into our automation framework.
Does this mean that we will never have to develop application-specific test scripts? Of course not. However, if we can limit our application-specific test scripts to some small percentage, while reusing the best features of our automation framework, we will reap the rewards project after project.
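
To make this concrete, the sketch below shows the shape of such a wrapper in Python. This is a minimal sketch of the idea only: the tool and log objects are hypothetical stand-ins for the automation tool's interface and the framework's logging utility, and none of the method names come from a real product.

def perform_action(tool, log, window, component, action, *args):
    """Wrap every component action with the state and synchronization
    checks we would otherwise have to hand-code at each call site."""
    # Make sure the window exists and has the current focus.
    if not tool.exists(window):
        log.failed(f"Window '{window}' not found")
        return False
    tool.set_focus(window)

    # Make sure the component exists and is in a usable state.
    if not tool.exists(window, component):
        log.failed(f"'{component}' not found in '{window}'")
        return False
    if not tool.is_enabled(window, component):
        log.failed(f"'{component}' is not in a usable state")
        return False

    # Only now perform the requested operation, and log the outcome.
    ok = action(tool, window, component, *args)
    (log.passed if ok else log.failed)(
        f"{action.__name__} on '{component}' in '{window}'")
    return ok

Written once in the framework, this check-act-log sequence executes for every component function call without bloating the individual tests.
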

1.1.6 The test framework must be easy to expand, maintain, and perpetuate.
One of our goals should be a highly modular and maintainable framework. Generally, each module should be independent and separate from all the other modules. What happens inside one is of no concern to the others.
With this modular black-box approach, the functionality available within each module can be readily expanded without affecting any other part of the system. This makes code maintenance much simpler. Additionally, the complexity of any one module will likely be quite minimal.
However, modularity alone will not be enough to ensure a highly maintainable framework. Like any good software project, our design must be fully documented and published. Without adequate, published documentation it will be very difficult for anyone to decipher what it is the framework is designed to do. Any hope of maintenance will not last far beyond the departure of the original framework designers. Our test automation efforts will eventually become another negative statistic.
To prevent this, we should define documentation standards and templates. Wherever possible, module documentation should be developed "in-context". That is, directly in the source code itself. Tools should be acquired, or designed and developed, so that we can automatically extract and publish the documentation. This will eliminate the task of maintaining two separate sets of files: the source code, and its documentation. It will also provide those doing the code maintenance a ready reference. Nearly everything they need to know should exist right there in the code.

"We must always remember: our ultimate goal is to simplify and perpetuate a successful automation framework."

We must always remember: our ultimate goal is to simplify and perpetuate a successful test automation framework. To put something in place that people will use and reuse for as long as it is technically viable and productive.

1.1.7 The test strategy/design vocabulary should be framework independent.
As noted before, the framework refers to the overall environment we construct to execute our tests. The centerpiece is usually one of many commercially available automation tools. In good time, it may be more than one. In some rare circumstances, it might even be a proprietary tool developed or contracted specifically for our test automation needs.
The point is, different tools exist and some will work better for us than others in certain situations. While one tool might have worked best with our Visual Basic or C/C++ applications, we may need to use a different tool for our web clients. By keeping a specific tool consideration out of our test designs, we avoid limiting our tests to that tool alone.
The overall test strategy will define the format and low-level vocabulary we use to test all applications much like an automation tool defines the format and syntax of the scripting language it provides. Our vocabulary, however, will be independent of any particular test framework employed. The same vocabulary will migrate with us from framework to framework, and application to application. This means, for example, the syntax used to click a button will be the same regardless of the tool we use to execute the instruction or the application that contains the button.
The test design for a particular application, however, will define a high-level vocabulary that is specific to that application. While this high-level vocabulary will be application specific, it is still independent of the test framework used to execute it. This means that the high-level instruction to login to our website with a particular user ID and password will be the same regardless of the tool we use to execute it.
When we provide all the instructions necessary to test a particular application, we should be able to use the exact same instructions on any number of different framework implementations capable of testing that application. We must also consider the very likely scenario that some or all of this testing may, at one time or another, be manual testing. This means that our overall test strategy should not only facilitate test automation, it should also support manual testing.
Consequently, the format and vocabulary we use to test our applications should be intuitive enough for mere mortals to comprehend and execute. We should be able to hand our test over to a person, point to an area that failed, and that person should be able to manually reproduce the steps necessary to duplicate the failure.

"A good test strategy can remove the necessity for both manual and automated test scripts. The same ‘script’ should suffice for both."

A good test strategy, comprised of our test designs and our test framework, can remove the necessity for both manual and automated test scripts for the same test. The same "script" should suffice for both. The important thing is that the vocabulary is independent of the framework used to execute it. And the test strategy must also accommodate manual testing.

1.1.8 The test strategy/design should remove most testers from the complexities of the test framework.
In practice, we cannot expect all our test personnel to become proficient in the use of the automation tools we use in our test framework. In some cases, this is not even an option worth considering. Remember, generally, testers are testers--they are not programmers. Sometimes our testers are not even professional testers. Sometimes they are application domain experts with little or no use for the technical skills needed for software development.
Sometimes testers are application developers splitting time between development and test. And when application developers step in to perform testing roles, they do not want or need a complex test scripting language to learn. That is what you get with commercial automation tools. And that may even be counter-productive and promote confusion since some of these scripting languages are modified subsets of standard programming languages. Others are completely unique and proprietary.
Yet, with the appropriate test strategy and vocabulary as discussed in the previous section, there is no reason we should not be able to use all our test resources to design tests suitable for automation without knowing anything about the automation tools we plan to deploy.
The bulk of our testers can concentrate on test design, and test design only. It is the automation framework folks who will focus on the tools and utilities to automate those tests.

1.2 Data Driven Automation Frameworks
Over the past several years there have been numerous articles written on various approaches to test automation. Anyone who has read a fair, unbiased sampling of these knows that we cannot and must not expect pure capture and replay of test scripts to be successful for the life of a product. We will find nothing but frustration there.
Sometimes this manifesto is hard to explain to people who have not yet performed significant test automation with these capture\replay tools. But it usually takes less than a week, often less than a day, to hear the most repeated phrase: "It worked when I recorded it, but now it fails when I play it back!"
Obviously, we are not going to get there from here.

1.2.1 Data Driven Scripts
Data driven scripts are those application-specific scripts captured or manually coded in the automation tool’s proprietary language and then modified to accommodate variable data. Variables will be used for key application input fields and program selections allowing the script to drive the application with external data supplied by the calling routine or the shell that invoked the test script.
Variable Data, Hard Coded Component Identification: 
These data driven scripts often still contain the hard coded and sometimes very fragile recognition strings for the window components they navigate. When this is the case, the scripts are easily broken when an application change or revision occurs. And when these scripts start breaking, we are not necessarily talking about just a few. We are sometimes talking about a great many, if not all the scripts, for the entire application.
Figure 1 is an example of activating a server-side image map link in a web application with an automation tool scripting language:

Image Click "DocumentTitle=Welcome;\;ImageIndex=1" "Coords=25,20"

Figure 1
This particular scenario of clicking on the image map might exist thousands of times throughout all the scripts that test this application. The preceding example identifies the image by the title given to the document and the index of the image on the page. The hard coded image identification might work successfully all the way through the production release of that version of the application. Consequently, testers responsible for the automated test scripts may gain a false sense of security and satisfaction with these results.
However, the next release cycle may find some or all of these scripts broken because either the title of the document or the index of the image has changed. Sometimes, with the right tools, this might not be too hard to fix. Sometimes, no matter what tools, it will be frustratingly difficult.
Remember, we are potentially talking about thousands of broken lines of test script code. And this is just one particular change. Where there is one, there will likely be others.
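
As a rough illustration of the pattern (in Python rather than any tool's proprietary language), a data driven script might look like the sketch below. The tool methods and the data file name are invented for the example; the point is that the recognition strings remain hard coded in the script even though the input data is variable.

import csv

def run_login_tests(tool):
    # Each CSV row supplies the variable data: user ID, password, and
    # the window caption we expect after a successful login.
    with open("login_data.csv", newline="") as f:
        for user_id, password, expected in csv.reader(f):
            # The recognition strings below are hard coded; if the page
            # title or field indexes change, every script like this breaks.
            tool.input_text("DocumentTitle=Login;\\;TextIndex=1", user_id)
            tool.input_text("DocumentTitle=Login;\\;TextIndex=2", password)
            tool.click("DocumentTitle=Login;\\;ButtonText=Submit")
            tool.verify_caption("WindowTag=WebBrowser", expected)
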
Highly Technical or Duplicate Test Designs:
Another common feature of data driven scripts is that virtually all of the test design effort for the application is developed in the scripting language of the automation tool. Either that, or it is duplicated in both manual and automated script versions. This means that everyone involved with automated test development or automated test execution for the application must likely become proficient in the environment and programming language of the automation tool.
Findings:
A test automation framework relying on data driven scripts is definitely the easiest and quickest to implement if you have and keep the technical staff to maintain it. But it is the hardest of the data driven approaches to maintain and perpetuate and very often leads to long-term failure.

1.2.2 Keyword or Table Driven Test Automation
Nearly everything discussed so far defining our ideal automation framework has been describing the best features of "keyword driven" test automation. Sometimes this is also called "table driven" test automation. It is typically an application-independent automation framework designed to process our tests. These tests are developed as data tables using a keyword vocabulary that is independent of the test automation tool used to execute them. This keyword vocabulary should also be suitable for manual testing, as you will soon see.
Action, Input Data, and Expected Result ALL in One Record:
The data table records contain the keywords that describe the actions we want to perform. They also provide any additional data needed as input to the application, and where appropriate, the benchmark information we use to verify the state of our components and the application in general.
For example, to verify the value of a user ID textbox on a login page, we might have a data table record as seen in Table 1:

WINDOW    | COMPONENT     | ACTION      | EXPECTED VALUE
LoginPage | UserIDTextbox | VerifyValue | "MyUserID"

Table 1
Reusable Code, Error Correction and Synchronization:
Application-independent component functions are developed that accept application-specific variable data. Once these component functions exist, they can be used on each and every application we choose to test with the framework.
Figure 2 presents pseudo-code that would interpret the data table record from Table 1 and Table 2. In our design, the primary loop reads a record from the data table, performs some high-level validation on it, sets focus on the proper object for the instruction, and then routes the complete record to the appropriate component function for full processing. The component function is responsible for determining what action is being requested, and to further route the record based on the action.

Framework Pseudo-Code
Primary Record Processor Module:
   Verify "LoginPage" Exists. (Attempt recovery if not)
   Set focus to "LoginPage".
   Verify "UserIDTextbox" Exists. (Attempt recovery if not)
   Find "Type" of component "UserIDTextbox". (It is a Textbox)
   Call the module that processes ALL Textbox components.
Textbox Component Module:
   Validate the action keyword "VerifyValue".
   Call the Textbox.VerifyValue function.
Textbox.VerifyValue Function:
   Get the text stored in the "UserIDTextbox" Textbox.
   Compare the retrieved text to "MyUserID".
   Record our success or failure.
Figure 2
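
A rough Python rendering of this pseudo-code follows. The record layout matches Table 1, and tool and log remain hypothetical stand-ins for the automation tool and the framework's logging utility.

def textbox_module(tool, log, window, component, action, expected):
    # Textbox Component Module: validate the action keyword, then route.
    if action == "VerifyValue":
        # Textbox.VerifyValue Function: get, compare, and record.
        actual = tool.get_text(window, component)
        if actual == expected:
            log.passed(f"{component} = {expected}")
        else:
            log.failed(f"{component} was '{actual}', expected '{expected}'")
    else:
        log.failed(f"Unknown Textbox action '{action}'")

COMPONENT_MODULES = {"Textbox": textbox_module}

def process_record(tool, log, record):
    # Primary Record Processor: verify, focus, type-check, and route.
    window, component, action, expected = record
    if not tool.exists(window):                  # attempt recovery if not
        return log.failed(f"Window '{window}' not found")
    tool.set_focus(window)
    if not tool.exists(window, component):       # attempt recovery if not
        return log.failed(f"'{component}' not found in '{window}'")
    comp_type = tool.get_type(window, component)     # e.g. "Textbox"
    module = COMPONENT_MODULES.get(comp_type)
    if module is None:
        return log.failed(f"No module for component type '{comp_type}'")
    module(tool, log, window, component, action, expected)

# process_record(tool, log,
#                ["LoginPage", "UserIDTextbox", "VerifyValue", "MyUserID"])
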
Test Design for Man and Machine, With or Without the Application:
Table 2 reiterates the actual data table record run by the automation framework above:

WINDOW    | COMPONENT     | ACTION      | EXPECTED VALUE
LoginPage | UserIDTextbox | VerifyValue | "MyUserID"

Table 2
Note how the record uses a vocabulary that can be processed by both man and machine. With minimal training, a human tester can understand the record instruction, as deciphered in Figure 3:
On the LoginPage, in the UserIDTextbox,
Verify the Value is "MyUserID".
Figure 3
Once they learn or can reference this simple vocabulary, testers can start designing tests without knowing anything about the automation tool used to execute them.
Another advantage of the keyword driven approach is that testers can develop tests without a functioning application as long as preliminary requirements or designs can be determined. All the tester needs is a fairly reliable definition of what the interface and functional flow is expected to be like. From this they can write most, if not all, of the data table test records.
Sometimes it is hard to convince people that this advantage is realizable. Yet, take our login example from Table 2 and Figure 3. We do not need the application to construct any login tests. All we have to know is that we will have a login form of some kind that will accept a user ID, a password, and contain a button or two to submit or cancel the request. A quick discussion with development can confirm or modify our determinations. We can then complete the test table and move on to another.
We can develop other tests similarly for any part of the product for which we can receive or deduce reliable information. In fact, if in such a position, testers can actually help guide the development of the UI and flow, providing developers with upfront input on how users might expect the product to function. And since the test vocabulary we use is suitable for both manual and automated execution, designed testing can commence immediately once the application becomes available.
It is, perhaps, important to note that this does not suggest that these tests can be executed automatically as soon as the application becomes available. The test record in Table 2 may be perfectly understood and executable by a person, but the automation framework knows nothing about the objects in this record until we can provide that additional information. That is a separate piece of the framework we will learn about when we discuss application mapping.
Findings:
The keyword driven automation framework is initially the hardest and most time-consuming data driven approach to implement. After all, we are trying to fully insulate our tests from both the many failings of the automation tools, as well as changes to the application itself.
To accomplish this, we are essentially writing enhancements to many of the component functions already provided by the automation tool: such as error correction, prevention, and enhanced synchronization.
Fortunately, this heavy, initial investment is mostly a one-shot deal. Once in place, keyword driven automation is arguably the easiest of the data driven frameworks to maintain and perpetuate providing the greatest potential for long-term success.
Additionally, there may now be commercial products suitable for your needs that can decrease, but not eliminate, much of the up-front technical burden of implementing such a framework. This was not the case just a few years ago. We will briefly discuss a couple of these in Section 1.2.4.

1.2.3 Hybrid Test Automation (or, "All of the Above")
The most successful automation frameworks generally accommodate both keyword driven testing as well as data driven scripts. This allows data driven scripts to take advantage of the powerful libraries and utilities that usually accompany a keyword driven architecture.
The framework utilities can make the data driven scripts more compact and less prone to failure than they otherwise would have been. The utilities can also facilitate the gradual and manageable conversion of existing scripts to keyword driven equivalents when and where that appears desirable.
On the other hand, the framework can use scripts to perform some tasks that might be too difficult to re-implement in a pure keyword driven approach, or where the keyword driven capabilities are not yet in place.

1.2.4 Commercial Keyword Driven Frameworks
Some commercially available keyword driven frameworks are making inroads in the test automation markets. These generally come from 3rd party companies as a bridge between your application and the automation tools you intend to deploy. They are not out-of-the-box, turnkey automation solutions just as the capture\replay tools are not turnkey solutions.
They still require some up-front investment of time and personnel to complete the bridge between the application and the automation tools, but they can give some automation departments and professionals a huge jumpstart in the right direction for successful long-term test automation.
Two particular products to note are the TestFrame™ product led by Hans Buwalda of CMG Corp, and the Certify™ product developed with Linda Hayes of WorkSoft Inc. These products each implement their own version of a keyword driven framework and have served as models for the subject at international software testing conferences, training courses, and user-group discussions worldwide. I’m sure there are others.
It really is up to the individual enterprise to evaluate if any of the commercial solutions are suitable for their needs. This will be based not only on the capabilities of the tools evaluated, but also on how readily they can be modified and expanded to accommodate your current and projected capability requirements.

1.3 Keyword Driven Automation Framework Model
The following automation framework model is the result of over 18 months of planning, design, coding, and sometimes trial and error. That is not to say that it took 18 months to get it working--it was actually a working prototype at around 3 person-months. Specifically, one person working on it for 3 months!
The model focuses on implementing a keyword driven automation framework. It does not include any additional features like tracking requirements or providing traceability between automated test results and any other function of the test process. It merely provides a model for a keyword driven execution engine for automated tests.
The commercially available frameworks generally have many more features and much broader scope. Of course, they also have the price tag to reflect this.

1.3.1 Project Guidelines
The project was informally tasked to follow the guidelines or practices below:
  • Implement a test strategy that will allow reasonably intuitive tests to be developed and executed both manually and via the automation framework.
  • The test strategy will allow each test to include the step to perform, the input data to use, and the expected result all together in one line or record of the input source.
  • Implement a framework that will integrate keyword driven testing and traditional scripts, allowing both to benefit from the implementation.
  • Implement the framework to be completely application-independent since it will need to test at least 4 or 5 different applications once deployed.
  • The framework will be fully documented and published.
  • The framework will be publicly shared on the intranet for others to use and eventually (hopefully) co-develop.

1.3.2 Code and Documentation Standards
The first thing we did was to define standards for source code files and headers that would provide for in-context documentation intended for publication. This included standards for how we would use headers and what type of information would go into them.
Each source file would start with a structured block of documentation describing the purpose of the module. Each function or subroutine would likewise have a leading documentation block describing the routine, its arguments, possible return codes, and any errors it might generate. Similar standards were developed for documenting the constants, variables, dependencies, and other features of the modules.
We then developed a tool that would extract and publish the documentation in HTML format directly from the source and header files. We did this to minimize synchronization problems between the source code and the documentation, and it has worked very well.
It is beyond the scope of this work to illustrate how this is done. In order to produce a single HTML document we parse the source file and that source file’s primary headers. We format and link public declarations from the headers to the detailed documentation in the source as well as link to any external references for other documentation. We also format and group public constants, properties or variables, and user-defined types into the appropriate sections of the HTML publication.
One nice feature about this is that the HTML publishing tool is made to identify the appropriate documentation blocks and include them pretty much "as is". This enables the inclusion of HTML tags within the source documentation blocks that will be properly interpreted by a browser. Thus, for publication purposes, we can include images or other HTML elements by embedding the proper tags.
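
A toy extractor in Python gives the flavor of the approach. Our actual standards used richer structured headers; the "#**" and "#*" block markers and the output layout below are invented for this sketch. Note that blocks are emitted "as is", which is what lets embedded HTML tags survive into the published page.

def extract_doc_blocks(path):
    """Collect documentation blocks delimited by '#**' and '#*' markers."""
    blocks, current = [], None
    for line in open(path, encoding="utf-8"):
        if line.startswith("#**"):                # start of a doc block
            current = []
        elif line.startswith("#*") and current is not None:
            blocks.append("\n".join(current))     # end of the doc block
            current = None
        elif current is not None:
            current.append(line.lstrip("# ").rstrip())
    return blocks

def publish(source, out="module_doc.html"):
    # Blocks are included "as is", so embedded HTML renders in the browser.
    body = "\n".join(f"<div class='doc'>{block}</div>"
                     for block in extract_doc_blocks(source))
    with open(out, "w", encoding="utf-8") as f:
        f.write(f"<html><body>\n{body}\n</body></html>")
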

1.3.3 Our Automation Framework
Figure 4 is a diagram representing the design of our automation framework. It is followed by a description of each of the elements within the framework and how they interact. Some readers may recognize portions of this design. It is a compilation of keyword driven automation concepts from several sources. These include Linda Hayes with WorkSoft, Ed Kit from Software Development Technologies, Hans Buwalda from CMG Corp, myself, and a few others.

Automation Framework Design (diagram)

Figure 4


In brief, the framework itself is really defined by the Core Data Driven Engine, the Component Functions, and the Support Libraries. While the Support Libraries provide generic routines useful even outside the context of a keyword driven framework, the core engine and Component Functions are highly dependent on the existence of all three elements.
The test execution starts with the LAUNCH TEST(1) script. This script invokes the Core Data Driven Engine by providing one or more High-Level Test Tables to CycleDriver(2). CycleDriver processes these test tables invoking the SuiteDriver(3) for each Intermediate-Level Test Table it encounters. SuiteDriver processes these intermediate-level tables invoking StepDriver(4) for each Low-Level Test Table it encounters. As StepDriver processes these low-level tables it attempts to keep the application in synch with the test. When StepDriver encounters a low-level command for a specific component, it determines what Type of component is involved and invokes the corresponding Component Function(5) module to handle the task.
All of these elements rely on the information provided in the App Map to interface or bridge the automation framework with the application being tested. Each of these elements will be described in more detail in the following sections.

1.3.4 The Application Map:
The Application Map is one of the most critical items in our framework. It is how we map our objects from names we humans can recognize to a data format useful for the automation tool. The testers for a given project will define a naming convention or specific names for each component in each window as well as a name for the window itself. We then use the Application Map to associate that name to the identification method needed by the automation tool to locate and properly manipulate the correct object in the window.
Not only does it give us the ability to provide useful names for our objects, it also enables our scripts and keyword driven tests to have a single point of maintenance on our object identification strings. Thus, if a new version of an application changes the title of our web page or the index of an image element within it, those changes should not affect our test tables. They will require only a quick modification in one place--inside the Application Map.
Figure 5 shows a simple HTML page used in one frame of a HTML frameset. Table 3 shows the object identification methods for this page for an automation tool. This illustrates how the tool’s recorded scripts might identify multiple images in the header frame or top frame of a multi-frame web page. This top frame contains the HTML document with four images used to navigate the site. Notice that these identification methods are literal strings and potentially appear many times in traditional scripts (a maintenance nightmare!):

Simple HTML Document With Four Images (HTML image maps)

Figure 5

Script referencing HTML Document Components with Literal Strings
OBJECT   | IDENTIFICATION METHOD
Window   | "WindowTag=WebBrowser"
Frame    | "FrameID=top"
Document | "FrameID=top;\;DocumentTitle=topFrame"
Image    | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=1"
Image    | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=2"
Image    | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=3"
Image    | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=4"

Table 3

This particular web page is simple enough. It contains only four images. However, when we look at Table 3, how do we determine which image is for Product information, and which is for Services? We should not assume they are in any particular order based upon how they are presented visually. Consequently, someone trying to decipher or maintain scripts containing these identification strings can easily get confused.
An Application Map will give these elements useful names, and provide our single point of maintenance for the identification strings as shown in Table 4. The Application Map can be implemented in text files, spreadsheet tables, or your favorite database table format. The Support Libraries just have to be able to extract and cache the information for when it is needed.

An Application Map Provides Named References for Components
REFERENCE     | IDENTIFICATION METHOD
Browser       | "WindowTag=WebBrowser"
TopFrame      | "FrameID=top"
TopPage       | "FrameID=top;\;DocumentTitle=topFrame"
CompInfoImage | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=1"
ProductsImage | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=2"
ServicesImage | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=3"
SiteMapImage  | "FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=4"

Table 4

With the preceding definitions in place, the same scripts can use variables with values from the Application Map instead of those string literals. Our scripts can now reference these image elements as shown in Table 5. This reduces the chance of failure caused by changes in the application and provides a single point of maintenance in the Application Map for the identification strings used throughout our tests. It can also make our scripts easier to read and understand.

Script Using Variable References Instead of Literal Strings
OBJECT   | REFERENCE
Window   | Browser
Frame    | TopFrame
Document | TopPage
Image    | CompInfoImage
Image    | ProductsImage
Image    | ServicesImage
Image    | SiteMapImage

Table 5
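
One plausible implementation of the lookup, assuming the map is kept in an INI-style text file with one section per window (the file layout and the class below are our own invention, not part of any tool):

from configparser import ConfigParser

# site.map might contain, for example:
#
#   [TopPage]
#   TopPage       = FrameID=top;\;DocumentTitle=topFrame
#   ProductsImage = FrameID=top;\;DocumentTitle=topFrame;\;ImageIndex=2

class ApplicationMap:
    def __init__(self, path):
        self._map = ConfigParser(interpolation=None)
        self._map.optionxform = str          # keep names case-sensitive
        self._map.read(path)

    def lookup(self, window, component=None):
        """Resolve a human-readable name to the tool's recognition string."""
        return self._map[window][component or window]

# app_map = ApplicationMap("site.map")
# app_map.lookup("TopPage", "ProductsImage")

Because every script and test table resolves names through a lookup like this, a changed recognition string is corrected once, in the map file.
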

1.3.5 Component Functions:
Component Functions are those functions that actively manipulate or interrogate component objects. In our automation framework we will have a different Component Function module for each Type of component we encounter (Window, CheckBox, TextBox, Image, Link, etc.).
Our Component Function modules are the application-independent extensions we apply to the functions already provided by the automation tool. However, unlike those provided by the tool, we add the extra code to help with error detection, error correction, and synchronization. We also write these modules to readily use our application-specific data stored in the Application Map and test tables as necessary. In this way, we only have to develop these Component Functions once, and they will be used again and again by every application we test.
Another benefit from Component Functions is that they provide a layer of insulation between our application and the automation tool. Without this extra layer, changes or "enhancements" in the automation tool itself can break existing scripts and our table driven tests. With these Component Functions, however, we can insert a "fix"--the code necessary to accommodate these changes that will avert breaking our tests.
Component Function Keywords Define Our Low-Level Vocabulary:
Each of these Component Function modules will define the keywords or "action words" that are valid for the particular component type it handles. For example, the Textbox Component Function module would define and implement the actions or keywords that are valid on a Textbox. These keywords would describe actions like those in Table 6:
Some Component Function Keywords for a Textbox
KEYWORD        | ACTION PERFORMED
InputText      | Enter a new value into the Textbox
VerifyValue    | Verify the current value of the Textbox
VerifyProperty | Verify some other attribute of the Textbox

Table 6

Each action embodied by a keyword may require more information in order to complete its function. The InputText action needs an additional argument that tells the function what text it is supposed to input. The VerifyProperty action needs two additional arguments: (1) the name of the property we want to verify, and (2) the value we expect to find. And, while we need no additional information to Click a Pushbutton, a Click action for an Image map needs to know where we want the click on the image to occur.
These Component Function keywords and their arguments define the low-level vocabulary and individual record formats we will use to develop our test tables. With this vocabulary and the Application Map object references, we can begin to build test tables that our automation framework and our human testers can understand and properly execute.
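
Expanding the Textbox module from the earlier sketch, one way to encode the keywords and their differing argument counts is a dispatch table. All names below are illustrative; tool and log remain hypothetical stand-ins.

def textbox_input_text(tool, log, window, component, text):
    tool.set_text(window, component, text)
    log.passed(f"Entered '{text}' into {component}")

def textbox_verify_value(tool, log, window, component, expected):
    actual = tool.get_text(window, component)
    (log.passed if actual == expected else log.failed)(
        f"{component} value '{actual}', expected '{expected}'")

def textbox_verify_property(tool, log, window, component, prop, expected):
    actual = tool.get_property(window, component, prop)
    (log.passed if actual == expected else log.failed)(
        f"{component}.{prop} = '{actual}', expected '{expected}'")

TEXTBOX_ACTIONS = {
    # keyword:        (handler,                 required arguments)
    "InputText":      (textbox_input_text,      1),  # text to enter
    "VerifyValue":    (textbox_verify_value,    1),  # expected value
    "VerifyProperty": (textbox_verify_property, 2),  # property, expected
}

def textbox_module(tool, log, window, component, action, *args):
    if action not in TEXTBOX_ACTIONS:
        return log.failed(f"Unknown Textbox keyword '{action}'")
    handler, argc = TEXTBOX_ACTIONS[action]
    if len(args) != argc:
        return log.failed(f"'{action}' expects {argc} argument(s)")
    handler(tool, log, window, component, *args)
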

1.3.6 Test Tables:
Low-level Test Tables or Step Tables contain the detailed step-by-step instructions of our tests. Using the object names found in the Application Map and the vocabulary defined by the Component Functions, these tables specify what document, what component, and what action to take on the component. The following three tables are examples of Step Tables comprised of instructions to be processed by the StepDriver module. The StepDriver module is the one that initially parses and routes all low-level instructions that ultimately drive our application.

Step Table: LaunchSite

COMMAND/DOCUMENT | PARAMETER/ACTION | PARAMETER
LaunchBrowser    | Default.htm      |
Browser          | VerifyCaption    | "Login"

Table 7

Step Table: Login

DOCUMENT  | COMPONENT     | ACTION    | PARAMETER
LoginPage | UserIDField   | InputText | "MyUserID"
LoginPage | PasswordField | InputText | "MyPassword"
LoginPage | SubmitButton  | Click     |

Table 8

Step Table: LogOffSite

DOCUMENT | COMPONENT    | ACTION
TOCPage  | LogOffButton | Click

Table 9

Note:
In Table 7 we used a System-Level keyword, LaunchBrowser. This is also called a Driver Command. A Driver Command is a command for the framework itself and not tied to any particular document or component. Also notice that test tables often consist of records with varying numbers of fields.
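
Those varying record lengths are easy to accommodate if the driver classifies each record before unpacking it. A sketch, assuming tab-delimited table files and an invented set of command names:

import csv

DRIVER_COMMANDS = {"LaunchBrowser", "LaunchApplication", "LogMessage"}

def read_step_records(path):
    """Yield ('driver', fields) or ('component', fields) per record."""
    with open(path, newline="") as f:
        for fields in csv.reader(f, delimiter="\t"):
            fields = [x.strip() for x in fields if x.strip()]
            if not fields:
                continue                      # skip blank lines
            kind = "driver" if fields[0] in DRIVER_COMMANDS else "component"
            yield kind, fields

# list(read_step_records("LaunchSite.tbl"))
# -> [('driver',    ['LaunchBrowser', 'Default.htm']),
#     ('component', ['Browser', 'VerifyCaption', '"Login"'])]
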
Intermediate-Level Test Tables or Suite Tables do not normally contain such low-level instructions. Instead, these tables typically combine Step Tables into Suites in order to perform more useful tasks. The same Step Tables may be used in many Suites. In this way we only develop the minimum number of Step Tables necessary. We then mix-and-match them in Suites according to the purpose and design of our tests, for maximum reusability.
The Suite Tables are handled by the SuiteDriver module which passes each Step Table to the StepDriver module for processing.
For example, a Suite using two of the preceding Step Tables might look like Table 10:

Suite Table: StartSite

STEP TABLE REFERENCE | TABLE PURPOSE
LaunchSite           | Launch web app for test
Login                | Login "MyUserID" to app

Table 10

Other Suites might combine other Step Tables like these:

Suite Table: VerifyTOCPage

STEP TABLE REFERENCE | TABLE PURPOSE
VerifyTOCContent     | Verify text in Table of Contents
VerifyTOCLinks       | Verify links in Table of Contents

Table 11

Suite Table: ShutdownSite

STEP TABLE REFERENCE | TABLE PURPOSE
LogOffSite           | Logoff the application
ExitBrowser          | Close the web browser

Table 12

High-Level Test Tables or Cycle Tables combine intermediate-level Suites into Cycles. The Suites can be combined in different ways depending upon the testing Cycle we wish to execute (Regression, Acceptance, Performance…). Each Cycle will likely specify a different type or number of tests. These Cycles are handled by the CycleDriver module which passes each Suite to SuiteDriver for processing.
A simple example of a Cycle Table using the full set of tables seen thus far is shown in Table 13.

Cycle Table: SiteRegression

SUITE TABLE REFERENCE | TABLE PURPOSE
StartSite             | Launch browser and Login for test
VerifyTOCPage         | Verify Table of Contents page
ShutdownSite          | Logoff and Shutdown browser

Table 13

1.3.7 The Core Data Driven Engine:

We have already talked about the primary purpose of each of the three modules that make up the Core Data Driven Engine part of the automation framework. But let us reiterate some of that here.
CycleDriver processes Cycles, which are high-level tables listing Suites of tests to execute. CycleDriver reads each record from the Cycle Table, passing SuiteDriver each Suite Table it finds during this process.
SuiteDriver processes these Suites, which are intermediate-level tables listing Step Tables to execute. SuiteDriver reads each record from the Suite Table, passing StepDriver each Step Table it finds during this process.
StepDriver processes these Step Tables, which are records of low-level instructions developed in the keyword vocabulary of our Component Functions. StepDriver parses these records and performs some initial error detection, correction, and synchronization, making certain that the document and\or the component we plan to manipulate are available and active. StepDriver then routes the complete instruction record to the appropriate Component Function for final execution.
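
The driver chain itself can be quite thin, as the sketch below suggests. The one-table-per-file layout, the read_table helper, and the process callback are assumptions of the sketch, not requirements of the design.

import csv

def read_table(path):
    """Read a tab-delimited test table, skipping blank records."""
    with open(path, newline="") as f:
        return [row for row in csv.reader(f, delimiter="\t") if any(row)]

def cycle_driver(cycle_table, process):
    # Each Cycle record names a Suite Table (plus its stated purpose).
    for suite_name, *_purpose in read_table(cycle_table):
        suite_driver(suite_name, process)

def suite_driver(suite_table, process):
    # Each Suite record names a Step Table, possibly with arguments.
    for step_name, *_args in read_table(suite_table):
        step_driver(step_name, process)

def step_driver(step_table, process):
    # Each Step record is a low-level instruction for the application.
    for record in read_table(step_table):
        process(record)        # e.g. route to the Component Functions

# cycle_driver("SiteRegression", process=print)
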
Let us again show a sample Step Table record in Table 14 and the overall framework pseudo-code that will process it in Figure 6.

Single Step Table Record

DOCUMENT  | COMPONENT     | ACTION      | EXPECTED VALUE
LoginPage | UserIDTextbox | VerifyValue | "MyUserID"

Table 14

Framework Pseudo-Code
Primary Record Processor Module:
   Verify "LoginPage" Exists. (Attempt recovery if not)
   Set focus to "LoginPage".
   Verify "UserIDTextbox" Exists. (Attempt recovery if not)
   Find "Type" of component "UserIDTextbox". (It is a Textbox)
   Call the module that processes ALL Textbox components.
Textbox Component Module:
   Validate the action keyword "VerifyValue".
   Call the Textbox.VerifyValue function.
Textbox.VerifyValue Function:
   Get the text stored in the "UserIDTextbox" Textbox.
   Compare the retrieved text to "MyUserID".
   Record our success or failure.
Figure 6

In Figure 6 you may notice that it is not StepDriver that validates the action keyword VerifyValue; that validation happens in the Textbox Component Function module itself. This allows the Textbox module to be further developed and expanded without affecting StepDriver or any other part of the framework. While there are other schemes that might allow StepDriver to make this validation dynamically at runtime, we chose to do it in the Component Functions themselves.
System-Level Commands or Driver Commands:
In addition to this table-processing role, each of the Driver modules has its own set of System-Level keywords or commands also called Driver Commands. These commands instruct the module to do something other than normal table processing.
For example, the previously shown LaunchSite Step Table issued the LaunchBrowser StepDriver command. This command instructs StepDriver to start a new Browser window with the given URL. Some common Driver Commands to consider:
Common Driver Commands

COMMAND            | PURPOSE
UseApplicationMap  | Set which Application Map(s) to use
LaunchApplication  | Launch a standard application
LaunchBrowser      | Launch a web-based app via URL
CallScript         | Run an automated tool script
WaitForWindow      | Wait for Window or Browser to appear
WaitForWindowGone  | Wait for Window or Browser to disappear
Pause or Sleep     | Pause for a specified amount of time
SetRecoveryProcess | Set an AUT recovery process
SetShutdownProcess | Set an AUT shutdown process
SetRestartProcess  | Set an AUT restart process
SetRebootProcess   | Set a System Reboot process
LogMessage         | Enter a message in the log

Table 15

These are just some examples of possible Driver Commands. There are surely more that have not been listed and some here which you may not need implemented.
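
Implementing such commands can be as simple as a lookup table inside the driver; the handler signatures and the framework object below are invented for the sketch.

import time

def cmd_launch_browser(framework, url):
    framework.tool.launch_browser(url)

def cmd_pause(framework, seconds):
    time.sleep(float(seconds))

def cmd_log_message(framework, message):
    framework.log.message(message)

DRIVER_COMMANDS = {
    "LaunchBrowser": cmd_launch_browser,
    "Pause":         cmd_pause,
    "Sleep":         cmd_pause,       # two keywords, one handler
    "LogMessage":    cmd_log_message,
}

def run_driver_command(framework, name, *args):
    handler = DRIVER_COMMANDS.get(name)
    if handler is None:
        framework.log.failed(f"Unknown driver command '{name}'")
    else:
        handler(framework, *args)
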

All for One, and One for All:
This discussion on the Core Data Driven Engine has identified three separate modules (StepDriver, SuiteDriver, and CycleDriver) that comprise the Core. This does not, however, make this three-module design an automation framework requirement. It may be just as valid to sum up all the functionality of those three modules into one module, or any number of discrete modules more suitable to the framework design developed.

1.3.8 The Support Libraries:
The Support Libraries are the general-purpose routines and utilities that let the overall automation framework do what it needs to do. They are the modules that provide things like:
  • File Handling
  • String Handling
  • Buffer Handling
  • Variable Handling
  • Database Access
  • Logging Utilities
  • System\Environment Handling
  • Application Mapping Functions
  • System Messaging or System API Enhancements and Wrappers

They also provide traditional automation tool scripts access to the features of our automation framework including the Application Map functions and the keyword driven engine itself. Both of these items can vastly improve the reliability and robustness of these scripts until such time that they can be converted over to keyword driven test tables (if and when that is desirable).
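
As one small example, a logging utility like the following is what lets every Component Function simply "record our success or failure"; the log format here is our own choice, not any tool's.

import datetime

class TestLog:
    """Append time-stamped PASS/FAIL/INFO records to a log file."""
    def __init__(self, path):
        self._file = open(path, "a", encoding="utf-8")

    def _write(self, status, message):
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        self._file.write(f"{stamp}  {status:4}  {message}\n")
        self._file.flush()

    def passed(self, message):
        self._write("PASS", message)

    def failed(self, message):
        self._write("FAIL", message)

    def message(self, message):
        self._write("INFO", message)
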

1.4 Automation Framework Workflow
We have seen the primary features of our automation framework and now we want to put it to the test. This section provides a test workflow model that works very well with this framework. Essentially, we start by defining our high level Cycle Tables and provide more and more detail down to our Application Map and low-level Step Tables.
For this workflow example, we are going to show the hypothetical test design that might make up security authentication tests for a web site. The order in which we present the information and construct the tables is an ideal workflow for our automation framework. It will also illustrate how we do not even need a functioning application in order to start producing these tests.

1.4.1 High-Level Tests -- Cycle Table
We will start out by defining our high-level Cycle Table in Table 16. This will list tests that verify user authentication functionality. There is no application yet, but we do know that we will authenticate users with a user ID and password via some type of Login page. With this much information we can start designing some tests.
At this level, our tables are merely keywords or actions words of what we propose to do. The keywords represent the names of Suites we expect to implement when we are ready or when more is known about the application.
We will call this particular high-level test, VerifyAuthenticationFunction (Table 16). It will not show all the tests that should be performed to verify the user authentication features of a web site. Just enough to illustrate our test design process.

Cycle Table: VerifyAuthenticationFunction

KEYWORDS (Suite Tables) | TABLE PURPOSE
VerifyInvalidLogin      | Tests with Invalid UserID and/or Password
VerifyBlankLogin        | Tests with Missing UserID and/or Password
VerifyValidLogin        | Tests with Valid UserID and Password

Table 16

The preceding table illustrates that we plan to run three tests to verify the authentication functionality of our application. We may add or delete tests from this list in the future, but this is how we currently believe this functionality can be adequately verified.

1.4.2 Intermediate-Level Tests -- Suite Tables
When we are ready to provide more detail for our authentication test effort we can begin to flesh out the intermediate-level Suite Tables. Our high-level VerifyAuthenticationFunction Cycle Table defined three Suites that we must now expand upon.
First, we plan to verify invalid login attempts--those with an invalid user ID, invalid password, or both. This first Suite was aptly dubbed, VerifyInvalidLogin (Table 17).
Second, we want to run a simple test without any user ID or password whatsoever. This second Suite was called, VerifyBlankLogin (Table 18).
The last Suite we must implement is the one that insures we can successfully login when we provide a valid user ID and password. This one was named, VerifyValidLogin (Table 19).

Suite Table: VerifyInvalidLogin

KEYWORDS (Step Tables) | USERID   | PASSWORD
LaunchSite             |          |
Login                  | BadUser  | GoodPassword
VerifyLoginError       |          |
Login                  | GoodUser | BadPassword
VerifyLoginError       |          |
Login                  | BadUser  | BadPassword
VerifyLoginError       |          |
ExitLogin              |          |

Table 17

Suite Table: VerifyBlankLogin

KEYWORDS (Step Tables) | USERID | PASSWORD
LaunchSite             |        |
Login                  | ""     | ""
VerifyLoginError       |        |
ExitLogin              |        |

Table 18

Suite Table: VerifyValidLogin

KEYWORDS (Step Tables) | USERID   | PASSWORD
LaunchSite             |          |
Login                  | GoodUser | GoodPassword
VerifyLoginSuccess     |          |
ShutdownSite           |          |

Table 19

Notice that we start to see a large amount of test table reuse here. While not yet defined, we can see that we will be using the LaunchSite, Login, VerifyLoginError, and ExitLogin tables quite frequently. As you can probably imagine, VerifyLoginSuccess and ShutdownSite would also be used extensively throughout all the tests for this web site, but we are focusing just on user authentication here.
You should also notice that our Login keyword is describing a Step Table that will accept up to two arguments--the user ID and password we want to supply. We have not gone into the specific design details of how the automation framework should be implemented, but tests accepting variable data like this provide significant reuse advantages. This same table is used to login with different user ID and password combinations.
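
One common convention for this (the placeholder syntax here is invented; the idea is general) is to put positional markers in the Step Table and have SuiteDriver substitute its arguments before passing each record on:

# The Login step table, with placeholders for its two arguments:
#
#   LoginPage   UserIDField     InputText   ^1
#   LoginPage   PasswordField   InputText   ^2
#   LoginPage   SubmitButton    Click

def substitute_args(record, args):
    """Replace ^1, ^2, ... fields with the caller-supplied arguments."""
    return [args[int(field[1:]) - 1]
            if field.startswith("^") and field[1:].isdigit()
            else field
            for field in record]

# substitute_args(["LoginPage", "UserIDField", "InputText", "^1"],
#                 ["GoodUser", "GoodPassword"])
# -> ['LoginPage', 'UserIDField', 'InputText', 'GoodUser']
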

1.4.3 Application Maps
Up to this point there was little need for an Application Map. Nearly all the high-level and intermediate-level test designs are abstract enough to forego references to specific window components. But as we move towards our final level of test design we indeed must reference specific elements of the application. Thus, for test automation, we need to make our Application Maps.
Naming the Components:
Unless the application has formal design documentation that explicitly provides meaningful names for the windows and components, the test team is going to have to develop these names itself.
The team will discuss and agree upon the names for each window and component. Sometimes we can use the names developers have already given the components in the code, but the team can often come up with more meaningful and identifiable names. Regardless, once defined, the names chosen should not change. However, if the need arises, an item can always have more than one name.
Test Design Application Map:
For low-level test design and manual testing, an Application Map suitable for human consumption should be created. This is often done with screen captures of the various application windows. These screen captures are then clearly annotated with the names given to each component. This may also indicate the Type for each component.
While many components may be exactly what they look like, this is not always the case. Many controls may simply be windowless, painted elements that the underlying environment knows nothing about. They can also be custom components or COM objects that are functionally different from the controls they appear to emulate. So care must be taken when identifying the Type of a component.
Automation Framework Application Map:
For automated testing, an Application Map suitable for the automation framework will be created. We will use the Test Design Application Map as our reference for the names used to identify the components. The identification syntax suitable for the automation tool will then be mapped to each name as discussed in Section 1.3.4.
Our test design so far has indirectly identified three documents or windows that we will interact with: (1) the Login page, (2) an Error dialog or message displayed during our failed Login attempts, and (3) the Home page of our web site displayed following a successful login attempt. We will also need to reference the browser window itself throughout these tests.
We must now create an Application Map that will properly reference these items and their components for the automation framework. For the sake of simplicity, our Login page in Figure 7 will contain only the two textbox components for user ID and password, and two buttons; Submit and Cancel.

Login Page
[Login form image]

Figure 7

The Error dialog in Figure 8 is actually a system alert dialog. It has a Label containing the error message and an OK button to dismiss the dialog.

Error Dialog
[Error dialog image]

Figure 8

The Home Page of the application may contain many different elements. However, for the purpose of these tests and this example, we only need to identify the document itself. We will not interact with any of the components in the Home Page, so we will not define them in this sample Application Map.
Table 20 shows the Application Map we will use to identify the browser window and the three documents in a manner suitable for the automation framework.

Site Application Map
OBJECTNAME     IDENTIFICATION METHOD

Browser:
Browser        WindowID=WebBrowser

LoginPage:
LoginPage      FrameID=content;\;DocumentTitle=Login
UserID         FrameID=content;\;DocumentTitle=Login;\;TextBoxIndex=1
Password       FrameID=content;\;DocumentTitle=Login;\;TextBoxIndex=2
SubmitButton   FrameID=content;\;DocumentTitle=Login;\;ButtonText=Submit
CancelButton   FrameID=content;\;DocumentTitle=Login;\;ButtonText=Cancel

ErrorWin:
ErrorWin       WindowCaption=Error
ErrorMessage   LabelIndex=1
OKButton       ButtonText=OK

HomePage:
HomePage       FrameID=content;\;DocumentTitle=Home

Table 20

The physical format of the Application Map will be determined by how the automation framework will access it. It may be delimited files, database tables, spreadsheets, or any number of other possibilities.
Notice that we have identified each document with its own section in the Application Map of Table 20. We do this to avoid naming conflicts among the many components of our application. For instance, there may be many forms that contain a Submit button. Each form can name this button the SubmitButton because the names only have to be unique within each individual section of the Application Map.
With this we have now uniquely identified our components for the framework. The OBJECTNAME is how we reference each component in our low-level Step Tables. The IDENTIFICATION METHOD is the information the automation tool needs to identify the component within the computer operating system environment.
The syntax for these component identifications is different depending upon the automation tool deployed. The example syntax we used is not for any particular automation tool, but is representative of what is needed for automation tools in general. The Application Map designer will need a thorough understanding of the automation tool’s component identification syntax in order to create effective Application Maps.
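As one hypothetical physical format, the Application Map of Table 20 could be stored as an INI-style file whose sections match the table, with a small helper to resolve an OBJECTNAME into its recognition string (a Python sketch; the file name and layout are assumptions):

    # SiteApplicationMap.ini might contain, for example:
    #   [LoginPage]
    #   SubmitButton = FrameID=content;\;DocumentTitle=Login;\;ButtonText=Submit
    import configparser

    def load_app_map(path):
        parser = configparser.ConfigParser()
        parser.optionxform = str          # keep OBJECTNAMEs case-sensitive
        parser.read(path)
        return parser

    def recognition_string(app_map, window, component):
        # Resolve a section-scoped OBJECTNAME to its identification method.
        return app_map[window][component]

    # app_map = load_app_map("SiteApplicationMap.ini")
    # recognition_string(app_map, "LoginPage", "SubmitButton")

Keeping each window in its own section is what allows the same OBJECTNAME, such as SubmitButton, to appear under many different windows without conflict.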

New Component Types:
As we examine the application and develop an Automation Framework Application Map we may find new component types not handled by existing Component Functions. To deal with this, we need to either implement a new Component Function module for each new component type, or determine how best they can be handled by existing Component Functions. This, of course, is a job for the automation technical staff.

1.4.4 Low-Level Tests -- Step Tables
Now that the design of our application is known and our Application Map has been developed, we can get down to the step-by-step detail of our tests. We can still develop Step Tables without an Application Map. But without the map, we cannot automatically execute the tests.
Looking back at all the Suites that detailed our high-level VerifyAuthenticationFunction Cycle Table, we see they invoke six reusable, low-level Step Tables that we are now ready to implement. These are listed in Table 21.

Step Tables Invoked by Suites
STEP TABLE           TABLE PURPOSE
LaunchSite           Launch the web app for testing
Login                Login with a User ID and Password
VerifyLoginError     Verify an Error Dialog on Invalid Login
ExitLogin            Exit the Login Page via Cancel
VerifyLoginSuccess   Verify Home Page up after Successful Login
ShutdownSite         Logoff and close the browser

Table 21

We develop these Step Tables using the component names defined in our Application Map and the keyword vocabulary defined by our Component Functions. Where test designers find missing yet needed keywords, actions, or commands, the technical staff will need to expand the appropriate Component Function module, Driver Commands, or Support Libraries, or find some other solution to provide the needed functionality if possible.
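A minimal sketch of how such a keyword vocabulary might be dispatched to Component Functions (Python; the two handlers are illustrative stubs rather than any real tool's API):

    def input_text(component_id, value):
        # Stub Component Function for text boxes.
        print("Typing", repr(value), "into", component_id)

    def click(component_id, value=None):
        # Stub Component Function for push buttons.
        print("Clicking", component_id)

    COMPONENT_FUNCTIONS = {"InputText": input_text, "Click": click}

    def execute_step(document, component, action, data=None):
        # Route one Step Table record to the handler for its ACTION keyword.
        handler = COMPONENT_FUNCTIONS.get(action)
        if handler is None:
            raise ValueError("No Component Function for action: " + action)
        handler(document + "." + component, data)

    # execute_step("LoginPage", "SubmitButton", "Click")

Where a Step Table names an action with no matching handler, that is exactly the point at which the technical staff would add a new Component Function.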
Now we can provide the detailed test steps that make up our six Step Tables.

Step Table: LaunchSite
COMMAND/DOCUMENT   ARG/COMPONENT   ACTION          EXPECTED RESULTS
LaunchBrowser      Default.htm
Browser                            VerifyCaption   "Login"

Table 22

Step Table: Login &user &password
DOCUMENT    COMPONENT      ACTION      INPUT DATA
LoginPage   UserID         InputText   &user
LoginPage   Password       InputText   &password
LoginPage   SubmitButton   Click

Table 23

Step Table: VerifyLoginError
DOCUMENT   COMPONENT      ACTION          EXPECTED RESULT
ErrorWin                  VerifyCaption   "Error"
ErrorWin   ErrorMessage   VerifyText      "The User ID or Password was invalid."
ErrorWin   OKButton       Click

Table 24

Step Table: ExitLogin
DOCUMENT    COMPONENT      ACTION
LoginPage   CancelButton   Click

Table 25

Step Table: VerifyLoginSuccess
DOCUMENT   COMPONENT   ACTION          EXPECTED RESULT
HomePage               VerifyTitle     "Home"
Browser                VerifyCaption   "Welcome"

Table 26

Step Table: ShutdownSite
DOCUMENT   COMPONENT   ACTION           INPUT DATA
Browser                SelectMenuItem   File>Exit

Table 27

As you can see, our high-level test VerifyAuthenticationFunction has been detailed down to a very sparse set of highly reusable test steps. Our test designs have taken a high-level test specification, given it some abstract scenarios to perform, and finally provided the step-by-step instructions to execute those scenarios. We did all of this without resorting to difficult scripting languages or complex software development environments.
It may seem simple enough, but that is because we have the underlying automation framework doing so much of the work for us.
Now that we have all of this defined, what does it actually do?
Remember, the automation framework is invoked with one or more high-level Cycle Tables. These tables call the test Suites that, in turn, call the Step Tables to perform the low-level test steps that drive the application.
We developed our own Cycle Table called VerifyAuthenticationFunction. This calls our own collection of Suites, and ultimately the Step Tables we just finished developing. If you look over all of these tables, you can see they form a relatively small set of reusable instructions that make up this test. But they generate a large instruction set at time of execution.
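To pull the three levels together, here is a compact end-to-end sketch (Python, with a subset of the table contents inlined as data purely for illustration; a real framework would read them from its chosen storage format):

    # Cycle -> Suite -> Step call chain for a subset of the example.
    STEP_TABLES = {
        "LaunchSite": [("LaunchBrowser", "Default.htm")],
        "Login":      [("LoginPage", "UserID", "InputText", "&user"),
                       ("LoginPage", "Password", "InputText", "&password"),
                       ("LoginPage", "SubmitButton", "Click")],
        "ExitLogin":  [("LoginPage", "CancelButton", "Click")],
    }
    SUITE_TABLES = {
        "VerifyBlankLogin": [("LaunchSite",), ("Login", "", ""), ("ExitLogin",)],
    }
    CYCLE_TABLES = {
        "VerifyAuthenticationFunction": ["VerifyBlankLogin"],
    }

    def run_step_table(name, user="", password=""):
        subs = {"&user": user, "&password": password}
        for record in STEP_TABLES[name]:
            # Stand-in for real execution: print the substituted record.
            print([subs.get(field, field) for field in record])

    def run_suite(name):
        for keyword, *args in SUITE_TABLES[name]:
            run_step_table(keyword, *args)

    def run_cycle(name):
        for suite in CYCLE_TABLES[name]:
            run_suite(suite)

    run_cycle("VerifyAuthenticationFunction")   # prints the flattened step stream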
The following table illustrates the automated execution flow of the test we have just implemented.

Execution Flow of VerifyAuthenticationFunction Test

Cycle Table
     Suite Table
          Step Table
DOCUMENT/COMMAND | COMPONENT/COMMAND ARG | ACTION | INPUT DATA/EXPECTED RESULTS

VerifyAuthenticationFunction
     VerifyInvalidLogin
          LaunchSite
LaunchBrowser | Default.htm
Browser |  | VerifyCaption | "Login"
          Login  &user=BadUser  &password=GoodPassword
LoginPage | UserID | InputText | &user
LoginPage | Password | InputText | &password
LoginPage | SubmitButton | Click
          VerifyLoginError
ErrorWin |  | VerifyCaption | "Error"
ErrorWin | ErrorMessage | VerifyText | "The User ID or Password was invalid."
ErrorWin | OKButton | Click
          Login  &user=GoodUser  &password=BadPassword
LoginPage | UserID | InputText | &user
LoginPage | Password | InputText | &password
LoginPage | SubmitButton | Click
          VerifyLoginError
ErrorWin |  | VerifyCaption | "Error"
ErrorWin | ErrorMessage | VerifyText | "The User ID or Password was invalid."
ErrorWin | OKButton | Click
          Login  &user=BadUser  &password=BadPassword
LoginPage | UserID | InputText | &user
LoginPage | Password | InputText | &password
LoginPage | SubmitButton | Click
          VerifyLoginError
ErrorWin |  | VerifyCaption | "Error"
ErrorWin | ErrorMessage | VerifyText | "The User ID or Password was invalid."
ErrorWin | OKButton | Click
          ExitLogin
LoginPage | CancelButton | Click
     VerifyBlankLogin
          LaunchSite
LaunchBrowser | Default.htm
Browser |  | VerifyCaption | "Login"
          Login  &user=""  &password=""
LoginPage | UserID | InputText | &user
LoginPage | Password | InputText | &password
LoginPage | SubmitButton | Click
          VerifyLoginError
ErrorWin |  | VerifyCaption | "Error"
ErrorWin | ErrorMessage | VerifyText | "The User ID or Password was invalid."
ErrorWin | OKButton | Click
          ExitLogin
LoginPage | CancelButton | Click
     VerifyValidLogin
          LaunchSite
LaunchBrowser | Default.htm
Browser |  | VerifyCaption | "Login"
          Login  &user=GoodUser  &password=GoodPassword
LoginPage | UserID | InputText | &user
LoginPage | Password | InputText | &password
LoginPage | SubmitButton | Click
          VerifyLoginSuccess
HomePage |  | VerifyTitle | "Home"
Browser |  | VerifyCaption | "Welcome"
          ShutdownSite
Browser |  | SelectMenuItem | File>Exit

Table 28

1.5 Summary
In order to keep up with the pace of product development and delivery it is essential to implement an effective, reusable test automation framework.
We cannot expect the traditional capture/replay framework to fill this role for us. Past experience has shown that capture/replay tools alone will never provide the long-term automation successes that other, more robust test automation strategies can.
A test strategy relying on data driven automation tool scripts is definitely the easiest and quickest to implement if you have, and keep, the technical staff to handle it. But it is the hardest of the data driven approaches to maintain and perpetuate, and it often leads to long-term failure.
The most effective test strategies allow us to develop our structured test designs in a format and vocabulary suitable for both manual and automated testing. This will enable testers to focus on effective test designs unencumbered by the technical details of the framework used to execute them, while technical automation experts implement and maintain a reusable automation framework independent of any application that will be tested by it.
A keyword driven automation framework is probably the hardest and potentially most time-consuming data driven approach to implement initially. However, this investment is mostly a one-shot deal. Once in place, keyword driven automation is arguably the easiest of the data driven frameworks to maintain and perpetuate, providing the greatest potential for long-term success. There are also a few maturing commercial products that may be suitable for your needs.
What we really want is a framework that is keyword driven while also providing enhanced functionality for data driven scripts. When we integrate these two approaches the results can be very impressive!
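As a rough illustration of that integration, here is a minimal sketch (Python; the table and file names are hypothetical) of a driver command that replays a keyword-driven Step Table once for every record in an external data file:

    import csv

    def run_step_table(name, **args):
        # Stand-in for the real keyword driver that executes a Step Table.
        print("Running Step Table", name, "with", args)

    def run_data_driven(step_table, data_file):
        # The data driven half of the hybrid: one Step Table replayed
        # per data record, e.g. rows of a "user,password" CSV file.
        with open(data_file, newline="") as f:
            for record in csv.DictReader(f):
                run_step_table(step_table, **record)

    # run_data_driven("Login", "logins.csv")   # hypothetical file name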
The essential guiding principles we should follow when developing our overall test strategy (or evaluating the test strategy of a tool we wish to consider) are:
  • The test design and the test framework are totally separate entities.
  • The test framework should be application-independent.
  • The test framework must be easy to expand, maintain, and perpetuate.
  • The test strategy/design vocabulary should be framework independent.
  • The test strategy/design should isolate testers from the complexities of the test framework.

Bibliography:

  1. Kit, E. & Prince, S. "A Roadmap for Automating Software Testing." Tutorial presented at the STAR'99 East Conference, Orlando, Florida, May 10, 1999.

  2. Hayes, L. "Establishing an Automated Testing Framework." Tutorial presented at the STAR'99 East Conference, Orlando, Florida, May 11, 1999.

  3. Kit, E. "The Third Generation--Integrated Test Design and Automation." Guest presentation at the STAR'99 East Conference, Orlando, Florida, May 12, 1999.

  4. Mosley, D. & Posey, B. Just Enough Software Test Automation. New Jersey: Prentice Hall PTR, 2002.

  5. Wust, G. "A Model for Successful Software Testing Automation." Paper presented at the STAR'99 East Conference, Orlando, Florida, May 12, 1999.

  6. Dustin, E. Automated Software Testing: Introduction, Management, and Performance. New York: Addison Wesley, 1999.

  7. Fewster, M. & Graham, D. Software Test Automation: Effective Use of Test Execution Tools. New York: Addison Wesley, 1999.

  8. Dustin, E. "Automated Testing Lifecycle Methodology (ATLM)." Paper presented at the STAR EAST 2000 Conference, Orlando, Florida, May 3, 2000.

  9. Kit, E. & Buwalda, H. "Testing and Test Automation: Establishing Effective Architectures." Presentation at the STAR EAST 2000 Conference, Orlando, Florida, May 4, 2000.

  10. Sweeney, M. "Automation Testing Using Visual Basic." Paper presented at the STAR EAST 2000 Conference, Orlando, Florida, May 4, 2000.

  11. Buwalda, H. "Soap Opera Testing." Guest presentation at the STAR EAST 2000 Conference, Orlando, Florida, May 5, 2000.

  12. Pollner, A. "Advanced Techniques in Test Automation." Paper presented at the STAR EAST 2000 Conference, Orlando, Florida, May 5, 2000.

  13. Kaner, C. http://www.kaner.com

  14. Zambelich, K. Totally Data-Driven Automated Testing. 1998. http://www.sqa-test.com/w_paper1.html

  15. SQA Suite Users, Discussions and Archives, 1999-2000. http://www.dundee.net/sqa/

  16. Nagle, C. Data Driven Test Automation: For Rational Robot V2000. 1999-2000. DDE Doc Index.

Sunday, 15 March 2015

UML Use Case Diagram

Overview:

To model a system, the most important aspect is to capture its dynamic behaviour--the behaviour of the system when it is running or operating.
Static behaviour alone is not sufficient to model a system; dynamic behaviour is more important. UML provides five diagrams for modelling the dynamic nature of a system, and the use case diagram is one of them. Because the use case diagram is dynamic in nature, there must be some internal or external factors that create the interactions.
These internal and external agents are known as actors. Use case diagrams therefore consist of actors, use cases, and their relationships. The diagram is used to model the system or subsystem of an application. A single use case diagram captures a particular functionality of a system.
To model the entire system, a number of use case diagrams are used.

Purpose:

The purpose of a use case diagram is to capture the dynamic aspect of a system. But this definition is too generic to describe the purpose,
because the other four diagrams (activity, sequence, collaboration, and statechart) have the same purpose. So we will look at the specific purposes that distinguish it from the other four diagrams.
Use case diagrams are used to gather the requirements of a system, including internal and external influences. These requirements are mostly design requirements. So when a system is analyzed to gather its functionalities, use cases are prepared and actors are identified.
Once this initial task is complete, use case diagrams are modelled to present the outside view.
So in brief, the purposes of use case diagrams can be as follows:
·         Used to gather requirements of a system.
·         Used to get an outside view of a system.
·         Identify external and internal factors influencing the system.
·         Show the interactions among the requirements and actors.

How to Draw a Use Case Diagram?

Use case diagrams are used for the high-level requirement analysis of a system. So when the requirements of a system are analyzed, the functionalities are captured in use cases.
We can say that use cases are nothing but the system functionalities written in an organized manner. The second thing relevant to use cases is the actors. Actors can be defined as something that interacts with the system.
Actors can be human users, internal applications, or external applications. So, in brief, when we are planning to draw a use case diagram we should have the following items identified:
·         Functionalities to be represented as use cases
·         Actors
·         Relationships among the use cases and actors.
Use case diagrams are drawn to capture the functional requirements of a system. After identifying the above items, we have to follow these guidelines to draw an efficient use case diagram:
·         The name of a use case is very important. The name should be chosen in such a way that it identifies the functionality performed.
·         Give suitable names to actors.
·         Show relationships and dependencies clearly in the diagram.
·         Do not try to include all types of relationships, because the main purpose of the diagram is to identify requirements.
·         Use notes whenever required to clarify some important points.
The following is a sample use case diagram representing an order management system. Looking at the diagram, we find three use cases (Order, SpecialOrder and NormalOrder) and one actor, the customer.
The SpecialOrder and NormalOrder use cases are extended from the Order use case, so they have an extends relationship with it. Another important point is to identify the system boundary, which is shown in the picture. The actor Customer lies outside the system, as it is an external user of the system.
[Use case diagram image: Order, SpecialOrder, and NormalOrder use cases with the Customer actor outside the system boundary]



Where to Use Use Case Diagrams?

As we have already discussed, there are five diagrams in UML for modelling the dynamic view of a system. Each model has a specific purpose; these specific purposes are really different angles on a running system.
To understand the dynamics of a system we need to use these different types of diagrams. The use case diagram is one of them, and its specific purpose is to gather system requirements and actors.
Use case diagrams specify the events of a system and their flows, but they never describe how those events are implemented. A use case diagram can be imagined as a black box where only the input, output, and function of the black box are known.
These diagrams are used at a very high level of design. This high-level design is then refined again and again to get a complete and practical picture of the system. A well-structured use case also describes the pre-conditions, post-conditions, and exceptions, and these extra elements are used to make test cases when performing testing.
Although use cases are not good candidates for forward and reverse engineering, they are still used, in a slightly different way, for both.
In forward engineering, use case diagrams are used to make test cases; in reverse engineering, use cases are used to prepare the requirement details from the existing application.
So the following are the places where use case diagrams are used:
·         Requirement analysis and high level design.
·         Model the context of a system.
·         Reverse engineering.

·         Forward engineering.

Saturday, 17 January 2015

Entry, Exit & Suspension Criteria

A.   Entry Criteria:

All functionalities have been successfully tested by the dev team during Unit/Integration testing
All high-severity bugs found in Unit/Integration testing have been fixed/closed
The provided environment is stable for testing (the smoke test has passed)
The dev team has provided release notes
Unit/Integration/Database testing has been done by the dev team


B.   Exit Criteria:

All functionality has been tested according to the requirements
The defect rate has fallen to an acceptable level
All requirements have been covered according to the RTM or other metrics
All test cases have passed
The allotted time has run out
The budget has been exhausted

C.   Suspension/Resumption criteria:

Entry/Exit criteria are not met
The smoke test fails
A dependent system is unavailable during testing
The project plan will be updated accordingly as required
A show stopper is found (no workaround available)

Testing is resumed once a suitable fix has been delivered

Friday, 16 January 2015

Software Testing Definitions

A list of 105 software testing types along with their definitions. A must-read for any QA professional.
1. Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. It is usually performed by the customer.
2. Accessibility Testing: Type of testing which determines the usability of a product to the people having disabilities (deaf, blind, mentally disabled etc). The evaluation process is conducted by persons having disabilities.
3. Active Testing: Type of testing that consists of introducing test data and analyzing the execution results. It is usually conducted by the testing teams.
4. Agile Testing: Software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of customers who will utilize the system. It is usually performed by the QA teams.
5. Age Testing: Type of testing which evaluates a system's ability to perform in the future. The evaluation process is conducted by testing teams.
6. Ad-hoc Testing: Testing performed without planning and documentation - the tester tries to 'break' the system by randomly trying the system's functionality. It is performed by the testing teams.
7. Alpha Testing: Type of testing a software product or system conducted at the developer's site. Usually it is performed by the end user.
8. Assertion Testing: Type of testing that consists of verifying whether the conditions conform to the product requirements. It is performed by the testing teams.
9. API Testing: Testing technique similar to unit testing in that it targets the code level. API Testing differs from unit testing in that it is typically a QA task and not a developer task.
10. All-pairs Testing: Combinatorial testing method that tests all possible discrete combinations of input parameters. It is performed by the testing teams.
11. Automated Testing: Testing technique that uses automation testing tools to control the environment set-up, test execution and results reporting. It is performed by a computer and is used inside the testing teams.
12. Basis Path Testing: A testing mechanism which derives a logical complexity measure of a procedural design and uses it as a guide for defining a basic set of execution paths. It is used by testing teams when defining test cases.
13. Backward Compatibility Testing: Testing method which verifies the behavior of the developed software with older versions of the test environment. It is performed by testing teams.
14. Beta Testing: Final testing before releasing application for commercial purpose. It is typically done by end-users or others.
15. Benchmark Testing: Testing technique that uses representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration. It is performed by testing teams.
16. Big Bang Integration Testing: Testing technique which integrates individual program modules only when everything is ready. It is performed by the testing teams.
17. Binary Portability Testing: Technique that tests an executable application for portability across system platforms and environments, usually for conformation to an ABI specification. It is performed by the testing teams.
18. Boundary Value Testing: Software testing technique in which tests are designed to include representatives of boundary values. It is performed by the QA testing teams.
19. Bottom Up Integration Testing: In bottom up integration testing, the modules at the lowest level are developed first, and the other modules which go towards the 'main' program are integrated and tested one at a time. It is usually performed by the testing teams.
20. Branch Testing: Testing technique in which all branches in the program source code are tested at least once. This is done by the developer.
21. Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail. It is performed by testing teams.
22. Black box Testing: A method of software testing that verifies the functionality of an application without having specific knowledge of the application's code/internal structure. Tests are based on requirements and functionality. It is performed by QA teams.
23. Code-driven Testing: Testing technique that uses testing frameworks (such as xUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. It is performed by the development teams.
24. Compatibility Testing: Testing technique that validates how well a software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.
25. Comparison Testing: Testing technique which compares the product strengths and weaknesses with previous versions or other similar products. Can be performed by tester, developers, product managers or product owners.
26. Component Testing: Testing technique similar to unit testing but with a higher level of integration - testing is done in the context of the application instead of just directly testing a specific method. Can be performed by testing or development teams.
27. Configuration Testing: Testing technique which determines minimal and optimal configuration of hardware and software, and the effect of adding or modifying resources such as memory, disk drives and CPU. Usually it is performed by the performance testing engineers.
28. Condition Coverage Testing: Type of software testing where each condition is executed by making it true and false, in each of the ways at least once. It is typically made by the automation testing teams.
29. Compliance Testing: Type of testing which checks whether the system was developed in accordance with standards, procedures and guidelines. It is usually performed by external companies which offer "Certified OGC Compliant" brand.
30. Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It is usually done by performance engineers.
31. Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. It is usually performed by testing teams.
32. Context Driven Testing: An Agile Testing technique that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization at a specific moment. It is usually performed by Agile testing teams.
33. Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems. It is usually performed by the QA teams.
34. Decision Coverage Testing: Type of software testing where each condition/decision is executed by setting it on true/false. It is typically made by the automation testing teams.
35. Destructive Testing: Type of testing in which the tests are carried out to the specimen's failure, in order to understand a specimen's structural performance or material behavior under different loads. It is usually performed by QA teams.
36. Dependency Testing: Testing type which examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality. It is usually performed by testing teams.
37. Dynamic Testing: Term used in software engineering to describe the testing of the dynamic behavior of code. It is typically performed by testing teams.
38. Domain Testing: White box testing technique which checks that the program accepts only valid input. It is usually done by software development teams and occasionally by automation testing teams.
39. Error-Handling Testing: Software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams.
40. End-to-end Testing: Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.
41. Endurance Testing: Type of testing which checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers.
42. Exploratory Testing: Black box testing technique performed without planning and documentation. It is usually performed by manual testers.
43. Equivalence Partitioning Testing: Software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. it is usually performed by the QA teams.
44. Fault injection Testing: Element of a comprehensive test strategy that enables the tester to concentrate on the manner in which the application under test is able to handle exceptions. It is performed by QA teams.
45. Formal verification Testing: The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. It is usually performed by QA teams.
46. Functional Testing: Type of black box testing that bases its test cases on the specifications of the software component under test. It is performed by testing teams.
47. Fuzz Testing: Software testing technique that provides invalid, unexpected, or random data to the inputs of a program - a special area of mutation testing. Fuzz testing is performed by testing teams.
48. Gorilla Testing: Software testing technique which focuses on heavily testing of one particular module. It is performed by quality assurance teams, usually when running full testing.
49. Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. It can be performed by either development or testing teams.
50. Glass box Testing: Similar to white box testing, based on knowledge of the internal logic of an application’s code. It is performed by development teams.
51. GUI software Testing: The process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done by the testing teams.
52. Globalization Testing: Testing method that checks proper functionality of the product with any of the culture/locale settings using every type of international input possible. It is performed by the testing team.
53. Hybrid Integration Testing: Testing technique which combines top-down and bottom-up integration techniques in order to leverage the benefits of both kinds of testing. It is usually performed by the testing teams.
54. Integration Testing: The phase in software testing in which individual software modules are combined and tested as a group. It is usually conducted by testing teams.
55. Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to one another. It is usually performed by both testing and development teams.
56. Install/uninstall Testing: Quality assurance work that focuses on what customers will need to do to install and set up the new software successfully. It may involve full, partial or upgrades install/uninstall processes and is typically done by the software testing engineer in conjunction with the configuration manager.
57. Internationalization Testing: The process which ensures that product’s functionality is not broken and all the messages are properly externalized when used in different languages and locale. It is usually performed by the testing teams.
58. Inter-Systems Testing: Testing technique that focuses on testing the application to ensure that interconnection between application functions correctly. It is usually done by the testing teams.
59. Keyword-driven Testing: Also known as table-driven testing or action-word testing, is a software testing methodology for automated testing that separates the test creation process into two distinct stages: a Planning Stage and an Implementation Stage. It can be used by either manual or automation testing teams.
60. Load Testing: Testing technique that puts demand on a system or device and measures its response. It is usually conducted by the performance engineers.
61. Localization Testing: Part of software testing process focused on adapting a globalized application to a particular culture/locale. It is normally done by the testing teams.
62. Loop Testing: A white box testing technique that exercises program loops. It is performed by the development teams.
63. Manual Scripted Testing: Testing method in which the test cases are designed and reviewed by the team before executing it. It is done by manual testing teams.
64. Manual-Support Testing: Testing technique that involves testing of all the functions performed by people while preparing data and using these data in the automated system. It is conducted by testing teams.
65. Model-Based Testing: The application of Model based design for designing and executing the necessary artifacts to perform software testing. It is usually performed by testing teams.
66. Mutation Testing: Method of software testing which involves modifying programs' source code or byte code in small ways in order to test sections of the code that are seldom or never accessed during normal test execution. It is normally conducted by testers.
67. Modularity-driven Testing: Software testing technique which requires the creation of small, independent scripts that represent modules, sections, and functions of the application under test. It is usually performed by the testing team.
68. Non-functional Testing: Testing technique which focuses on testing of a software application for its non-functional requirements. Can be conducted by the performance engineers or by manual testing teams.
69. Negative Testing: Also known as "test to fail" - testing method where the tests' aim is showing that a component or system does not work. It is performed by manual or automation testers.
70. Operational Testing: Testing technique conducted to evaluate a system or component in its operational environment. Usually it is performed by testing teams.
71. Orthogonal array Testing: Systematic, statistical way of testing which can be applied in user interface testing, system testing, regression testing, configuration testing and performance testing. It is performed by the testing team.
72. Pair Testing: Software development technique in which two team members work together at one keyboard to test the software application. One does the testing and the other analyzes or reviews the testing. This can be done between one Tester and Developer or Business Analyst or between two testers with both participants taking turns at driving the keyboard.
73. Passive Testing: Testing technique that consists of monitoring the results of a running system without introducing any special test data. It is performed by the testing team.
74. Parallel Testing: Testing technique which has the purpose to ensure that a new application which has replaced its older version has been installed and is running correctly. It is conducted by the testing team.
75. Path Testing: Typical white box testing which has the goal to satisfy coverage criteria for each logical path through the program. It is usually performed by the development team.
76. Penetration Testing: Testing method which evaluates the security of a computer system or network by simulating an attack from a malicious source. Usually they are conducted by specialized penetration testing companies.
77. Performance Testing: Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements. It is usually conducted by the performance engineer.
78. Qualification Testing: Testing against the specifications of the previous release, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.
79. Ramp Testing: Type of testing that consists of raising an input signal continuously until the system breaks down. It may be conducted by the testing team or the performance engineer.
80. Regression Testing: Type of software testing that seeks to uncover software errors after changes to the program (e.g. bug fixes or new functionality) have been made, by retesting the program. It is performed by the testing teams.
81. Recovery Testing: Testing technique which evaluates how well a system recovers from crashes, hardware failures, or other catastrophic problems. It is performed by the testing teams.
82. Requirements Testing: Testing technique which validates that the requirements are correct, complete, unambiguous, and logically consistent and allows designing a necessary and sufficient set of test cases from those requirements. It is performed by QA teams.
83. Security Testing: A process to determine that an information system protects data and maintains functionality as intended. It can be performed by testing teams or by specialized security-testing companies.
84. Sanity Testing: Testing technique which determines if a new software version is performing well enough to accept it for a major testing effort. It is performed by the testing teams.
85. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story to help a person think through a complex problem or system for a testing environment. It is performed by the testing teams.
86. Scalability Testing: Part of the battery of non-functional tests which tests a software application for measuring its capability to scale up - be it the user load supported, the number of transactions, the data volume etc. It is conducted by the performance engineer.
87. Statement Testing: White box testing which satisfies the criterion that each statement in a program is executed at least once during program testing. It is usually performed by the development team.
88. Static Testing: A form of software testing where the software isn't actually executed; it checks mainly the sanity of the code, algorithm, or document. It is used by the developer who wrote the code.
89. Stability Testing: Testing technique which attempts to determine if an application will crash. It is usually conducted by the performance engineer.
90. Smoke Testing: Testing technique which examines all the basic components of a software system to ensure that they work properly. Typically, smoke testing is conducted by the testing team, immediately after a software build is made.
91. Storage Testing: Testing type that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. It is usually performed by the testing team.
92. Stress Testing: Testing technique which evaluates a system or component at or beyond the limits of its specified requirements. It is usually conducted by the performance engineer.
93. Structural Testing: White box testing technique which takes into account the internal structure of a system or component and ensures that each program statement performs its intended function. It is usually performed by the software developers.
94. System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It is conducted by the testing teams in both development and target environment.
95. System integration Testing: Testing process that exercises a software system's coexistence with others. It is usually performed by the testing teams.
96. Top Down Integration Testing: Testing technique that involves starting at the top of a system hierarchy, at the user interface, and using stubs to test from the top down until the entire system has been implemented. It is conducted by the testing teams.
97. Thread Testing: A variation of top-down testing technique where the progressive integration of components follows the implementation of subsets of the requirements. It is usually performed by the testing teams.
98. Upgrade Testing: Testing technique that verifies if assets created with older versions can be used properly and that user's learning is not challenged. It is performed by the testing teams.
99. Unit Testing: Software verification and validation method in which a programmer tests if individual units of source code are fit for use. It is usually conducted by the development team.
100. User Interface Testing: Type of testing which is performed to check how user-friendly the application is. It is performed by testing teams.


101. Usability Testing: Testing technique which verifies the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. It is usually performed by end users.
102. Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. It is usually conducted by the performance engineer.
103. Vulnerability Testing: Type of testing which regards application security and has the purpose to prevent problems which may affect the application integrity and stability. It can be performed by the internal testing teams or outsourced to specialized companies.
104. White box Testing: Testing technique based on knowledge of the internal logic of an application’s code and includes tests like coverage of code statements, branches, paths, conditions. It is performed by software developers.

105. Workflow Testing: Scripted end-to-end testing technique which duplicates specific workflows which are expected to be utilized by the end-user. It is usually conducted by testing teams.

Thursday, 24 April 2014

Why Do We Need Framework for Test Automation?

For the same reason, we carry a map when commuting and we draw a blueprint before building a house.
Not convinced?
Let us try to find the answer to why we need a test automation framework through thorough deliberation.
Automation is tricky--not just technically but also economically. It is best suited for areas where a substantial number of tests have to be repeated across multiple test cycles. And always, automation testing happens in addition to manual testing, or complements it--it cannot, does not, and should not replace manual testing altogether.
Now, we testers know that the success of any testing endeavor--manual or automated--depends mainly on thorough planning. Typically, we are to spend a third of the entire testing effort on planning--it's the standard. This planning usually involves two main areas: managerial information (dates, schedules, milestones, people, etc.) and technical, testing-related information (deliverables, number of cycles, tools, data, etc.). For an automation project, a framework is everything--a technical implementation guideline as well as the managerial part of it.
The three technical entities (resources) in an automation project are roughly:
  1. Code (scripts)
  2. Data
  3. Objects and their definition on the AUT
The economical ones (expenditure-wise, in addition to the manual testing team's presence) are:
  1. Tool licensing. This might also include licenses for additional add-ins required for the specific AUT on hand
  2. Specialized man power
  3. Time- effort in terms of man hours
  4. Infrastructure (could be anything, starting from the physical space you provide for an automation tester to work from.)
  5. Test Management tool if using any
In an industry where it is tough to get the buy-in for testing itself (although this has come a long way from where it used to be), it is going to be exceedingly difficult to prove the importance of automation, unless a sure result can be promised. In other words, we better figure out all the details and assure success or not venture into automation at all. This is where a solid framework can be the foundation that we need to set foot on solid ground and make sure that we don’t fall.
We are not going to go into the economic aspects in depth, because we get it. There is an X amount of money put in and we need to make sure that it stays a successful investment.

Why Test Automation Framework?

Let’s try to understand the technical part better:
We all know that a good automation script should launch the AUT, supply the data, perform the operation, iterate/repeat itself as directed, validate, close the AUT, report results, and finally exit. Yes, it is a tall order!
Once we create a script that does all this, it reduces the execution and reporting time (especially if linked to a test management tool) to a few milliseconds as opposed to minutes or even hours. There are no oversights or typographic errors, and overall efficiency is improved.
However, imagine how long it will take to write a perfect script that does it all. How long would it take to understand the test objective, come up with a solution, implement it in code, customize, debug and finalize?
As you can see, the real time-consuming factor is test script creation. The more efficient and time-effective the automation scripts, the better the chances of success.
An example here:
1) Gmail.com page is the AUT.
2) Features to test:
  • Compose email
  • Create contacts
  • Receiving an email
3) Automation testers- assuming each one is working on one feature
Yes, this is a little bit of an oversimplified example. But personally, I feel that there is no concept on earth that cannot be broken down into easy-to-understand pieces. After all, the earth itself is made of tiny atoms.
Now, roughly, the scripts should have code that performs the operations as below:
  1. Compose email : Gmail.com launch->Login authorization->Compose email->Write the contents, add attachments(email parameters)->Send email->Logout
  2. Create contacts: Gmail.com launch->Login authorization->Select contacts->Create contact->Save->logout
  3. Receive email: Gmail.com launch->login authorization->check email->read email-> logout
All the above scripts have to be tested for multiple users with multiple parameters in each operation.
------------
It is clear from the above representation that the following components are recurring/ repeating:
  1. Gmail.com launch
  2. Login authorization
  3. logout
We can significantly reduce effort and time if we create these components only once and make them available for all the other tests to use as needed--Reusability: create once, debug once, finalize once, and reuse many times.
Reusability also makes sure that if a change has to be made, it can be done in a centralized way.
For example, if the gmail.com home page changes (the first component in every script), we don't have to modify each script (3 times in our case). We can simply change it once, in the reusable component, and all the other scripts that use it will automatically pick up the centrally changed code, as sketched below. This reduces rework and makes sure that changes are consistent overall.
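A minimal sketch of that reusability (Python; every function name and step here is illustrative only, not a real Gmail interface):

    def launch_gmail():
        # Reusable component 1: created and debugged once, used by every script.
        print("Opening gmail.com")

    def login(user, password):
        # Reusable component 2.
        print("Logging in as", user)

    def logout():
        # Reusable component 3.
        print("Logging out")

    def test_compose_email(user, password):
        launch_gmail()
        login(user, password)
        print("Composing and sending email")    # feature-specific step
        logout()

    def test_create_contact(user, password):
        launch_gmail()
        login(user, password)
        print("Creating and saving a contact")  # feature-specific step
        logout()

If the home page changes, only launch_gmail is edited; every test picks up the change automatically.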
Also, automation scripts recognize the elements of the AUT based on the definitions/properties stored for them. This is yet another major asset for an automation script. If many scripts access the same object, there is no need to duplicate its definition every time. A common repository holding the object definitions can replace several small local repositories--again, a huge plus for efficiency.
To summarize: the resources Code and Objects can be optimized through maximum reusability. When components are to be made reusable:
a) A correct order has to be followed in creating them (if we write the code for login last, all the other scripts that need that step can't be validated in full--the same rationale applies to objects as well)
b) An identifiable (readable, within-context) name has to be assigned to each
c) They must be stored in a location that is accessible and/or available to all the other scripts.
All this information--what, where, how, and when to implement these factors--is part of a framework.
Finally, the data. You have the option to hard-code your data into the script, which will force you to create a new script every time new data has to be supplied. The solution: separate the data from the code. Reuse the code to the maximum and provide the data separately, as sketched below. Again, where will you find the details on how to implement this decision? The framework.
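A sketch of that separation (Python; users.csv is a hypothetical file with user and password columns):

    import csv

    def login(user, password):
        # Reusable code, written and debugged once.
        print("Logging in as", user)

    with open("users.csv", newline="") as f:
        for row in csv.DictReader(f):      # new data requires no new script
            login(row["user"], row["password"])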
The framework plays a major role in determining how to organize your automation project in order to maximize results. Apart from the areas discussed above, the framework can also guide automation testers on how to execute scripts, where to store results, how to present them, etc.
In short, test automation framework is your overall game plan and it can take you where you want to go. So, don’t find yourself without it.