A testcase is simply a test with formal steps and instructions; testcases are valuable because they are repeatable, reproducible under the same environments, and easy to improve upon with feedback. A testcase is the difference between saying that something seems to be working okay and proving that a set of specific tasks is known to be working correctly.
Some tests are more straightforward than others. For example, say you need to verify that all the links in your web site work. There are several different approaches to checking this:
you can read your HTML code to see that all the link code is correct
you can run an HTML DTD validator to see that all of your HTML syntax is correct, which would imply that your links are correct
you can use your browser (or even multiple browsers) to check every link manually
you can use a link-checking program to check every link automatically
you can use a site maintenance program that graphically displays the relationships between pages on your site, including good and bad links
you could use all of these approaches to test for any possible failures or inconsistencies in the tests themselves
Verifying that your site's links are not broken is relatively unambiguous. You simply need to decide which one or more of these tests best suits your site structure, your test resources, and your need for granularity of results. You run the test, and you get your results showing any broken links.
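To make the automated approach concrete, here is a minimal sketch of a link checker written in Python. It assumes the third-party requests and beautifulsoup4 packages are available, and it assumes that "broken" simply means the target URL returns an HTTP error or cannot be reached; the function and variable names are illustrative, not part of any particular tool.

# A minimal link-checker sketch: fetch a page, extract every anchor,
# and report targets that return an HTTP error or cannot be reached.
# Assumes the requests and beautifulsoup4 packages are installed.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def find_broken_links(page_url):
    """Return (target, status) pairs for links that appear to be broken."""
    page = requests.get(page_url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        target = urljoin(page_url, anchor["href"])
        try:
            response = requests.head(target, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                broken.append((target, response.status_code))
        except requests.RequestException as error:
            broken.append((target, str(error)))
    return broken

if __name__ == "__main__":
    for target, status in find_broken_links("https://www.example.com/"):
        print(f"BROKEN: {target} ({status})")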
Notice that you now have a list of broken links, not of incorrect links. If a link is valid syntactically, but points at the incorrect page, your link test won't catch the problem. My general point here is that you must understand what you are testing. A testcase is a series of explicit actions and examinations that identifies the "what".
A testcase for checking links might specify that each link is tested for functionality, appropriateness, usability, style, consistency, etc. For example, a testcase for checking links on a typical page of a site might include these steps:

Link Test: for each link on the page, verify that

the link works (i.e., it is not broken)
the link points at the correct page
the link text effectively and unambiguously describes the target page
the link follows the approved style guide for this web site (for example, closing punctuation is or is not included in the link text, as per the style guide specification)
every instance of a link to the same target page is coded the same way
As you can see, this is a detailed testing of many aspects of the link, with the result that on completion of the test, you can say definitively what you know works. However, this is a simple example: testcases can run to hundreds of instructions, depending on the types of functionality being tested and the need for iterations of steps.
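Some of these checks require human judgment (appropriateness, usability, style), but others can be automated. As an illustrative sketch only, the last check -- every instance of a link to the same target page is coded the same way -- might be verified with a short script; the function name and approach here are assumptions, not a prescribed method.

# Illustrative sketch of one automatable check from the link testcase:
# every link to the same target page should be coded the same way.
# Assumes the beautifulsoup4 package is installed.
from collections import defaultdict

from bs4 import BeautifulSoup

def inconsistent_links(html):
    """Return a mapping of href -> set of differing anchor markups."""
    soup = BeautifulSoup(html, "html.parser")
    variants_by_target = defaultdict(set)
    for anchor in soup.find_all("a", href=True):
        variants_by_target[anchor["href"]].add(str(anchor))
    return {href: variants
            for href, variants in variants_by_target.items()
            if len(variants) > 1}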
Defining Test and Testcase Parameters

A testcase should set up any special environment requirements the test may have, such as clearing the browser cache, enabling JavaScript support, or turning on warnings for the dropping of cookies.

In addition to specific configuration instructions, testcases should also record browser types and versions, operating system, machine platform, connection speed -- in short, the testcase should record any parameter that would affect the reproducibility of the results or could aid in troubleshooting any defects found by testing. To state this a little differently: specify which platforms this testcase should be run against, record which platforms it is actually run against, and, when a defect is found, report the exact environment in which it was found. The required fields of a test case are as follows:
Test Case ID: A unique number given to the test case so that it can be identified.
Test Description: A description of what the test case is going to test.
Revision History: Each test case needs a revision history so that you know when and by whom it was created or modified.
Function to be Tested: The name of the function to be tested.
Environment: The environment in which the test is run.
Test Setup: Anything that needs to be set up outside of the application, for example printers, the network, and so on.
Test Execution: A detailed description of every step of the execution.
Expected Results: A description of what you expect the function to do.
Actual Results: Pass or fail. If the test passes, record what actually happened when you ran it; if it fails, record a description of what you observed.
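One way to keep these fields consistent from testcase to testcase is to record them in a structured form. The sketch below shows one possible representation in Python; the class name and field names simply mirror the list above and are not a required format.

# One possible structured representation of the testcase fields
# described above; the class and field names are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    case_id: str                 # Test Case ID
    description: str             # Test Description
    revision_history: List[str]  # when and by whom created or modified
    function_under_test: str     # Function to be Tested
    environment: str             # OS, browser, connection speed, etc.
    setup: str                   # anything set up outside the application
    execution_steps: List[str]   # detailed steps, in order
    expected_result: str         # what you expect the function to do
    actual_result: str = ""      # "pass", or a description of the failure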
Sample Testcase

Here is a simple test case for applying bold formatting to text.
Test Case ID: B 001
Test Description: Verify B - bold formatting to the text
Revision History: 3/23/00 1.0 - Valerie - Created
Function to be Tested: B - bold formatting to the text
Environment: Win 98
Test Setup: N/A
Test Execution:
Open program
Open new document
Type any text
Select the text to make bold
Click Bold
Expected Result: Applies bold formatting to the text
Actual Result: pass
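For illustration only, here is the same sample testcase recorded with the hypothetical TestCase structure sketched in the previous section; the field values come straight from the testcase above, and the sketch assumes that class is defined or imported.

# The B 001 sample testcase recorded with the hypothetical TestCase
# dataclass sketched earlier.
bold_case = TestCase(
    case_id="B 001",
    description="Verify B - bold formatting to the text",
    revision_history=["3/23/00 1.0 - Valerie - Created"],
    function_under_test="B - bold formatting to the text",
    environment="Win 98",
    setup="N/A",
    execution_steps=[
        "Open program",
        "Open new document",
        "Type any text",
        "Select the text to make bold",
        "Click Bold",
    ],
    expected_result="Applies bold formatting to the text",
    actual_result="pass",
)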