Test Driven Development in Oracle PL/SQL

PL/SQL is Oracle's procedural extension to the SQL language. It has been said that there is not much of a testing culture in the PL/SQL world. To combat this, renowned book author Steven Feuerstein created the utPLSQL framework in 2000 to implement automated unit testing of PL/SQL code. It was originally developed against Oracle 8i, and Feuerstein released it as an open source project.

The current version of utPLSQL is 3.0. Strangely enough, release 3.0 was a complete rewrite rather than an evolution of version 2.0; it is effectively a whole new framework. A utility is provided to migrate from v2 to v3.

utPLSQL can be installed using a script provided with the project. By default, the script produces a log file in the same directory as the installation script itself. All project objects are created in a schema named UT3, including the tables used by DBMS_PROFILER, the Oracle-supplied package that collects execution statistics for PL/SQL code. The installation also adds public synonyms so that anyone can execute code in the framework.
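
A typical headless installation looks something like the following (the connection string and password are placeholders, and the exact script name may vary by release, so check the project documentation):

    sqlplus sys/sys_password@my_db as sysdba @install_headless.sql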

To install utPLSQL, the installing user must have a host of database privileges. This includes standard privileges such as CREATE SESSION and CREATE PROCEDURE, along with ALTER SESSION. The user also needs EXECUTE privilege on the Oracle-supplied DBMS_LOCK and DBMS_CRYPTO packages.
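
A sketch of the grants involved, assuming the installing user is created by hand (the user name and password are placeholders; the full privilege list is in the project documentation):

    CREATE USER ut3 IDENTIFIED BY ut3_password;
    GRANT CREATE SESSION, ALTER SESSION, CREATE PROCEDURE TO ut3;
    GRANT EXECUTE ON dbms_lock   TO ut3;
    GRANT EXECUTE ON dbms_crypto TO ut3;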

The testing in utPLSQL is done via a test package that you create. The tests are parameterless procedures placed in this package. Under the old v2 conventions, the name of the package had to begin with "ut_", the package had to define ut_setup() and ut_teardown() procedures, and the tests were executed by calling utplsql.test(). In v3, tests are identified by annotations instead.

Tests are configured using annotations in the PL/SQL code. These annotations exist only in the package specification. One package-level annotation, %suite, must be the first line in the package spec and must be followed by a blank line. The other annotations identify procedures in the package as being tests. Tests can be grouped into suites, and you can set the namespace for the tests using the %suitepath annotation.
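
A minimal sketch of an annotated test package (the package under test, calc_pkg, and all other names here are hypothetical):

    CREATE OR REPLACE PACKAGE test_calc AS
      --%suite(Calculator tests)

      --%suitepath(myapp.math)

      --%test(Adds two positive numbers)
      PROCEDURE adds_two_positives;
    END test_calc;
    /

    CREATE OR REPLACE PACKAGE BODY test_calc AS
      PROCEDURE adds_two_positives IS
      BEGIN
        -- Expect the production function to return the correct sum.
        ut.expect(calc_pkg.add(1, 2)).to_equal(3);
      END adds_two_positives;
    END test_calc;
    /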

utPLSQL has the ability to compare compound data types in the tests. This includes meaty objects such as ref cursors, object types, nested tables, and VARRAYs.
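
For example, two ref cursors can be compared directly. A sketch of such a test procedure (the emp table and its values are hypothetical):

    PROCEDURE returns_expected_rows IS
      l_actual   SYS_REFCURSOR;
      l_expected SYS_REFCURSOR;
    BEGIN
      OPEN l_expected FOR SELECT 'SMITH' AS ename FROM dual;
      OPEN l_actual   FOR SELECT ename FROM emp WHERE empno = 7369;
      ut.expect(l_actual).to_equal(l_expected);
    END returns_expected_rows;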

Tests can be placed in the same schema as the code being tested, or they can be put in a separate helper package. The overall structure of the tests is:

1. Setup
2. Execute
3. Verify
4. Teardown

The teardown can be implemented by rolling back any changes made in the transaction. Alternatively, you can back up the affected tables first and restore them once the tests are complete. Either way, tests should not modify the environment in which they are run.
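
A sketch of a test that follows this four-step structure with a rollback-based teardown (order_pkg and the orders table are hypothetical application objects):

    PROCEDURE creates_order IS
      l_count PLS_INTEGER;
    BEGIN
      -- Setup: mark a rollback point so teardown can undo our changes.
      SAVEPOINT before_test;
      -- Execute the code under test.
      order_pkg.create_order(p_order_id => 42);
      -- Verify the outcome.
      SELECT COUNT(*) INTO l_count FROM orders WHERE order_id = 42;
      ut.expect(l_count).to_equal(1);
      -- Teardown: leave the environment as we found it.
      ROLLBACK TO before_test;
    END creates_order;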

There are two ways to run the tests. You can call ut.run(), which executes all tests in the schema; if you pass a specific procedure name as an argument, run() will execute just that one test. You will need to SET SERVEROUTPUT ON to see the test results. A second method to run the tests is to call ut_runner.run(). It only runs procedures, not functions, and it does not output results to the screen; it is meant to be called as an API. The outcome of tests can be found in UTL_OUTCOME.
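
For example, in SQL*Plus (reusing the hypothetical test package from earlier):

    SET SERVEROUTPUT ON
    -- Run every test in the current schema.
    EXEC ut.run()
    -- Run one test package, or a single test procedure within it.
    EXEC ut.run('test_calc')
    EXEC ut.run('test_calc.adds_two_positives')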

Like PL/SQL itself, everything in utPLSQL is case insensitive. Tests are executed in alphabetical order of procedure name, and only public members are executed. You have full access to all of PL/SQL in your test code. utPLSQL is written to catch most exceptions thrown by the code being tested, and it records a clean stack trace in which the utPLSQL plumbing is hidden.

utPLSQL pays specific attention to code coverage. Coverage can be calculated on the basis of the whole schema, or on a specific set of files in your project. You specify a reporter to format the code coverage report as a parameter to the ut.run() call. There are built-in reporters that come with utPLSQL, or you can specify a third-party format such as Sonar or TeamCity.
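
For instance, coverage can be rendered with the built-in HTML reporter or in a Sonar-compatible format (reporter names as of utPLSQL v3; check the documentation for your release):

    EXEC ut.run(ut_coverage_html_reporter())
    EXEC ut.run(ut_coverage_sonar_reporter())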

Test Driven Development

Test Driven Development (TDD) is, perhaps surprisingly, not just a testing technique; it is more a style of programming than of testing. It is a software development process dating back to 2003. TDD is characterized by short development cycles, a cadence that fits well with Agile development goals.

Here is the TDD cycle:

1. A requirement becomes a test case, and a test for it is added to the test suite.
2. All tests are run. The newly added test should fail, as no code for it has been written yet.
3. The developer writes the minimal amount of code needed to pass the new test.
4. All tests are run again; this time they should all pass.
5. The code base is refactored.
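
As a tiny sketch of one such iteration in PL/SQL (all names are hypothetical):

    -- Red: write the test first; it fails because str_utils.initials
    -- does not exist yet.
    PROCEDURE builds_initials IS
    BEGIN
      ut.expect(str_utils.initials('Ada Lovelace')).to_equal('AL');
    END builds_initials;

    -- Green: the minimal implementation that makes this one test pass.
    -- Deliberately naive; a second test case would force the general logic.
    FUNCTION initials(p_full_name IN VARCHAR2) RETURN VARCHAR2 IS
    BEGIN
      RETURN 'AL';
    END initials;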

The nature of TDD leads to simpler designs, because you are always adding only the bare minimum of code needed to pass a narrow test. Units and modules developed by TDD tend to be small. The code that implements the test cases is treated as a first-class citizen in TDD: test code is just as important as production code.

TDD tests should be independent; they should not depend on prior test cases. TDD will result in more tests being written, but the benefit is that less time will be spent debugging code. Defect rates should decrease, and code coverage will be high (at least 80% is a common target).

Since TDD tests must be independent, they will need to make use of "test doubles", which substitute for the behavior of the rest of the system. There are four types of test doubles, differing in how deeply they model the parts of the system they are abstracting away (a sketch of a stub follows the list):

1. A dummy – only default values are returned
2. A stub – simple logic is employed
3. A mock – produces values for a specific test
4. A simulator – implements complex logic
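
In PL/SQL, a stub can be as simple as a stand-in package body compiled over the real one in the test environment. A sketch, with all names hypothetical:

    -- Production code calls fx_rates.get_rate(); the tests install this
    -- stub body so that the returned rates are predictable.
    CREATE OR REPLACE PACKAGE BODY fx_rates AS
      FUNCTION get_rate(p_currency IN VARCHAR2) RETURN NUMBER IS
      BEGIN
        -- Simple logic only: a fixed rate for the currency under test.
        RETURN CASE p_currency WHEN 'EUR' THEN 1.10 ELSE 1 END;
      END get_rate;
    END fx_rates;
    /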

TDD forces you to think through requirements and design before any code is written. It adds confidence to refactoring due to the added test coverage. The goal of TDD is to write clean code that works. While TDD may not produce a full regression test set, it will force you to have significant test coverage.

Agile Development

Agile development has become a common practice in the software development world. There are a lot of moving parts, ideas, and optional pieces involved. However, there are just a few core principles. These were documented in the Agile Manifesto of 2001:

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan

Agile was designed to respond to continually evolving requirements. It preaches an adaptive planning strategy so that teams can rapidly respond to changes; as such, project plans cannot be overly rigid. Agile is characterized by early delivery, with software developed incrementally.

There are twelve main principles to Agile. Some highlights are continuous and frequent delivery, progress measured by working software, good design, and simplicity. Work is broken down into iterations called sprints, which normally last one to four weeks. Programming and testing are conducted in the same sprint.

A key activity in Agile is the daily stand-up meeting, also known as the daily scrum. Team members announce what they did the previous day, what they plan to do today, and any roadblocks. This meeting is not meant for problem resolution; you only give the team a brief overview of your progress.

There are assorted techniques used in Agile: continuous integration, automated unit testing, pair programming, design patterns, test driven development, and code refactoring. Agile is more of an adaptive than predictive form of development.

Agile emphasizes the product over the project. Some liken Agile to working at a lean startup. A goal is to produce documentation that is just barely sufficient. Stories, which are akin to use cases, are used to drive modeling. Decisions are deferred until as close to implementation time as possible.