One of the many straplines of ScriptRunner is "Making easy things easy and hard things possible". Another one is, "Letting Jira admins do hacky stuff".

You can do anything in a script that you could do in a plugin, usually without the overhead of understanding the host of software development tools and methodologies that a typical plugin developer would have to worry about.

However, if you’re using ScriptRunner extensively and writing lots of custom code (or small amounts of custom code that you rely on heavily), you may find yourself under many of the same expectations as a software developer, even if that’s not your formal job title or background. If that’s happening, fear not: you’ve simply crossed over from "ScriptRunner the Swiss Army Knife" into "ScriptRunner the development platform" territory.

You may find yourself writing detailed testing plans to make sure that your custom scripts still work after you upgrade your linked instances of Jira & Confluence. Those plans may include some repetitive tasks that you have to complete throughout the upgrade cycle. As another example, you may find that you want a quick way to verify that your script works as you are developing it, without having to click through several steps in the UI every time you make a change. Both those needs (and several others) can be met by automated tests.

We embrace a "felt need" philosophy when it comes to highly evangelized methodologies like automated testing. Not every ScriptRunner user needs to write tests, but the minute you feel the need for them, you should start writing them. In this case, "feeling the need" might mean you find yourself doing repetitive work to test your scripts (or delegating that work to others). It might mean fielding concerns from others over your custom scripts' reliability. Anything that makes you say, "I wish I had a quicker way to test this…​" is a good reason to start writing automated tests for your scripts.

An Aside for the Initiated Developer

If you’re a professional software developer who’s already on board the automated testing train, you may have a host of questions. "Are these unit tests, integration tests, functional tests, or what? Can I use my favorite testing framework? What is ScriptRunner’s testing philosophy and can you point me to the nearest Hacker News thread / Subreddit / tweet chain so that I can participate in the inevitable partisan flame war?"

Easy, killer. ;) These are integration tests. They test your code within the context of a running Atlassian host application. The testing framework we use is Spock, though JUnit is included as well if you want to write plain old JUnit tests.

While you theoretically could write a unit test, it’s almost certainly more trouble than benefit for the average scripter. A unit test would focus on testing only your code, which typically means isolating it by mocking any Atlassian managers and services. In the context of scripting, these kinds of tests have little value. First, mocking everything is prohibitively expensive. Second, scripts are frequently so simple that there is little logic worth testing in isolation. Your goal in writing a test is to make sure the script still works after you’ve made a change to the script itself or, more likely, to the environment where it runs, such as by installing another plugin, upgrading the host application, or changing some connected piece of configuration. This is one of the notable bits of nuance that often gets lost when people parrot "integration tests are a scam": sometimes, they’re not a scam.

Of course, if you’re building a script plugin with a host of custom classes that have deep business logic of their own, your mileage may vary. Feel free to prove us wrong and write a robust unit test suite for your scripts if you’re convinced it’s needed. Just make sure you’ve got enough integration tests in the pile to be braced for the inevitable changes to Atlassian’s API in the next major release, and don’t kill yourself trying to mock/stub the Atlassian API and every library or framework in the mix.

Writing and Running Tests

First things first, a word of caution:

Avoid running tests on a production server. Most tests will create data to perform tests on, change bits of the host application’s configuration, and otherwise do things that you wouldn’t want done on a real live system with users milling about. Make sure to set up a test instance and/or local development environment where you can run your tests. You can get a development license to set up a test server running a cloned instance of your Atlassian application.

Okay, down to business. How do we write and run one of these tests?

Two test libraries are included, Spock and JUnit. Of the two, we recommend Spock. Like ScriptRunner, it was born in the Groovy ecosystem and it makes automated tests more readable, maintainable, and approachable.

A basic Spock test looks like this:

import com.onresolve.scriptrunner.canned.common.admin.SrSpecification

class MyVeryOwnScriptSpec extends SrSpecification {

    def "test that my script does what I say it does"() {
        setup: "create any test data I need (projects, issues, etc.)"

        when: "I run my script"
        //write code that makes your script run here

        then: "my script makes the changes I expect"
        true //Each line is an assertion
        1 == 1 //Anything that returns true will let the test pass
        "a" == "b" //Anything that returns false will cause the test to fail
    }
}

Fill in the setup:, when:, and then: blocks with code that tests your script. The particulars will vary based on what you’re testing (an event handler, a REST endpoint, a Jira workflow function, a Confluence macro, etc.).

See the child pages of this page for some key particulars.

Once you’ve written your test, you can save it to one of your script roots and run it as described below.
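As a concrete sketch, here is what a filled-in spec might look like. The class name and the prefixSummary helper below are hypothetical stand-ins for your own script logic; a real test would typically exercise your script through the host application rather than a local method.

```groovy
import com.onresolve.scriptrunner.canned.common.admin.SrSpecification

class SummaryPrefixSpec extends SrSpecification {

    // Hypothetical helper standing in for logic your script performs
    static String prefixSummary(String summary, String projectKey) {
        summary.startsWith("[$projectKey]") ? summary : "[$projectKey] $summary"
    }

    def "summaries get the project key prefix exactly once"() {
        when: "I run the prefixing logic my script uses"
        def once = prefixSummary("Fix login bug", "SRTESTPRJ")
        def twice = prefixSummary(once, "SRTESTPRJ")

        then: "the prefix is added, but never duplicated"
        once == "[SRTESTPRJ] Fix login bug"
        twice == once
    }
}
```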

Test-driven workflow

When starting on a new feature, we at Adaptavist often use a test-driven development workflow. In plain English, that means the first thing we do is write a test for what we want to happen. We run the test, and it fails (at least it should), indicating that our new feature isn’t working yet. Then we implement the feature until the test passes. Usually there’s some code cleanup to do after the first pass, so we’ll refactor, then make sure the test still passes. Then we’ll get peer review, make any suggested edits, and again, the test is there to help us quickly verify that our new code is still running as expected.
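In Spock terms, the first "red" step is simply a test whose then: block describes behavior that doesn’t exist yet. In this illustrative sketch, buildReport is the feature still to be written, and the issues are plain maps rather than real Jira issues:

```groovy
def "closed issues are excluded from the report"() {
    setup: "a mix of open and closed issues (plain maps, for illustration)"
    def issues = [[key: "SRTESTPRJ-1", status: "Open"],
                  [key: "SRTESTPRJ-2", status: "Closed"]]

    when: "the report builder runs (it doesn't exist yet, so this fails first)"
    def report = buildReport(issues)

    then: "only open issues appear"
    report*.key == ["SRTESTPRJ-1"]
}
```

Once buildReport is implemented and this passes, you refactor with the test as your safety net.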

Running Tests

There are two ways that you can run automated tests: through the Test Runner Built-in Script, or via your IDE.

Using the Built-in Script

You can run your tests through the Test Runner under the Built-in Scripts menu.

test runner

When you first open the test runner, the package will be set to com.acme.scriptrunner.test, which contains some sample tests shipped with the plugin. You should be able to run all of the tests (one is designed to fail, to demonstrate what a failing test looks like).

Running the pre-packaged tests will create a project with key SRTESTPRJ, along with a workflow and workflow scheme. All of these are safe to delete and will be recreated each time you run the tests.

When you write your own tests you will want to change the default package. If you want to use the included tests as a basis for your own, check out the source code for the sample plugins, or you can unzip the ScriptRunner jar itself and ferret the tests out if you’re feeling adventurous. :)

When running the test runner you will need to specify the base package(s) under which you keep your tests, generally something like com.yourcompany.scriptrunner.test. In this case you would have a directory com/yourcompany/scriptrunner/test under your script root, where you keep your test classes. Make sure you are using a source control system, so that you can test on your dev instance, commit your changes, then update your working copy on your production system.
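For example, assuming ./scripts is configured as a script root (the path is illustrative), the matching directory layout can be created like this:

```shell
# Create the package directory matching com.yourcompany.scriptrunner.test
mkdir -p scripts/com/yourcompany/scriptrunner/test

# A test class saved here must declare the matching package
cat > scripts/com/yourcompany/scriptrunner/test/MyVeryOwnScriptSpec.groovy <<'EOF'
package com.yourcompany.scriptrunner.test
EOF

find scripts -name '*.groovy'
```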

Running from an IDE

You can run tests from your IDE. We have only tested with IntelliJ IDEA, and recommend it.

The way this works is that a test runner executes your tests via REST, inside the running application (JIRA, Confluence, Bitbucket, etc.). In order for the IDE to know which runner to use, you must annotate your test class with the @RunWith annotation. Example:

import com.onresolve.scriptrunner.canned.common.admin.ScriptRunnerTestRunner
import org.junit.runner.RunWith
import spock.lang.Specification

@RunWith(ScriptRunnerTestRunner)
class TestRunnerSampleSpec extends Specification {

    def "test something"() {
        // ...exercise your script and assert on the results...
    }
}

When you add a new test class, you must build your project (mvn package). This is because the tests are invoked locally, and unless the compiled class is available the IDE cannot know which test runner to use. After this initial build you can add new test methods and execute them without rebuilding.

IDEA Configuration

The IDE runs your test locally, which executes a REST call to the app, so the IDE needs to know the address of your running application in order to tell the app to run the tests. To do this you can modify the default settings for JUnit tests:

Do Run → Edit Configurations, then open Defaults → JUnit, as shown below:

debug test config
  1. Add a property -Dbaseurl= pointing to the URL of the running application. You can omit this if your application is running locally on port 8080, e.g. http://localhost:8080/jira.

  2. Under Before Launch, delete the Build task using the "minus" button. It’s not necessary to rebuild the plugin after editing the tests, as ScriptRunner will recompile the test if necessary.

  3. Uncheck the "Activate Tool Window" box. It’s not useful to switch to the tool window, as any failures will be shown in the main "log" window.

As mentioned, any exception will be shown in the tool window, but log messages are not redirected to it, so in practice there is little point in looking at the tool window.
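For example, if your test instance is a Jira running at a non-default address (the host below is illustrative), the VM options field of the JUnit run configuration would contain:

```
-Dbaseurl=http://jira-test.example.com:8080/jira
```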

You can run the code by clicking the annotations in the margin, as shown below:

debug test code

Or, for keyboard shortcut addicts, put the cursor in the test class name or method name, and press Ctrl + Shift + F10 (on Windows) or Ctrl + Shift + R (on Mac OS X).

For how-to questions please ask on Atlassian Answers where there is a very active community. Adaptavist staff are also likely to respond there.

Ask a question about ScriptRunner for JIRA, for Bitbucket Server, or for Confluence.