Static Type Checking

Static type checking is a feature that reports problems in your script as you write it, before it runs.

The first thing to understand about Groovy is that it’s a dynamic language. Amongst many other things, this means that method and property names are looked up when your code is run, not when it’s compiled, unlike Java.

Take a very simple but complete script:

foo.bar()

That is, call the method bar() on the object foo. Unlike Java, this script will compile without errors, but when you run it you will get a MissingPropertyException, as foo hasn’t been defined. This behaviour is useful because there are many circumstances that could make this code execute successfully, for instance an object called foo, or a closure getFoo(), being provided in the script’s binding.

Although Groovy is a dynamic language, we can compile scripts in a manner that checks method and property references at compile time. The new static type checking (STC) feature does just that, so that you can see any problems in your scripts when you are writing them, as opposed to when they execute.

It’s important to understand that when your scripts are executed, they are always compiled dynamically. When they are compiled for STC, the resulting generated bytecode is thrown away.

There are limitations to the type checker - it’s quite possible to write code that is displayed as having errors, but which is perfectly valid and will execute fine. Some examples of this are:

  • using certain Builders

  • using closures where the parameter types can’t be inferred

However, if you are writing code like this you are probably already using an IDE, and can safely ignore the type checker.

At the risk of repetition, the type checker is not always correct. It is a best-efforts attempt to let you know if there are problems with your code - but please let us know if you find something that should compile but doesn’t. Note also that your code could still have runtime errors that won’t be found until it executes.

If we were writing a condition to check that the remaining estimate on the issue was zero, we might inadvertently write:

type checked condition

STC is telling us that we are trying to set the estimate, rather than retrieve it. We meant to use the equality operator:

type checked condition ok

Note the green dot, which tells us that the code is syntactically correct, and the methods and properties have been checked.
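The difference between the two conditions above can be sketched as follows. This assumes a workflow condition script, where the issue variable is provided by the script binding and estimate is its remaining estimate (a Long):

```groovy
// Wrong: this *assigns* 0 to the estimate. STC flags it, because a
// condition is expected to read a value and evaluate to a boolean.
issue.estimate = 0L

// Right: this *compares* the remaining estimate to zero and
// evaluates to true or false, as a condition should.
issue.estimate == 0L
```

The assignment form is the classic one-character typo (= instead of ==) that the type checker catches at write time rather than at run time.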

Continuing our example, let’s say we are writing code for the Additional Issue Actions, which is commonly used to manipulate the resulting issue when creating a subtask or cloning and linking an issue: issue.estimate = 0L. Here the assignment is exactly what we want, so code that is not valid as a Condition can be valid as an additional issue action - the type checker is aware of what type the issue object will be when the code executes.

Deprecations

The type checker will also show you methods and properties that are deprecated. These are parts of the API that Atlassian would prefer you not to use.

For example, in the following image we are using four deprecated methods.

type checked condition depr

It is advisable to change your code to the suggested alternative. Atlassian typically removes deprecated code in major version releases (though not always), so if you switch to the non-deprecated code now, your script has a better chance of continuing to work after your next upgrade.

A fixed version of the above script might look like (in JIRA 6.4):

type checked condition depr fixed

You can view the type checking information for all your code using the script registry.

Tips for type checking

Fields

If you are writing classes, you need to give the type checker information on the types of your fields. So, instead of writing this:

type checked condition class depr

You should declare the type of your field:

CustomFieldManager customFieldManager = ComponentAccessor.getCustomFieldManager()
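Put together, a minimal class with a typed field might look like this. This is a sketch: the class name and helper method are illustrative, while ComponentAccessor and CustomFieldManager come from the standard JIRA API:

```groovy
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.CustomFieldManager

class MyHelper {
    // Declaring the type lets STC check calls made on this field;
    // with plain 'def' the field would be typed as Object, and any
    // method call on it would be flagged.
    CustomFieldManager customFieldManager = ComponentAccessor.getCustomFieldManager()

    def findField(String name) {
        customFieldManager.getCustomFieldObjectByName(name)
    }
}
```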
Closures

Likewise, when writing closures you may need to provide additional type information. This, for example, is fine:

import com.atlassian.jira.component.ComponentAccessor

def attachmentManager = ComponentAccessor.attachmentManager
attachmentManager.getAttachments(issue).findAll {
    it.authorKey == "admin"
}

We can infer from the context that the type of it is an Attachment object.

However, if you wanted to reuse the closure you might write:

import com.atlassian.jira.component.ComponentAccessor

def attachmentManager = ComponentAccessor.attachmentManager

def findByAdmin = { it -> it.authorKey == "admin" }
attachmentManager.getAttachments(issue).findAll(findByAdmin)

If you use this code, STC will produce an error: No such property: authorKey for class: java.lang.Object. To make the type checker happy you can fix this by writing:

import com.atlassian.jira.issue.attachment.Attachment
...
def findByAdmin = { Attachment it -> it.authorKey == "admin" }

Note however that either version of the code is fine; if you want to ignore the type checker you are completely free to do so. As mentioned, the bytecode that is actually executed is always compiled in "dynamic" mode.

Providing Type Information

Certain parts of the JIRA API are not strongly typed, e.g. the methods for getting and setting custom field values are defined to receive and return a java.lang.Object. The type checker only has access to this information, and is not aware of the types of your custom fields.

A special case is cfValues['My Field'], where the type checker makes an extra effort to introspect the type of the field named My Field.
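For example, in a scripted condition, cfValues is provided by the binding, and for a hypothetical number custom field the type checker can work out that the value is numeric:

```groovy
// STC introspects the configuration of the (illustrative) field
// 'My Number Field' to infer the type of the value, so this
// comparison type-checks without an explicit cast.
cfValues['My Number Field'] > 10
```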

Therefore, the following script, which sets a text field to the display name of a user read from a user picker custom field, is flagged as having errors:

type checked customfields

This is because the only information the JIRA API gives us is that user has the type Object. Given that we know a User Picker custom field will give us a User object (pre JIRA 7), we can fix this with a cast:

type checked customfields fixed
I am hesitant about using this as an example, because in JIRA 7 this needs to be cast to a com.atlassian.jira.user.ApplicationUser. If you didn’t provide any type information at all, the code would work fine in both JIRA 6 and 7.
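A sketch of the cast described above (the field name is illustrative, and the target type depends on your JIRA version):

```groovy
import com.atlassian.crowd.embedded.api.User

// The custom field getter is declared to return Object, so we give
// STC a hint. In JIRA 6.x a User Picker field yields a User:
def user = cfValues['My User Picker'] as User
def displayName = user?.displayName

// In JIRA 7 the same field yields a
// com.atlassian.jira.user.ApplicationUser instead, so the cast
// target would change accordingly.
```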

Script Roots

Previously when entering a file to be run (console, workflow function, script field etc) you were required to give the full script path, or the path relative to the catalina.base directory.

This version introduces the concept of script roots - directories under which you can keep your scripts, in files and subdirectories. The advantage of this is that changes to dependent classes are detected and recompiled automatically. Note - the one exception is when you first change a dependent class without having changed the class/script that’s actually called (a JQL function, workflow function etc). In this case, make a trivial change, such as adding a space to a comment, to the calling script. After that, changes to the base or dependent class will trigger recompilation of both.

When the plugin is first installed it will create a directory called "scripts" under your JIRA home directory, and register it as one of its script roots. This should be sufficient for most users and no other configuration need be made. This is a logical place to store your scripts, as it’s preserved during JIRA upgrades, and by definition will be accessible by all nodes in a clustered JIRA.

You can create subdirectories for your scripts, perhaps dividing them up into the business processes they support. You can also create supporting or utility classes to be used by them, but ensure they have the correct package, otherwise you will get a compilation error.

For instance, a script and a class:

<jira-home>/scripts/foo.groovy

import util.Bollo

log.debug("Hello from the script")
Bollo.sayHello()

<jira-home>/scripts/util/Bollo.groovy

package util

public class Bollo {
    public static String sayHello() {
        "hello sailor!!!"
    }
}

Absolute paths outside of script roots will continue to work, although changes to dependent classes may not get picked up.

AST Browser

The AST browser is a web-based implementation of the AST browser found in the Groovy distribution. It can be useful for very advanced users, as it allows you to see the effect of AST transformations on your code at different phases of the compilation process.

Upgrading from Previous Versions

In previous versions of ScriptRunner, relative paths were resolved against the container working directory, i.e. $catalina.base on Tomcat.

This is no longer the case: relative paths are resolved relative to each of the script roots until a file is found. If you don’t wish to change all your paths, you can add a new script root pointing to either the working directory or, better, to wherever you kept your scripts.

For instance, let’s say your JIRA instance is in /usr/opt/jira, and you had your scripts in /usr/opt/scripts. You would previously have referred to them as ../scripts/foo.groovy.

Now you will add a new property pointing to your scripts dir:

set JAVA_OPTS=%JAVA_OPTS% -Dplugin.script.roots=/usr/opt/scripts

Resolving ../scripts/foo.groovy relative to this script path will have the same result.

If you have multiple roots then use a comma to delimit them.
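For example, to register two script roots (the second path is illustrative), the property would look like:

```
set JAVA_OPTS=%JAVA_OPTS% -Dplugin.script.roots=/usr/opt/scripts,/usr/opt/more-scripts
```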

If you are working on a script locally before deploying to production, you can set breakpoints in scripts or classes and attach the debugger.

If you are working on the plugin it makes sense to add the src and test directories from the checkout, so you can work on the scripts without having to recompile.

set JAVA_OPTS=%JAVA_OPTS% -Dplugin.script.roots=checkout-directory\src\main\resources,checkout-directory\src\test\resources

Logging and Profiling

Every execution of your scripts is recorded. We record:

  • any parameters passed to it (the script binding), which is known as the payload

  • the log output including any exception message, if present

  • timing information, which is the total elapsed time, and the CPU time used

The last 15 executions are displayed where relevant, e.g. in the administration screens for workflows, script fields, script listeners, and REST endpoints.

A summary of recent history is displayed, for example:

diags failure message

Clicking through on any of these will give you further information about that particular invocation.

diags failure display
diags failure dialog

This is most useful for viewing why your own scripts failed, particularly if it’s an intermittent failure, which may only happen because of certain issue attributes - for instance a field value being unexpectedly null.

Only uncaught exceptions are shown as failures.

On JIRA shutdown the last 15 invocations of each function are written to the database so that they persist across a restart.

Known Issues

  • Uncaught exceptions in conditions or additional code are not displayed as errors, but the error will appear in the logs

  • When using JIRA Data Center, only invocations that executed on the current node are shown. If you suspect issues on just one node, you will need to open the corresponding URL on that node

  • Certain categories of scripts are currently excluded from log captures, namely JQL functions and administration scripts (built-in scripts)

  • On JIRA 6.x the persistence functionality may not always work

For how-to questions please ask on Atlassian Answers where there is a very active community. Adaptavist staff are also likely to respond there.

Ask a question about ScriptRunner for JIRA, for Bitbucket Server, or for Confluence.