== Links ==
* [https://learnxinyminutes.com/docs/groovy/ Learn X in Y minutes - Groovy]
* [https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/ Share a standard Pipeline across multiple projects with Shared Libraries]
* [https://www.jenkins.io/doc/book/pipeline/shared-libraries/ Extending with Shared Libraries]

== Groovy reference ==

=== Strings ===
<source lang="groovy">
s="hello"
println s             // hello
println "${s}"        // hello
println '${s}'        // ${s}
sh "echo s is ${s}"   // s is hello -- s is substituted by Groovy
sh 'echo s is ${s}'   // s is       -- s env var not defined

env.s="hello"
println "${env.s}"       // hello
println '${env.s}'       // ${env.s}
sh "echo s is ${env.s}"  // s is hello
sh 'echo s is ${env.s}'  // s is hello

env.BUILD_NUMBER0=String.format('%03d', BUILD_NUMBER as int)
println "${env.BUILD_NUMBER0}"  // 001

// https://stackoverflow.com/questions/50029296/extracting-part-of-a-string-on-jenkins-pipeline
def url = "git@github.com:project/access-server-pd.git"
final beforeColon = url.substring(0, url.indexOf(':'))  // git@github.com
final afterLastSlash = url.substring(url.lastIndexOf('/') + 1, url.length())  // access-server-pd.git
println beforeColon

// Note: def / final optional

String a = "Hello World Hello";
println(a.matches("Hello(.*)")); // true
println(a.replaceAll("^Hello","Bye"));  // Bye World Hello

// dirname using find operator
@NonCPS
def dir_name(path) {
  def dir = path =~ /(^.*)[\\\/]/
  dir ? dir[0][1] : '.'
}

@NonCPS
def dir_name_slash(path) {
  def dir = path =~ /(^.*[\\\/])/
  dir ? dir[0][1] : './'
}

println dir_name_slash("T'ar\\ta\\gueule\\jenkins\\de\\merde.exe")  // T'ar\ta\gueule\jenkins\de\
println dir_name_slash("T'ar/ta/gueule/jenkins/de/merde.exe")       // T'ar/ta/gueule/jenkins/de/
println dir_name_slash("merde.exe")                                 // ./
println dir_name_slash("c:\\merde.exe")                             // c:\

array

def arr=[]
arr += ['foo']
arr += ['bar']

arr.join(', ')  // 'foo, bar'
'foo' in arr    // true

</source>

=== Map ===
<source lang="groovy">
def map=[:]
map += ['foo': "FOO"]
map += ['bar': { println "BAR" }]
map += [baz: { println "BAZ" }] // No need to quote the key when defining it
println map['foo']     // 'FOO'
map['bar']()           // 'BAR'
map.bar()              // ... can use shorter notation
'foo' in map           // true

// Note that a Groovy map maintains insertion order (https://stackoverflow.com/questions/32811732/do-maps-in-groovy-maintain-order)
map.each { println "$it.key, $it.value" }
map.each { iter -> println "$iter.key, $iter.value" }

</source>

== Tips ==

=== Cancel older builds when a new one starts ===

Using <code>milestone</code>, we can cancel older builds when a new commit is pushed.

Here is an example where every new build cancels the older ones, except on the master branch. This is useful to increase the throughput of Jenkins slaves, and makes sense since non-master branches typically do not require every intermediate commit to be thoroughly tested/analysed.

<source lang="groovy">
stage('UTs')
{
    // If not on master and older builds are ongoing, cancel them!
    if ( env.BRANCH_NAME != 'master' )
    {
        def buildNumber = env.BUILD_NUMBER as int
        if (buildNumber > 1) milestone(buildNumber - 1)
        milestone(buildNumber)
    }
    // ...
}
</source>

=== Use 'Pipeline Syntax' to write Groovy scripts ===
Jenkins offers a '''Pipeline Syntax''' button to generate Groovy snippets. Very useful for adding new commands.
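For instance, picking the <code>archiveArtifacts</code> step in the generator produces a one-liner like the following that can be pasted into the pipeline (the step choice and filter value here are just examples, not from the original page):

<source lang="groovy">
// Snippet of the kind produced by the 'Pipeline Syntax' generator (example values)
archiveArtifacts artifacts: 'build/**/*.tgz', fingerprint: true
</source>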

=== Custom git scm checkout with another submodule ===
The <code>checkout</code> line was created with the Pipeline Syntax tool in Jenkins. Then we simply check out the branch we want in the submodule.

<source lang="groovy">
def checkoutUsk() {
    checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: false, recursiveSubmodules: true, reference: '', trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[url: 'ssh://user@server.com/project.git']]])
    sh 'cd my_submodule && git checkout origin/master && cd ..'
}
</source>

=== Checkout git with submodule, not recursive ===
Say we have a repo with submodules, which themselves have submodules, but we only want to check out the first level.

The settings that work (a scripted-pipeline sketch of the same options follows the list):
* Add the parent repo.
* Add '''Advanced clone behaviours''', and select shallow clone with a depth of 1.
:* Optionally, select fetch tags or not (note that fetching tags may lead to failures if people create tag <code>foo</code>, then tag <code>foo/bar</code>).
* Select '''Advanced sub-modules behaviours''', but don't tick '''Recursively update submodules'''.
:* Also, '''DO NOT''' select shallow clone for submodules. It fails.
* Optionally, add '''Clean Before Checkout'''.
* Optionally, add '''Prune stale tags'''.
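For reference, here is a rough scripted-pipeline sketch of the same options, assuming a repository URL of <code>ssh://user@server.com/project.git</code> (adapt URL, branch and credentials to your setup):

<source lang="groovy">
// Sketch only: shallow clone of depth 1, submodules updated but NOT recursively, clean before checkout
checkout([$class: 'GitSCM',
          branches: [[name: '*/master']],
          extensions: [
              [$class: 'CloneOption', shallow: true, depth: 1, noTags: false],
              [$class: 'SubmoduleOption', disableSubmodules: false, recursiveSubmodules: false, shallow: false],
              [$class: 'CleanBeforeCheckout']
          ],
          userRemoteConfigs: [[url: 'ssh://user@server.com/project.git']]])
</source>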

=== Increase perf on Windows ===
* Disable the anti-virus, or move the master/slave workspace to a directory that is not scanned.
* Careful with slaves: on Windows, the slave uses its launch directory as base directory. Move that, or define <code>user.dir</code> in the slave environment variables (within Jenkins).
* Disable Windows Search.

=== Increase job number ===
First get a listing of all available jobs in the script console ('''Jenkins &rarr; Manage Jenkins &rarr; Script Console'''):

<source lang="groovy">
Jenkins.instance.getAllItems(AbstractItem.class).each {
    println it.fullName + " - " + it.class
};
</source>

Then set the next build number for the selected job with:

<source lang="groovy">
Jenkins.instance.getItemByFullName("your/job/name").updateNextBuildNumber(128)
</source>
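To double-check, the new value can be printed back (sketch; <code>your/job/name</code> is the same placeholder as above):

<source lang="groovy">
// Print the next build number of the job we just updated
println Jenkins.instance.getItemByFullName("your/job/name").nextBuildNumber
</source>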

=== Git push tag using a SSH key ===
This is the most difficult action ever. On Windows in particular. Really.

The solution is to use the '''SSH Agent''' plugin:
* This requires that <code>ssh-agent</code> is available on Windows (e.g. through Git for Windows).
* Add an SSH private key credential in Jenkins. This is the same kind of credential we use, for instance, to clone a git repo.

Then the magic setup:

<source lang="groovy">
stage('deliver') {
    sshagent(['b123c5ec-fb7c-6601-bd09-2e8d3be0aaf0']) { // Use your credential id here
        bat "git push -f origin dev/foo"

        // Some alternatives:
        // sh "git push -f origin dev/foo"   // This may use a different user / home directory

        // sh "ssh -v username@server"       // For debugging

        // To force acceptance of the host key:
        // sh "GIT_SSH_COMMAND='ssh -o StrictHostKeyChecking=no' git push -f origin dev/foo"
    }
}
</source>
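Since the goal is to push a tag, the same pattern works for tags as well; a sketch (the tag name <code>v1.2.3</code> is just an example):

<source lang="groovy">
sshagent(['b123c5ec-fb7c-6601-bd09-2e8d3be0aaf0']) { // Same credential id as above
    bat "git tag -f v1.2.3"          // create/move the tag locally
    bat "git push -f origin v1.2.3"  // push the tag using the agent-provided key
}
</source>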

Possible issues:
* Wrong user being picked, hence the wrong <code>.ssh/</code> folder is used.
* Environment clashes / wrong <code>PATH</code> in either the <code>bat</code> or <code>sh</code> shell.

=== Installing a plugin offline ===
* Download the plugin <code>.hpi</code> file.
* Go to '''Jenkins &rarr; Manage Jenkins &rarr; Manage Plugins &rarr; Advanced'''.
* Use the '''Upload Plugin''' panel.

=== Move multibranch pipeline to a git submodule ===
See [https://devops.stackexchange.com/questions/9243/is-there-a-way-to-use-a-jenkinsfile-from-a-git-submodule-in-a-multibranch-pipeli/16730#16730 stackexchange] (and also [https://stackoverflow.com/questions/37800195/how-do-you-load-a-groovy-file-and-execute-it here]).

Let's say we have the following project tree:

<source lang="text">
build/
    Jenkinsfile
src/
    ...
common/           <-- a git submodule
    src/
        ...
</source>

Our Jenkinsfile is something like:

<source lang="groovy">
// file: build/Jenkinsfile

def doThings()
{
    // ...
}

try {
    node {
        // ...
    }
    // ...
}
catch(e) {
    // ...
}
// ...
</source>

So basically some utility functions and some pipeline stages (here wrapped in a try-catch, for instance).

Two actions are required to move this file to our common submodule:

* Move the file to the submodule and adapt it slightly.
* Replace the original Jenkinsfile with a small boilerplate version.

Our moved Jenkinsfile becomes:

<source lang="groovy">
// file: common/build/Jenkinsfile

def doThings()
{
    // ...
}

def call() {
    try {
        // ...
    }
    catch(e) {
        // ...
    }
    // ...
}

return this
</source>

So basically we wrap the pipeline in a <code>call()</code> function, and we must not forget the <code>return this</code> at the end of the file.

The original Jenkinsfile is then replaced with this small boilerplate, as explained in the original post:

<source lang="groovy">
// file: build/Jenkinsfile

def myPipeline
node {
    checkout scm // mandatory
    myPipeline = load "common/build/Jenkinsfile"
}
myPipeline()
</source>

Of course this can be tuned further to pass parameters, use several Jenkinsfiles, etc. For instance, reusing ideas from the [https://www.jenkins.io/blog/2017/10/02/pipeline-templates-with-shared-libraries/ documentation]:

<source lang="groovy">
def call(Map params)
{
    println "Building for project " + params.projectName
    // ...
}
</source>

<source lang="groovy">
// ...
myPipeline(projectName: "MyProject")
</source>

Also note that this checks out the whole project, which might be inefficient. A better approach would be to check out only the submodule that contains the Jenkinsfile.
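A possible sketch of that idea, assuming the <code>common</code> submodule lives in its own repository at <code>ssh://user@server.com/common.git</code> (hypothetical URL):

<source lang="groovy">
// file: build/Jenkinsfile (sketch)
def myPipeline
node {
    dir('common') {
        // Clone only the shared repository that holds the Jenkinsfile
        git url: 'ssh://user@server.com/common.git', branch: 'master'
    }
    myPipeline = load "common/build/Jenkinsfile"
}
myPipeline()
</source>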

== Troubleshoot ==

=== igfx_ error ===
Doing this:

<source lang="groovy">
println new ProcessBuilder('sh','-c','ls').redirectErrorStream(true).start().text
</source>

we got an igfx_ crash message on Windows. The PC was no longer accessible through remote desktop, and had to be rebooted.

=== Multibranch failed because of tag/branch name conflict (some local refs could not be updated) ===
This occurs when, for instance, a branch named <code>dev/foo</code> is deleted and a new branch <code>dev/foo/bar</code> is created afterwards.

<source lang="text">
error: cannot lock ref 'refs/tags/dev/delivery/FOO': 'refs/tags/dev/delivery' exists; cannot create 'refs/tags/dev/delivery/FOO'
 ! [new tag]         dev/delivery/FOO -> dev/delivery/FOO  (unable to update local ref)
error: some local refs could not be updated; try running
 'git remote prune ssh://git.server.com/project/project.git' to remove any old, conflicting branches
</source>

To fix:

* Go to the multibranch scan log and look for lines like:

<source lang="text">
Creating git repository in C:\Jenkins\caches\git-1f7143a5b7a29bcbc3c47f31fc7a597c
 > git init C:\Jenkins\caches\git-1f7143a5b7a29bcbc3c47f31fc7a597c # timeout=10
</source>

* Go to the server, and delete that cache directory.

=== Unable to find project for artifact copy ===
We get the following error in a multibranch pipeline:

<source lang="text">
ERROR: Unable to find project for artifact copy: Foo/bar/master
This may be due to incorrect project name or permission settings; see help for project name in job configuration.
</source>

A FIRST problem is that the '''Permission to copy artifact''' cannot be set from the multibranch configuration, but must be set for each branch separately.

In scripted pipelines, do [1]:

<source lang="groovy">
properties([[$class: 'JiraProjectProperty'], copyArtifactPermission('*')])
</source>

In declarative pipelines, do:

<source lang="groovy">
options {
    copyArtifactPermission('my-downstream-project');
}
</source>
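For context, a minimal declarative skeleton showing where that <code>options</code> block goes (the stage content is only an illustrative placeholder):

<source lang="groovy">
pipeline {
    agent any
    options {
        // Allow the downstream project to copy artifacts from this job
        copyArtifactPermission('my-downstream-project')
    }
    stages {
        stage('Build') {
            steps {
                // Archive something for the downstream job to copy (example filter)
                archiveArtifacts artifacts: '**/*.tgz', fingerprint: true
            }
        }
    }
}
</source>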

A SECOND problem is that the full path of the artifact must be given, so better use <code>**/*.tgz</code> or the like:

<source lang="groovy">
copyArtifacts filter: '**/*.wsp, **/*.tgz, **/*.exe', fingerprintArtifacts: true, projectName: '${JOB_NAME}', selector: specific(''+currentBuild.number)
</source>

=== Jenkins pipeline java.io.NotSerializableException ===
Yet more horror from Jenkins [https://stackoverflow.com/questions/40454558/jenkins-pipeline-java-io-notserializableexception-java-util-regex-matcher-error].

* Jenkins expects all variables to be '''serializable'''.
* Variables that are '''null''' are excluded from this requirement.
* In particular, <code>Matcher</code> objects (like <code>foo =~ /abc/</code>) are not serializable.

As a result:

* Use and abuse <code>def</code>. Without it, all variables are defined in the '''global''' scope.
* Use the <code>@NonCPS</code> annotation on functions that are not serializable. Example:

<source lang="groovy">
@NonCPS
def version(text) {
  def matcher = text =~ '<version>(.+)</version>'   // This matcher is NOT serializable
  matcher ? matcher[0][1] : null
}
</source>
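A small usage sketch, assuming the version is read from a <code>pom.xml</code> in the workspace (hypothetical file name):

<source lang="groovy">
node {
    def text = readFile 'pom.xml'   // standard pipeline step, returns a String
    def v = version(text)           // safe: the Matcher never leaves the @NonCPS function
    echo "Version is ${v}"
}
</source>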