nf-core/configs: Roslin Configuration
This profile is similar to the ‘eddie’ profile managed by the IGC team, but is focused towards Eddie/nf-core users at Roslin. The nf-core pipelines sarek, rnaseq, chipseq, mag, differentialabundance and isoseq have all been tested on the University of Edinburgh Eddie HPC with the test profile.
Getting help
There is a Teams group dedicated to Nextflow users: Nextflow Teams. You can also find help at the coding club held each Wednesday: Code Club Teams. Please also contact the Roslin Bioinformatics team with questions and we’ll try to help: https://www.wiki.ed.ac.uk/spaces/RosBio/pages/602179073/Roslin+Bioinformatics+Home
We also have some notes on running the rnaseq pipeline (much of which applies to all nf-core pipelines) here: https://www.wiki.ed.ac.uk/spaces/RosBio/pages/649925054/Nextflow+and+nf-core
Using the Roslin config profile
To use, run the pipeline with -profile roslin (one hyphen).
This will download and launch the roslin.config file which has been pre-configured with a setup suitable for the University of Edinburgh Eddie HPC.
The configuration file supports running nf-core pipelines with Docker containers running under Singularity by default. Conda is not currently supported.
```bash
nextflow run nf-core/PIPELINE -profile roslin # ...rest of pipeline flags
```

Before running the pipeline, you will need to load Nextflow from the module system or activate your Nextflow conda environment. Generally, the most recent version will be the one you want.
To list versions:
```bash
module avail -C Nextflow
```

To load the most recent version (as of 28/10/2025):
```bash
module load roslin/nextflow/25.04.6
```

This config enables Nextflow to manage the pipeline jobs via the SGE job scheduler and to use Singularity/Apptainer for software management.
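Putting the pieces together, a minimal end-to-end launch might look like the sketch below. It uses nf-core/rnaseq with its test profile (one of the pipelines we have tested); the project code and output directory are placeholders, and the NFX_SGE_PROJECT line is optional (see the SGE project section below).

```bash
module load roslin/nextflow/25.04.6            # load Nextflow from the module system
export NFX_SGE_PROJECT="<PROJECT_NAME_HERE>"   # optional: use a paid SGE project (placeholder)
nextflow run nf-core/rnaseq \
  -profile test,roslin \
  --outdir rnaseq_test_results                 # placeholder output directory
```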
Apptainer/Singularity set up
We have now (from August 2025) configured the roslin profile to use Apptainer rather than Singularity in the worker node jobs. This works better for us on Eddie with Nextflow and the nf-core pipelines. The roslin profile is set to use /exports/cmvm/eddie/eb/groups/alaw3_eb_singularity_cache as the Apptainer/Singularity cache directory. This directory is made available to Roslin Institute Nextflow/nf-core users by the Roslin Bioinformatics group led by Andy Law. If an SGE project code is set up (see the next section for more information), all new containers will be cached in this directory. Otherwise, the Apptainer containers will be stored in the work directory created when Nextflow is run. If you face any problem with the Singularity cache, please contact Sébastien Guizard, Donald Dunbar and Andy Law with the Roslin Bioinformatics group in CC.
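If you are unsure which of the two locations applies to you, a quick check along these lines mirrors the cacheDir logic in the config file at the bottom of this page (a convenience sketch only, not part of the profile):

```bash
# Convenience check only: mirrors the cacheDir logic of the roslin profile
if [ -n "${NFX_SGE_PROJECT:-}" ]; then
    echo "Containers will be cached in /exports/cmvm/eddie/eb/groups/alaw3_eb_singularity_cache"
else
    echo "Containers will be stored in the work directory of the Nextflow run"
fi
```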
Apptainer/Singularity will by default create a directory .singularity in your $HOME directory on Eddie. Space on $HOME is very limited, so it is a good idea to create a directory somewhere else with more room and link the two locations.
```bash
cd $HOME
mkdir /exports/eddie/path/to/my/area/.singularity
ln -s /exports/eddie/path/to/my/area/.singularity .singularity
```

SGE project set up
By default, users’ jobs are started with the uoe_baseline project, which gives access to the free nodes. If you have a project code that gives you access to paid nodes, it can be used by the jobs submitted by Nextflow. To do so, you need to set up an environment variable called NFX_SGE_PROJECT:
```bash
export NFX_SGE_PROJECT="<PROJECT_NAME_HERE>"
```

If you wish, you can place this variable declaration in the .bashrc file located in your home directory to set it automatically each time you log in to Eddie.
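For illustration (assuming a standard bash login shell that reads ~/.bashrc), the declaration can be appended like this; the project name is a placeholder:

```bash
# Persist the project code for future logins (replace the placeholder with your project code)
echo 'export NFX_SGE_PROJECT="<PROJECT_NAME_HERE>"' >> ~/.bashrc
source ~/.bashrc   # also apply it to the current session
```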
NB: This will work only with the roslin profile.
Excluding problematic nodes
Eddie is a fragile little thing. From time to time, some nodes might struggle to run Singularity. The most common error message is: env: ‘singularity’: No such file or directory. The reason why this error occurs is still obscure, but we suspect network problems around the network disks.
A temporary solution is to exclude the problematic nodes in the job requirements. Similarly to the project code variable (see above), the profile detects a specific environment variable containing the list of nodes to exclude.
Finding those nodes can be done by extracting the job IDs from the execution trace file, then requesting job information with qacct. To facilitate this search, we wrote a bash script that will list the nodes and print them to screen. You can find it and copy it on Eddie from here (do not forget to make it executable: chmod a+x get_fail_jobs_nodes.sh).
The script takes an execution trace file as input via the --file option. It reads the file, finds the failed jobs, extracts their job IDs, requests information from the scheduler, extracts the execution nodes and formats the names before printing them.
```bash
get_fail_jobs_nodes.sh --file execution_trace_2026-01-20_09-32-27.txt
```
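If you prefer to do the lookup by hand, the sketch below is a rough equivalent of what the script does. It assumes the default trace columns (native job ID in column 3, status in column 5) and that qacct can still see the finished jobs; adjust the column numbers if you have customised the trace fields.

```bash
# Hypothetical manual lookup: list the hosts that ran failed tasks
awk -F'\t' '$5 == "FAILED" {print $3}' execution_trace_2026-01-20_09-32-27.txt |
    while read -r jobid; do
        qacct -j "$jobid" | awk '/^hostname/ {print $2}'
    done | sort -u
```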
Then, you can set up an environment variable called NFX_NODE_EXCLUSION and copy/paste the printed node list.
```bash
export NFX_NODE_EXCLUSION="<FORMATTED_LIST_OF_NODES_TO_EXCLUDE>"
```

Running Nextflow
On a login node
You can use a qlogin to run Nextflow, provided you request more than the default 2 GB of memory. Unfortunately, you can’t submit the initial Nextflow run process as a job, as you can’t qsub within a qsub. If your Eddie terminal disconnects, your Nextflow job will stop. You can run qlogin in a screen session to prevent this.
Start a new screen session.
```bash
screen -S <session_name>
```

Start an interactive job with qlogin.
```bash
qlogin -l h_vmem=8G
```

You can leave your screen session by typing Ctrl + A, then d.
To list existing screen sessions, use:
```bash
screen -ls
```

To reconnect to an existing screen session, use:
```bash
screen -r <session_name>
```

On the wild west node
The Wild West node has relaxed restrictions compared to regular nodes, which allows the execution of Nextflow. Access to the Wild West node must be requested from Andy Law and IS. As with the qlogin option, it is advised to run Nextflow within a screen session.
Config file
```nextflow
//Profile config names for nf-core/configs
params {
    config_profile_description = 'University of Edinburgh (Eddie) cluster profile for Roslin Institute provided by nf-core/configs.'
    config_profile_contact     = 'Sebastien Guizard (@sguizard) and Donald Dunbar (@donalddunbar)'
    config_profile_url         = 'https://www.ed.ac.uk/information-services/research-support/research-computing/ecdf/high-performance-computing'
}

executor {
    name = "sge"
}

process {
    stageInMode = 'symlink'
    scratch     = 'false'
    penv        = { task.cpus > 1 ? "sharedmem" : null }

    // This will override all jobs' clusterOptions
    clusterOptions = {
        def memoryPerCpu = task.memory.toMega() / task.cpus
        def nodeExclusion = System.getenv('NFX_NODE_EXCLUSION') ?
            "-l h=!(${System.getenv('NFX_NODE_EXCLUSION')})" :
            ""
        def projectCode = System.getenv('NFX_SGE_PROJECT') ?
            "${System.getenv('NFX_SGE_PROJECT')}" :
            "uoe_baseline"
        "-l h_rss=${memoryPerCpu}M -P ${projectCode} ${nodeExclusion}"
    }

    // Common SGE error statuses
    errorStrategy = { task.exitStatus in [143, 137, 104, 134, 139, 140] ? 'retry' : 'finish' }
    maxErrors     = '-1'
    maxRetries    = 3

    // Use the apptainer module instead of singularity
    beforeScript =
        """
        . /etc/profile.d/modules.sh
        module load igmm/apps/apptainer/1.3.4
        export SINGULARITY_TMPDIR="\$TMPDIR"
        """
}

env {
    MALLOC_ARENA_MAX = 1
}

singularity {
    envWhitelist = "SINGULARITY_TMPDIR,TMPDIR"
    runOptions   = '-p -B "$TMPDIR"'
    enabled      = true
    autoMounts   = true

    // Define the singularity cache directory depending on the presence of the NFX_SGE_PROJECT variable.
    // Users without a compute project can't access the shared cache directory,
    // so they need to store singularity images in the work directory.
    cacheDir = System.getenv('NFX_SGE_PROJECT') ?
        "/exports/cmvm/eddie/eb/groups/alaw3_eb_singularity_cache" :
        null
}
```
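As a worked example of the clusterOptions closure above: a task requesting 4 CPUs and 16 GB of memory would be submitted with -l h_rss=4096M (16384 MB divided across the 4 slots, since the memory request is applied per slot), with -P uoe_baseline unless NFX_SGE_PROJECT is set, and with a -l h=!(...) exclusion only when NFX_NODE_EXCLUSION is set.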