Operations: Nightly Builds

THIS PAGE IS UNDER CONSTRUCTION

The LCG nightly builds provide regular daily builds of the whole set of packages that is later provided in the LCG releases. The results are distributed to both the AFS and CVMFS storage backends. There are two nightly build slots: dev3 and dev4. Both slots provide the same set of externals, Grid packages and generators (with identical versions). The difference lies in the ROOT version: dev3 runs with the ROOT HEAD version, while dev4 is currently running with the ROOT 6.08-patches version. The current set of LCG releases includes different tags based on ROOT 6.08, and they are therefore built from the information contained in the dev4 slot at the time of the corresponding LCG release.

General Jenkins setup and procedures

Build areas in both AFS and CVMFS

AFS file system

  • /afs/cern.ch/sw/lcg/app/nightlies/dev3/<day_of_the_week>/<package_name>/<version_number>/<platforms>/binaries
  • /afs/cern.ch/sw/lcg/app/nightlies/dev4/<day_of_the_week>/<package_name>/<version_number>/<platforms>/binaries

CVMFS file system

  • /cvmfs/sft.cern.ch/lcg/nightlies/dev3/<day_of_the_week>/<package_name>/<version_number>/<platforms>/binaries
  • /cvmfs/sft.cern.ch/lcg/nightlies/dev4/<day_of_the_week>/<package_name>/<version_number>/<platforms>/binaries

To speed up the process, both slots are built incrementally. This means that packages are built against the latest LCG release available in CVMFS. If a package declared in the nightly build is already available in the releases area (with the same version), a simple link is established from the releases area to the nightly area. Otherwise the package is fully built and installed in both the AFS and CVMFS areas. A purely illustrative sketch of this decision is shown below.
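The link-or-build decision can be pictured with the following minimal sketch; the release-area path, package names and directory layout are placeholders and this is not the actual lcgcmake implementation:

    # Illustrative only: link an already released package, otherwise build it
    RELEASE_AREA=/cvmfs/sft.cern.ch/lcg/releases      # assumed location of the latest LCG release
    WEEKDAY=Mon                                       # example day-of-the-week label
    NIGHTLY_AREA=/cvmfs/sft.cern.ch/lcg/nightlies/dev3/${WEEKDAY}
    PKG=Boost; VERSION=1.62.0; PLATFORM=x86_64-slc6-gcc62-opt   # example values

    if [ -d "${RELEASE_AREA}/${PKG}/${VERSION}/${PLATFORM}" ]; then
        # Same package and version already released: link it into the nightly area
        mkdir -p "${NIGHTLY_AREA}/${PKG}/${VERSION}"
        ln -sfn "${RELEASE_AREA}/${PKG}/${VERSION}/${PLATFORM}" \
                "${NIGHTLY_AREA}/${PKG}/${VERSION}/${PLATFORM}"
    else
        # Otherwise the package is fully built and installed in both AFS and CVMFS
        echo "Building ${PKG} ${VERSION} for ${PLATFORM} from scratch"
    fi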
Daily results of the nightly builds can be found in CDash.

Jenkins slaves

Around 400 cores are available for nightly builds and LCG releases in the EP-SFT Jenkins instance. These cores belong to SLC6, Centos7, mac1011 and Ubuntu1604 systems. Among them, there are two SLC6 32-CPU physical machines used mostly for the builds of releases and for the software installation of the nightly builds in AFS. The SLC6 and Centos7 nodes are handled by the Puppet infrastructure provided by IT.
In addition, the Jenkins master instance is also a Puppet-managed SLC6 32-CPU physical machine.
Finally, two Stratum-0 machines are available to EP-SFT to write to CVMFS. These two machines have also been added to Jenkins as slaves for software installation purposes:

  • cvmfs-sft --> Distribution of LCG releases, compilers and CMake distributions
  • cvmfs-sft-nightlies --> Distribution of nightly builds in CVMFS. The mount area sft-nightlies.cern.ch is linked from the previous node.

Testing of new packages and versions: Validation procedure

The reliability of the packages provided in the LCG releases is ensured by the following validation procedure:

  1. New packages and/or versions are first tested and validated in the experimental and experimental_full slots, available in Jenkins and visible in CDash. The associated binaries are not installed in AFS or CVMFS. The difference between the two slots is that experimental corresponds to an incremental build against the releases area, while experimental_full builds all packages from scratch. The former runs 5 days per week, while the latter is executed only twice per week.
  2. Packages which have been validated in the experimental slots are then included in both dev3 and dev4 as a preliminary step before entering future releases.

The dev3 and dev4 binary distributions are used by the experiments to trigger and build their own daily software stacks.

Build steps included in the dev3-4 slots

The Jenkins jobs responsible for the devX executions are called lcg_ext_dev3 and lcg_ext_dev4.
The set of scripts responsible for the full builds is mostly included in the lcgjenkins repository. Only the infrastructure responsible for the view creation is included in lcgcmake. A complete nightly build executes the following steps:

BUILD PROCEDURES

1. Setup of the environment that will be used during the build. This step includes the following declarations (script: lcgjenkins/jk-setup.sh):

  • Compiler: native for Ubuntu and Mac nodes, gcc 4.9.3 and gcc 6.2.0 for SLC6 and Centos7 builds. In this latter case, the compilers are available from CVMFS and AFS. Nightly builds use the compilers included in CVMFS; therefore, all SLC6 and Centos7 nodes have access to CVMFS. A hypothetical example of selecting one of these compilers is sketched after this list.
    • gcc distribution in CVMFS: /cvmfs/sft.cern.ch/lcg/contrib/gcc
      • Extra gcc versions available: 4.8.4, 4.9.3, 5.1.0, 5.2.0 (old ABI compatible), 6.1.0 (native and old ABI compatible), 6.2.0 (native and old ABI compatible)
    • gcc distribution in AFS: /afs/cern.ch/sw/lcg/contrib/gcc
      • Extra gcc versions available: all versions mentioned in CVMFS (older versions also available).
  • Build type: either optimized (opt) or debug (dbg).
  • CMake version, also installed in both AFS and CVMFS. The latest version, 3.7.0, is currently used for the nightly builds.
    • CMake distribution in CVMFS: /cvmfs/sft.cern.ch/sw/lcg/contrib
      • Extra CMake versions available: 3.2.3, 3.3.2, 3.4.3, 3.5.2, 3.6.0, 3.7.0
    • CMake distribution in AFS: /afs/cern.ch/sw/lcg/contrib/CMake
      • Extra CMake versions available: all versions mentioned in CVMFS (older versions also available)
  • Extra CMake options (if any, provided through the corresponding Jenkins job)
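For illustration, picking up one of these compilers from CVMFS could look like the lines below; the per-version subdirectory layout and the setup script name are assumptions, not taken from this page:

    # Hypothetical: set up gcc 6.2.0 from the CVMFS contrib area (exact layout assumed)
    source /cvmfs/sft.cern.ch/lcg/contrib/gcc/6.2.0/x86_64-slc6/setup.sh
    gcc --version    # should now report gcc 6.2.0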
The combination of architecture, operating system, compiler and build type defines the so-called build platform (e.g. x86_64-slc6-gcc49-opt). Together, these elements define the set of platforms used for each individual build. For each individual platform, the full set of packages is built independently. The complete set of platforms for each slot is summarized below (a sketch of how the platform string is composed follows the table):
 
  • dev3: x86_64-slc6-gcc49-opt, x86_64-slc6-gcc49-dbg, x86_64-slc6-gcc62-opt, x86_64-slc6-gcc62-dbg, x86_64-centos7-gcc62-opt, x86_64-centos7-gcc62-dbg, x86_64-ubuntu1604-gcc54-opt, x86_64-ubuntu1604-gcc54-dbg (native compiler)
  • dev4: x86_64-slc6-gcc49-opt, x86_64-slc6-gcc49-dbg, x86_64-slc6-gcc62-opt, x86_64-slc6-gcc62-dbg, x86_64-centos7-gcc62-opt, x86_64-centos7-gcc62-dbg, x86_64-mac1011-clang80-opt (native compiler)
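As a minimal sketch of how such a platform string is composed (the variable names are illustrative):

    # Illustrative composition of a build platform string, e.g. x86_64-slc6-gcc49-opt
    ARCH=x86_64
    OS=slc6           # or centos7, ubuntu1604, mac1011
    COMPILER=gcc49    # or gcc62, gcc54, clang80
    BUILDTYPE=opt     # or dbg
    PLATFORM=${ARCH}-${OS}-${COMPILER}-${BUILDTYPE}
    echo "${PLATFORM}"    # prints x86_64-slc6-gcc49-opt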
2. Execution of each individual package build (script lcgjenkins/jk-runbuild.sh)
Based on the lcgcmake toolkit, sequential builds of all packages are performed for each platform. In the case of nightly builds, since the build approach is incremental against the releases area, only new packages or versions are built; existing packages available in the release area are simply linked. For each individual package, a tar file containing the binary files is created and stored locally in the corresponding Jenkins slave node at the end of the build. Together with the tar files, a summary txt file is also created listing the packages that have been built, with their versions, hashes and dependencies. This file is instrumental for the installation in the end systems. Its name includes the nightly slot and the platform: LCG_<slot>_<platform>.txt. A purely illustrative sketch of this packaging step is shown below.
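For illustration only, the packaging of a single package can be pictured as follows; the tar file naming and the layout of the summary file line are assumptions, not the real jk-runbuild.sh code:

    # Hypothetical packaging of one package and bookkeeping in the summary file
    PKG=ROOT; VERSION=HEAD; HASH=abc1234; PLATFORM=x86_64-slc6-gcc62-opt   # example values
    INSTALL_DIR=install/${PKG}/${VERSION}/${PLATFORM}                      # placeholder

    # Tar file with the binaries of this package, stored locally on the slave node
    tar -czf "${PKG}-${VERSION}-${PLATFORM}.tgz" -C "${INSTALL_DIR}" .

    # One line per built package; the real file also records the dependencies
    echo "${PKG} ${VERSION} ${HASH}" >> "LCG_dev3_${PLATFORM}.txt"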
 
3. Creation of control files to evaluate the final status of the global build (script lcgjenkins/isDone.sh)
isDone-<PLATFORM> or isDoneUnstable-<PLATFORM> files are touched and installed in both backends, AFS and CVMFS, to summarize the status of the whole build. The former is produced for fully successful builds (i.e., all packages included in the slot have been built successfully), while the latter is produced for partially successful builds (i.e., one or more of the packages included in the nightly build have failed). Even in the latter case, the set of successfully built packages is installed in both AFS and CVMFS. A minimal sketch of this logic is shown below.
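A minimal sketch of this status logic, assuming a simple failure counter (the real isDone.sh may differ):

    # Choose the status file depending on whether any package failed
    PLATFORM=x86_64-slc6-gcc62-opt   # example platform
    FAILED_PACKAGES=0                # assumed to be filled during the build

    if [ "${FAILED_PACKAGES}" -eq 0 ]; then
        touch "isDone-${PLATFORM}"            # fully successful build
    else
        touch "isDoneUnstable-${PLATFORM}"    # partially successful build
    fi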
 
These three steps constitute the package builds themselves. The remaining actions are dedicated to the installation in AFS and CVMFS and the publication of the packages through a web interface.

AFS/CVMFS INSTALLATION PROCEDURES and PACKAGES PUBLICATION

1. Copy of the tar files to the Master node (lcgjenkins/copyToMaster.sh)
This is an intermediate step that collects the tar files from all platforms in a common place on the master node: /build/workspace/nightly-tarfiles/<slot>/<day_of_the_week>. This directory also includes the summary txt file and the isDone or isDoneUnstable files already described in the previous subsection.
2. Copy from the master to EOS (lcgjenkins/copyMastertoEOS.sh)
EOS is the storage backend chosen by SFT to store the tar files and associated build information from the releases and nightly builds, as well as the sources of the packages. The area is reachable from a web interface and its full path is:
/eos/project/l/lcg/www/lcgpackages/tarFiles. The corresponding Jenkins job (called lcg-Master-to-EOS) is automatically triggered at the end of each individual platform-based build job.
3. Installation in AFS and CVMFS (lcgjenkins/afs_install.sh and cvmfs_install.sh)
Separate jobs install in AFS and CVMFS the tar files containing the binaries of each package. The jobs (called lcg_afs_install and lcg_cvmfs_install) are automatically triggered by Jenkins at the end of the complete lcg-Master-to-EOS job execution. These jobs can also be used for the installation of the binaries in the case of releases. The different actions are selected via the BUILDTYPE parameter (nightly, releases or limited) declared as part of the corresponding Jenkins jobs. The jobs take key arguments, such as the PLATFORM or the day of the week, from the configuration of the parent build job. These jobs execute the following steps:

  • Cleanup of the CVMFS area in use, via the clean_nightlies script. In the case of AFS this operation is performed in a separate job (separate for historical reasons) called clean_AFS.py. This job is executed one day in advance and cleans up both the AFS area and the master area hosting the set of tar files of the previous week. clean_nightlies takes the following arguments: the LCG_VERSION (in this case, the nightly slot dev3 or dev4), the platform to erase, the day of the week and the backend.
  • The full set of tar files is downloaded from EOS and installed via the script lcg_install, following the contents of the summary txt file also available in EOS. It sequentially installs the packages included in that file, together with the corresponding dependencies. If the package/version is already available in the releases area, and therefore there is no associated tar file in EOS, a soft link to the release area is created instead. In the case of nightly builds this command acts as follows:

           lcginstall.py -y -u http://lcgpackages.web.cern.ch/lcgpackages/tarFiles/nightlies/${LCG_VERSION}/$weekday -r ${LCG_VERSION} -d LCG_${LCG_VERSION}_$PLATFORM.txt -p /cvmfs/sft-nightlies.cern.ch/lcg/nightlies/${LCG_VERSION}/$weekday/ -e cvmfs

where:

  • -y is used only in the case of nightly builds
  • -u indicates the location on EOS of the tar files associated with that particular release or nightly
  • -r corresponds to the release version (or nightly slot)
  • -d is the name of the description file, available in the same place as the tar files
  • -p is the final installation path
  • -e is the backend.
  • The relocation of the packages is guaranteed through the execution of a post-install script. This script is copied from the build machine to the corresponding backend and executed there after un-tarring the associated package file. This execution takes place only if the package is not available in the releases area and has therefore not been linked; otherwise this step is skipped.
  • Download of the isDone or isDoneUnstable files previously copied to EOS
  • If the build has succeeded, and therefore the isDone file exists, the following steps are performed. Otherwise the nightly installation procedure finishes here:
    • Execution of the script lcgjenkins/extract_LCG_summary, which creates two summary files of the packages installed for that build, one for external packages and one for generators. This split was requested by LHCb.
    • Creation of the views using the script lcgcmake/cmake/scripts/create_lcg_view.py, which takes the following arguments (an example invocation is sketched after this list):
      • -l full path where the packages have been installed
      • -p platform or nightly slot
      • -d delete previous views if existing
      • -B full path where the views are going to be installed.
    • In the case of an AFS installation the procedure finishes here. However, if the installation is taking place in CVMFS there are still some pending steps to ensure the views can be used by the SWAN project:
      • The SWAN project needs access to a stable and up-to-date set of dev3 views and, to simplify its setup, does not want to depend on the day of the week. Therefore CVMFS has a "latest" sub-directory inside the views directory which points to the latest successfully installed set of views.
      • The publication of the full nightly, with all platforms and views, is the last part of the whole installation procedure in CVMFS, via: cvmfs_server publish <mount_point>
NOTE: A full description of the views is given below.
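As an example of the view-creation step described above, and using placeholder locations (the actual installation and view paths depend on the backend, slot and day), the script could be invoked as:

    # Illustrative invocation of create_lcg_view.py; paths are placeholders and "Mon" is an example day.
    # -l: where the packages were installed, -p: platform, -d: delete previous views (assumed plain flag),
    # -B: where the views will be created.
    lcgcmake/cmake/scripts/create_lcg_view.py \
        -l /cvmfs/sft-nightlies.cern.ch/lcg/nightlies/dev3/Mon \
        -p x86_64-slc6-gcc62-opt \
        -d \
        -B /cvmfs/sft.cern.ch/lcg/views/dev3/x86_64-slc6-gcc62-opt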

WHAT ARE THE LCG VIEWS?

Using a given release or nightly requires redefining PATH, LD_LIBRARY_PATH and some extra environment variables so that they point to the binaries provided by those builds. This is the aim of the LCG views. The views group together all libraries, headers, man pages, executables, etc. from all packages included in a release or a nightly into common directories, establishing the following tree structure:

  • Views installation in AFS for nightly builds: /afs/cern.ch/sw/lcg/views/devX/<platform>
  • Views installation in CVMFS for nightly builds: /cvmfs/sft.cern.ch/lcg/views/devX/<platform>

In both cases, under the platform subdirectory, common directories such as "include", "bin", "lib64", etc. contain the corresponding files from all packages that are part of the associated release or nightly build. In addition, there is a setup.(c)sh script which sets all the environment variables necessary to use the selected release or nightly build.
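For example, a user session could pick up a nightly view from CVMFS as follows (the slot and platform shown are only illustrative):

    # Set up the environment from a dev3 nightly view on CVMFS (illustrative platform)
    source /cvmfs/sft.cern.ch/lcg/views/dev3/x86_64-slc6-gcc62-opt/setup.sh
    # PATH, LD_LIBRARY_PATH, etc. now point to the packages of that nightly build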

Other nightly builds

Migrating dev3 from Python 2 to Python 3 (current dev3python3)

Build and version changes

Currently, dev3python3 is running with Python 3 (3.5.2) without errors, except for the tests, which are still pending migration. The last successful build can be inspected in Jenkins. The following changes, sorted by folder, have been applied during the migration:

cmake

  • New heptools file: heptools-dev3python3.cmake.
    • Includes Python updated to 3.5.2.
    • Updated with the latest versions from heptools-dev3.cmake.
    • Removed LCGCMT as an added project.
  • Changes in heptools-dev-generators.cmake:
    • Gosam updated from 2.0 to 2.0.3 (removed 2.0; master has both now).
    • Removed yoda 1.5.5 due to errors, despite the patch for Python 2 errors (the only yoda version not used by Rivet).

externals

  • File Python_postinstall.sh:
    • Python 3 replaces the python.config.sh script with python3.config.sh. The post-install script has been modified to check the Python version installed in the current build and use the correct script (a sketch of this check is shown below).
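A minimal sketch of such a check, assuming the script names mentioned above (the real Python_postinstall.sh may differ):

    # Sketch: pick the config script matching the Python major version of this build
    PYTHON_HOME=/path/to/installed/python    # placeholder for the build's install prefix
    if [ -x "${PYTHON_HOME}/bin/python3" ]; then
        CONFIG_SCRIPT=python3.config.sh      # Python 3 builds
    else
        CONFIG_SCRIPT=python.config.sh       # Python 2 builds
    fi
    echo "Using ${CONFIG_SCRIPT}"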

generators

  • Changes in CMakeLists.txt:
    • Added a sed command to the cython LCG_Add_package to solve problems with the shebang interpreter and long paths.
    • Multiple changes from PYTHON=${python_home}/bin/python to PYTHON=${python_home}/bin/${PYTHON}, where ${PYTHON} contains the Python version in use.
    • The Blackhat generator and its dependencies have been wrapped in conditionals to avoid their installation when using Python 3 (unresolved errors).
  • New patches (most of them applied to configure or setup.py files):
    • agile 1.4.1
    • gosam 2.0.3
    • professor 1.4.0
    • All versions of rivet
    • All versions of yoda

projects

  • Changes in CMakeLists.txt:
    • The setting of Python_cmd at the top has been changed to use the ${PYTHON} variable.

pyexternals

  • Changes in CMakeLists.txt
    • Conditional installation of pygsi and xrootd depending on Python version.

Python changes

The Python 3 release introduced changes that can cause compatibility problems for existing Python 2 code. To avoid unexpected issues when updating to Python 3, the following changes allow the code to work correctly with Python 3 while keeping compatibility with Python 2.
 

Function print

In Python 3, print is a function, so print "String" must be written with parentheses: print("String").

>>> print "Hello world" # Works on python 2, fails in python
Hello world
>>> print("Hello world") # Works on python 2 and 3
Hello world

The problem appears when printing multiple strings, where Python 2 and 3 behave differently:

print('hello', 'world')

Using Python 2 the result is a tuple:       

>>> print 'hello', 'world'
hello world
>>> print('hello', 'world')
('hello', 'world')

Using Python 3:

>>> print('hello', 'world')
hello world

This can be solved by importing print_function, which prevents Python 2 from interpreting the arguments as a tuple.

from __future__ import print_function #(at top of module)   

>>> print('hello', 'world')
hello world 


Function map

The function map() now returns an iterator. A quick fix is to replace map(...) with list(map(...)) to get the same result.
That works well on Python 3 but could be inefficient on Python 2. To solve this, it is possible to import:
from future.builtins import map #(at top of module)

Functions range and xrange

The function xrange(...) no longer exists; range(...) now behaves like xrange(...) used to behave.

To keep using xrange in Python 3 use:

from past.builtins import xrange

To have xrange behaviour with range on Python 2 use:

from future.builtins import range
   

Catching exceptions

The syntax except ImportError, err is no longer valid; the comma must be replaced with as. The as form works on both Python 2 and 3 without any change:
except ImportError as err

Getting a var from a dict

The has_key() method is no longer supported by Python 3 dictionaries. Use the in operator instead, which is supported by both Python versions.

os.environ.has_key("NAME_VAR") # Python 2 only

if "NAME_VAR" in os.environ # Python 2 and 3


More changes

All changes from Python 2 to Python 3 are described on this page.
For a full list of solutions for migrating code from Python 2 to Python 3 while keeping backward compatibility, visit this page.
 

Backward compatibility requirements

Most of these helpers are included in the future package, which is not included in Python by default. Therefore, this package should be installed as a pyexternal package, and every package using code compatible with both Python 2 and 3 has to declare it as a dependency in its build.

 
