When trying to build the container, the code exits with an error when trying to install OSA. I attached a log file for clarity. Is this something you are familiar with @savchenk?
Update: I believe I have figured out what to do in order to run the container.
The image built from integralsw/osa-python (with the additions I have made) works, and I can run the container with osa-docker.sh. The problems I encountered were related either to building from scratch (which I will set aside for now) or to running a container from the successfully built image without the initialisation parameters that osa-docker.sh provides.
So now I will try to copy osa-docker.sh and adapt it to set up the calibration environment. I will close this issue since, in principle, I should have a working Docker configuration.
Maybe it is easier to fix the issue with the original osa-python image here, as it will probably also be encountered when the build from scratch is completed.
@savchenk after a bit of digging it seems that HDF5 should be installed via EPEL, so the required packages should not be missing. Regarding the configuration of astromodels, there is an issue when working with https://pypi.org/simple/tables/. Could there be a version issue with tables-3.9.1?
Maybe this needs to be installed? https://www.hdfgroup.org/downloads/hdf5
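If the missing piece is indeed the system HDF5 rather than the tarball from hdfgroup.org, the EPEL route could be tried directly in the Dockerfile. A sketch (package names are assumptions for a CentOS/EPEL-style base, and the tables pin is a guess in case 3.9.1 is the problem):

```dockerfile
# Assumption: yum-based base image with EPEL available
RUN yum -y install epel-release && \
    yum -y install hdf5 hdf5-devel

# Assumption: pin tables below 3.9 in case 3.9.1 is incompatible
RUN pip install "tables<3.9"
```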
Adding the requirements for the calibration environment to the original image worked, but I am unable to run the container afterwards due to this error (after running `docker run image_name`):

```
bash: line 0: cd: /home/jovyan: No such file or directory
mkdir: cannot create directory '/home/jovyan/pfiles': No such file or directory
headas-setup: ERROR -- unable to cd /home/jovyan
```
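For what it's worth, the failure above is consistent with the container user's home directory simply not existing. A minimal Dockerfile sketch of a fix (assuming the image follows the Jupyter convention of a `jovyan` user, which the paths suggest; this is a guess, not confirmed from the image):

```dockerfile
# Assumption: headas-setup expects a writable /home/jovyan, as in Jupyter
# base images; creating the user with a home directory should let the cd
# and the pfiles mkdir succeed.
RUN useradd -m -d /home/jovyan -s /bin/bash jovyan
USER jovyan
WORKDIR /home/jovyan
```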
So I decided to resume building from scratch. I cloned a commit from July 2021, which fixed my problem, but I still cannot finish the build because it cannot locate the HDF5 installation (see log).
I can build an image with the additional features needed using the already-made integralsw/osa-python:latest. Building from scratch would require solving the dependency issue, which I will look into in due time and report the result here.
I will now try to run the container from this new image and make the necessary adaptations to declare the right variables in env-snapshot.sh for calibration. I will open a separate issue to discuss problems I might encounter in this step (in the right repository).
Yes, that might help. Try taking a version from early 2022, or from 2021?
The attempt to install a new version of heasoft didn't work. It crashes before reaching 3ML with a vague error code (I don't see any clear error message: log_heasoft.txt), so I will try an older version of 3ML.
I think the version that is cloned is the most recent one. Not sure if it is compatible with the version of xspec that I have (from heasoft 6.27.2). I am trying to install the latest version of heasoft to see what happens. Otherwise, I can maybe try to retrieve an older version of astromodels+threeml that works with heasoft 6.27.2.
You may want to try a different version of astromodels+threeml, compatible with the xspec you have. Could you check if you can build a more recent version?
I tracked down the part of the 3ML code that crashes, in `astromodels/astromodels/xspec/src/_xspec.cc`:

```c
#ifdef XSPEC_12_12_0
#include "XSFunctions/Utilities/xsFortran.h"
#include "XSFunctions/funcWrappers.h"
#else
#include "xsFortran.h"
#include "funcWrappers.h"
#endif
```
Since XSFunctions/Utilities/xsFortran.h is the header that cannot be found, I am thinking this could be avoided with a different xspec version, but XSPEC_12_12_0 doesn't appear to be defined anywhere in the Dockerfile. Is this something you know how to solve @savchenk?
Not enforcing a specific value for astropy seems to work. When cloning the ThreeML repository there is a problem with locating XSFunctions/Utilities/xsFortran.h, see 3ml_log.txt. This seems odd, since the logs say 'Found library XSFunctions in /opt/heasoft/x86_64-pc-linux-gnu-libc2.17/lib'. Something about the directories the code searches later on must not be working.
Solved the issue with scipy, although there are problems with astropy-helpers. I am not sure if it is a problem with astropy or pip, but I could try a different version of astropy (unless the version constraint is crucial?). log.txt
Strange, but it seems to want openblas. Try adding `yum -y install openblas-devel`.

Also upgrade pip: `pip install pip --upgrade`.
The scipy version is not constrained here and is left to the resolver, so you can in principle adjust it; it should not break anything within the same backward-compatible major version.
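Combining the two suggestions, the Dockerfile additions might look like this (a sketch for a yum-based image; the exact placement in the build is an assumption):

```dockerfile
# Assumed fixes for the scipy/OpenBLAS build failure: provide the OpenBLAS
# development files that meson's pkgconfig lookup needs, and upgrade pip
# first so the resolver can pick a compatible scipy on its own.
RUN yum -y install openblas-devel
RUN pip install --upgrade pip && pip install scipy
```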
There seems to be a problem with installing scipy. I tried using `GCC=8` but ran into trouble when running `pip install scipy`, with the OpenBLAS dependency:

```
../../scipy/meson.build:134:7: ERROR: Dependency "OpenBLAS" not found, tried pkgconfig
```
What are your thoughts on the matter @savchenk? I don't want to make too many changes in versions and packages since it might mess things up more.
It seems that the OSA version and platform referenced in the Dockerfile were no longer accessible, so I need to update the file with a new version. I also ran into some other version issues, such as pyenv cloning failing due to updated GitHub protocols and SciPy requiring GCC > 8. I am currently updating all of these and will see if I manage to build the image.
No I didn't think of that, I will try to add these and send the updated log files.
There is no error message, did you redirect stderr to the log too?