
Conversation

@pjonsson (Contributor) commented Dec 2, 2024

Switch the Dockerfile to using
the ubuntu user/group, which is
required for building after
davidfrantz/base_image#4
has been merged.

Also remove the WORKDIR since that
is the last thing set by the Dockerfile
for davidfrantz/base.

Include paths for some dependencies
have changed, so update the Makefile
with those paths as well.
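
For illustration, the shape of the Dockerfile change is roughly the following. This is a sketch, not the actual diff; the copy destination and build command are assumptions:

    # Sketch only: reuse the stock ubuntu user/group from the Ubuntu 24.04
    # base image instead of creating a dedicated docker user.
    FROM davidfrantz/base:latest

    USER ubuntu
    # no WORKDIR here: the base image's Dockerfile already sets the final one
    COPY --chown=ubuntu:ubuntu . ./src/force
    RUN make -C src/force all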

@pjonsson (Contributor, Author) commented Dec 2, 2024

I have built the image, and since that prints a bunch of messages from the various force tools, it's not obviously broken, but I would appreciate help with testing this. The CI tests will obviously fail on this PR in its current state, since the Ubuntu 24.04 base image isn't available yet.

@davidfrantz (Owner) commented

Thanks @pjonsson,

as discussed in the base repository, I appreciate the addition. I will merge this PR at some point, but will test it first. This will take some time, but it will happen eventually.

Thanks,
David

@pjonsson (Contributor, Author) commented

@davidfrantz would you like me to rebase on latest develop and add a commit that changes the FROM line to use the ubuntu24 tag instead of latest?

pjonsson force-pushed the upgrade-ubuntu24 branch 2 times, most recently from bd4e4bb to 8f3a51b on December 23, 2024
@pjonsson (Contributor, Author) commented

I rebased on top of latest develop and added a commit that sets the FROM to use the Ubuntu 24.04 version of the base image.

I also added a workaround that sets the home directory to world-writable so the CI tests could pass, but I wouldn't recommend having a world-writable home directory for security reasons. If you want to avoid that, there's a way to re-uid/re-gid the user inside the container upon startup (example here: contiki-ng/contiki-ng#2309, but you also want contiki-ng/contiki-ng#2822 on top of that to handle signal delivery inside the container properly).
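
For reference, a minimal sketch of that re-uid/re-gid approach. This is not part of this PR; the HOST_UID/HOST_GID variable names are made up, and it assumes the entrypoint starts as root in an image where util-linux's setpriv is available:

    #!/bin/sh
    # Remap the ubuntu user's UID/GID to the caller's IDs so bind mounts
    # stay writable without a world-writable home directory.
    set -e
    if [ -n "${HOST_UID:-}" ] && [ "$HOST_UID" != "$(id -u ubuntu)" ]; then
        usermod  -u "$HOST_UID" ubuntu
        groupmod -g "${HOST_GID:-$HOST_UID}" ubuntu
        chown -R ubuntu:ubuntu /home/ubuntu
    fi
    # exec keeps the command as PID 1 so signals reach it directly
    exec setpriv --reuid=ubuntu --regid=ubuntu --init-groups "$@"

The caller would then pass the IDs with something like docker run -e HOST_UID="$(id -u)" -e HOST_GID="$(id -g)".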

It's not related to this PR, but when I looked at the CI failure I noticed that FORCE is compiled with g++ -std=c++11, which I guess means it's a C++ program. All the source files have a .c suffix rather than the standard C++ suffixes (.cc/.cxx/.cpp), though, so I was surprised that it's C++. I've never done anything like that myself, but my guess is that you will save yourself some trouble fighting tools if you rename the C++ source files so their suffixes match the language they are written in.

I also saw a bunch of compiler warnings. I have not looked at the code, but my guess is that the sign-comparison warnings are unlikely to be a problem in practice, while the stringop warnings looked like they should be addressed.

pjonsson referenced this pull request in davidfrantz/base_image Feb 15, 2025
Switch to Ubuntu 24.04, bump some
versions of dependencies, and use
some of the packages from Ubuntu
instead of building OpenCV manually.

There is also an ubuntu user/group
by default in the base image, so
use that instead of creating
a separate docker user like before.

Refs #3
@kelewinska (Contributor) commented

Hi,
When I use the experimental image 0aed0fe to build FORCE 3.8.0 on the ubuntu24.04 base image, I get the following error, which terminates the build with exit code 2:

In file included from src/modules/aux-level/train-aux.c:28:
src/modules/aux-level/train-aux.h:35:10: fatal error: opencv2/ml.hpp: No such file or directory
35 | #include <opencv2/ml.hpp>
| ^~~~~~~~~~~~~~~~
compilation terminated.

The libopencv libraries are installed under /lib/x86_64-linux-gnu in the base image; however, it seems FORCE is not able to find them during the build. I checked /usr/include and opencv2 is not there, though I could be barking up the wrong tree here.

Do you have any hints on how to resolve this?
Thank you in advance!

@pjonsson (Contributor, Author) commented

That is a header file, so some -dev package is missing for some reason. That particular header file is shipped by libopencv-ml-dev.
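
If you want to verify that yourself, apt-file can tell you which package ships a given header (this assumes you have root in the image and apt-file installed; the output line is indicative):

    apt-get update && apt-get install -y apt-file && apt-file update
    apt-file search opencv2/ml.hpp
    # expected output, roughly:
    # libopencv-ml-dev: /usr/include/opencv4/opencv2/ml.hpp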

@kelewinska (Contributor) commented Feb 15, 2025

OK, so I added libopencv-ml-dev \ to the apt-get build layer of the ubuntu24.04 base image, but because opencv2 is now nested inside opencv4 (like so: /usr/include/opencv4/opencv2), it requires an additional layer:
RUN ln -s /usr/include/opencv4/opencv2 /usr/include

With this change, the FORCE main image build now crashes with:

src/modules/higher-level/py-udf-hl.c:33:10: fatal error: numpy/ndarrayobject.h: No such file or directory
33 | #include <numpy/ndarrayobject.h>
| ^~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

I am done for the day. If someone would like to continue, great; if not, I will pick it up tomorrow or on Monday.

Cheers,

@pjonsson (Contributor, Author) commented Feb 16, 2025

[..] now opencv2 is nested inside opencv4 (like so: /usr/include/opencv4/opencv2), it requires an additional layer: RUN ln -s /usr/include/opencv4/opencv2 /usr/include

That is what the first line in the Makefile diff in this PR fixes.

src/modules/higher-level/py-udf-hl.c:33:10: fatal error: numpy/ndarrayobject.h: No such file or directory
33 | #include <numpy/ndarrayobject.h>
| ^~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

That is what the second and third lines in the Makefile diff in this PR fix.
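
The gist of those Makefile lines is to point the compiler at the new header locations. A sketch of the idea (the flags variable and exact paths are illustrative; see the diff for the real lines):

    # OpenCV 4 headers live under /usr/include/opencv4 on Ubuntu 24.04 ...
    CXXFLAGS += -I/usr/include/opencv4
    # ... and NumPy's headers are best located by asking NumPy itself.
    CXXFLAGS += -I$(shell python3 -c "import numpy; print(numpy.get_include())")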

I'm guessing something went wrong when you cherry-picked the commits from this PR into your local repository.

@kelewinska (Contributor) commented

Hi,
I came across something interesting while using this FORCE image based on Ubuntu 24.04 and GDAL 3.8.4 (out of the box, no custom changes):
FORCE based on this updated image masks out more clouds in force-level2 than FORCE based on the old image (Ubuntu 20.04, GDAL 3.0.4). The screen grab below shows the spatial difference with clouds erased (ERASE_CLOUDS = TRUE). On the bottom, in the magma color scheme, is the NIR band derived with FORCE built on the old Ubuntu 20.04 image; on top, in grayscale, is the same NIR band derived with FORCE built on the Ubuntu 24.04 image. BOA values agree perfectly between the two outputs; the only difference I noticed is the more aggressive masking.

Do you by chance know the cause of these differences? It is a rather important aspect of consistency in the time series, since the existing data collection derived with FORCE based on the old image is (and will be) extended and maintained with FORCE based on this new image.

Both images were built with FORCE version 3.8.01. The runs were executed with this prm file, and the input scenes were LT05_L1TP_193024_20090730_20200827_02_T1.tar and LT05_L1TP_193023_20090730_20200827_02_T1.tar.

Cheers,
Kasia

[image: screen grab comparing the two NIR band outputs]

@pjonsson (Contributor, Author) commented

I'm afraid that is a question for @davidfrantz; I have never looked at the C code of FORCE.

A lot happened between GDAL 3.0.4 and 3.8.4 though, and quite a bit happened with GCC as well (9.3.0 vs 13.2.0).

@davidfrantz (Owner) commented

Huh, interesting. It seems that there is also a directional component. It is probably rooted in the warping/resampling, which indeed uses GDAL functionality...

Nevertheless, I would not expect full reproducibility when upgrading to a new GDAL version. For Ubuntu or GCC upgrades, it would be good if that were the case, but changes in the outcome are expected with a thematic library like GDAL. As long as nothing breaks, this is OK. I will try to look into this nonetheless.

@pjonsson (Contributor, Author) commented

The geoloc transformer was made exact in GDAL 3.5 (OSGeo/gdal#5520), which fixed an error that was frequently >1 pixel; does that come into the picture here? Going from gdalwarp 3.4.3 to 3.9.0 moved my pixels a couple of blocks.
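
If it helps to isolate GDAL's contribution from the compiler's, one option is to warp the same scene with two pinned GDAL releases and compare band checksums. A rough sketch (the image tags, target CRS, resolution, and paths are illustrative; older GDAL releases live on Docker Hub as osgeo/gdal, newer ones on ghcr.io):

    docker run --rm -v "$PWD":/data osgeo/gdal:ubuntu-small-3.4.3 \
        gdalwarp -t_srs EPSG:3035 -tr 30 30 /data/scene.tif /data/warp-3.4.3.tif
    docker run --rm -v "$PWD":/data ghcr.io/osgeo/gdal:ubuntu-small-3.9.0 \
        gdalwarp -t_srs EPSG:3035 -tr 30 30 /data/scene.tif /data/warp-3.9.0.tif
    # identical checksums mean identical pixels
    docker run --rm -v "$PWD":/data ghcr.io/osgeo/gdal:ubuntu-small-3.9.0 \
        sh -c 'gdalinfo -checksum /data/warp-3.4.3.tif | grep Checksum; \
               gdalinfo -checksum /data/warp-3.9.0.tif | grep Checksum'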
