
Commit d597c2b

feat: add custom logging helper functions
Signed-off-by: vsoch <[email protected]>
1 parent 0e8dead commit d597c2b

File tree

19 files changed: +715 −564 lines


README.md

Lines changed: 1 addition & 38 deletions
@@ -21,44 +21,7 @@ This part of the library is under development. There are three kinds of agents:
 The design is simple in that each agent responds to a state of error vs. success. In the case of a step agent, the return code determines whether to continue or try again. In the case of a helper, the input is typically an erroneous response (or something that needs changing) with respect to a goal.
 For a manager, we make a choice based on a previous erroneous step.
 
-See [examples/agent](examples/agent) for an example.
-
-#### To do items
-
-- refactor manager to not handle the prompt, just get the step when retries come back.
-  - then we need to decide how to handle the kubernetes job creating additional structures.
-- Get basic runner working
-- Add in ability to get the log and optimize - the manager will need to use the goal
-- We likely want the manager to be able to edit the prompt.
-  - should it be provided with the entire prompt?
-- When a pod is pending, it can be due to resource issues (and it will never start). Right now we will time out, but we should be able to catch that earlier.
-
-#### Research Questions
-
-**And experiment ideas**
-
-- How do we define stability?
-- What are the increments of change (e.g., "adding a library")? We should be able to keep track of times for each stage and what changed, and an analyzer LLM can look at a result and understand (categorize) the most salient contributions to change.
-- We can also time how long subsequent changes take, when relevant. For example, if we are building, we should be able to use cached layers (and build times speed up) if the LLM is changing content later in the Dockerfile.
-- We can also save the successful results (Dockerfile builds, for example) and compare them for similarity. How consistent is the LLM?
-- How does specificity of the prompt influence the result?
-- For an experiment, we would want to do a build -> deploy and successful run for a series of apps and get distributions of attempts, reasons for failure, and a general sense of similarity / differences.
-- For the optimization experiment, we'd want to do the same, but understand gradients of change that led to improvement.
-
-#### Observations
-
-- Specifying CPU seems important - if you don't, it wants to use GPU
-- If you ask for a specific example, it sometimes tries to download data (tell it where the data is)
-- Always include common issues in the initial prompt
-- If you are too specific about instance types, it adds node selectors/affinity, and that often doesn't work.
-
-#### Ideas
-
-- The manager agent currently generates an updated prompt AND chooses the step.
-  - Arguably we should have a separation of responsibility so a step can ask to fix an error without a manager.
-- I think we need one more level of agent - a step agent should have helper agents that can:
-  - take an error message and analyze it to get a fix.
+See [examples/agent](examples/agent) for an example, along with observations, research questions, ideas, and experiment brainstorming!
 
 ### Job Specifications
 
examples/agent/Dockerfile

Lines changed: 71 additions & 123 deletions
@@ -1,123 +1,71 @@
-# Dockerfile for LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator)
-# Target: Production HPC environment on Google Cloud
-# Strategy: Multi-stage build for a lean final image with MPI support.
-
-# Use ARGs at the top to easily update versions of key components globally
-ARG LAMMPS_VERSION=stable_2Aug2023
-ARG OPENMPI_VERSION=4.1.6
-
-# =====================================================================
-# Stage 1: Builder
-# This stage compiles Open MPI and LAMMPS from source. It contains all
-# the build-time dependencies, which will be discarded later.
-# =====================================================================
-FROM debian:bullseye AS builder
-
-# Inherit ARGs from the global scope
-ARG LAMMPS_VERSION
-ARG OPENMPI_VERSION
-
-# Set environment variables for the Open MPI build location and path
-ENV OMPI_DIR=/opt/openmpi-${OPENMPI_VERSION}
-ENV PATH=$OMPI_DIR/bin:$PATH
-ENV LD_LIBRARY_PATH=$OMPI_DIR/lib
-
-# Prevent interactive prompts during package installation
-ENV DEBIAN_FRONTEND=noninteractive
-
-# Install essential build tools and libraries for both Open MPI and LAMMPS
-# Added ca-certificates to allow git and wget to verify SSL certificates securely.
-RUN apt-get update && apt-get install -y --no-install-recommends \
-    build-essential \
-    ca-certificates \
-    cmake \
-    g++ \
-    gfortran \
-    git \
-    libevent-dev \
-    libhwloc-dev \
-    wget \
-    && rm -rf /var/lib/apt/lists/*
-
-# --- Build Open MPI from source ---
-# Building from source gives control over the configuration, crucial for
-# containerized HPC environments. We enable PMIx for modern process management.
-WORKDIR /tmp
-RUN wget https://download.open-mpi.org/release/open-mpi/v${OPENMPI_VERSION%.*}/openmpi-${OPENMPI_VERSION}.tar.gz && \
-    tar -xzf openmpi-${OPENMPI_VERSION}.tar.gz
-
-WORKDIR /tmp/openmpi-${OPENMPI_VERSION}
-RUN ./configure \
-    --prefix=${OMPI_DIR} \
-    --with-pmix \
-    --disable-pty-support
-RUN make -j$(nproc) all && make install
-
-# --- Build LAMMPS from source ---
-# Clone a specific stable release tag for reproducibility.
-WORKDIR /opt
-RUN git clone --depth 1 --branch ${LAMMPS_VERSION} https://github.com/lammps/lammps.git lammps
-
-# Use CMake to configure the LAMMPS build. Enable common packages.
-WORKDIR /opt/lammps/build
-RUN cmake ../cmake \
-    -D CMAKE_INSTALL_PREFIX=/usr/local \
-    -D BUILD_MPI=yes \
-    -D PKG_KSPACE=yes \
-    -D PKG_MOLECULE=yes \
-    -D PKG_RIGID=yes \
-    -D PKG_MANYBODY=yes \
-    -D PKG_REPLICA=yes \
-    -D CMAKE_BUILD_TYPE=Release \
-    -D LAMMPS_EXCEPTIONS=yes
-
-# Compile and install LAMMPS
-RUN make -j$(nproc) && make install
-
-# =====================================================================
-# Stage 2: Final Image
-# This stage creates the lean, final image. It starts from a minimal
-# base and only copies the necessary executables, libraries, and runtime
-# dependencies from the builder stage.
-# =====================================================================
-FROM debian:bullseye-slim
-
-# Inherit ARG for version consistency
-ARG OPENMPI_VERSION
-
-# Set environment variables for Open MPI runtime
-ENV OMPI_DIR=/opt/openmpi-${OPENMPI_VERSION}
-ENV PATH=/usr/local/bin:$OMPI_DIR/bin:$PATH
-
-# Install only the essential runtime dependencies.
-# libgfortran5 is required by the Fortran-compiled parts of LAMMPS.
-RUN apt-get update && apt-get install -y --no-install-recommends \
-    libevent-2.1-7 \
-    libgfortran5 \
-    libhwloc15 \
-    && rm -rf /var/lib/apt/lists/*
-
-# Copy the compiled Open MPI installation from the builder stage
-COPY --from=builder ${OMPI_DIR} ${OMPI_DIR}
-
-# Copy the entire LAMMPS installation (binary, libs, potentials) from the builder stage
-COPY --from=builder /usr/local /usr/local
-
-# Configure the dynamic linker to find Open MPI and LAMMPS libraries.
-# This is more robust than setting LD_LIBRARY_PATH.
-RUN echo "${OMPI_DIR}/lib" > /etc/ld.so.conf.d/openmpi.conf && \
-    echo "/usr/local/lib" > /etc/ld.so.conf.d/lammps.conf && \
-    ldconfig
-
-# Create a dedicated, non-root user for running the application for security
-RUN useradd --create-home --shell /bin/bash lammps
-USER lammps
-WORKDIR /home/lammps
-
-# Set the entrypoint to the LAMMPS executable.
-# Allows running the container with LAMMPS args directly, e.g., `docker run <image> -in in.lj`
-ENTRYPOINT ["lmp"]
-
-# Provide a default command to display help if no other args are provided.
-CMD ["-h"]
-# Generated by fractale build agent
+# Base image: Ubuntu 22.04 LTS for a stable and recent environment
+FROM ubuntu:22.04
+
+# Set a non-interactive frontend for package managers to avoid prompts
+ENV DEBIAN_FRONTEND=noninteractive
+
+# Configure OpenMPI for containerized environments
+# Allow running MPI as root, a requirement for this specific Dockerfile
+ENV OMPI_ALLOW_RUN_AS_ROOT=1
+ENV OMPI_ALLOW_RUN_AS_ROOT_CONFIRM=1
+# Force components to work over TCP, common in container orchestrators
+ENV OMPI_MCA_btl=self,tcp
+ENV OMPI_MCA_pml=ob1
+ENV OMPI_MCA_btl_tcp_if_include=eth0
+ENV OMPI_MCA_oob_tcp_if_include=eth0
+
+# Install build dependencies, git, cmake, and MPI libraries
+# Added python3 to satisfy the LAMMPS cmake build system dependency
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends \
+    build-essential \
+    cmake \
+    git \
+    wget \
+    ca-certificates \
+    g++ \
+    openmpi-bin \
+    libopenmpi-dev \
+    libfftw3-dev \
+    python3 && \
+    rm -rf /var/lib/apt/lists/*
+
+# Clone, build, and install LAMMPS
+# Using the 'develop' branch, as 'master' is no longer a valid branch in the LAMMPS repository
+# A selection of common CPU packages is enabled, including REAXFF as requested
+RUN git clone --depth 1 -b develop https://github.com/lammps/lammps.git /lammps && \
+    cd /lammps && \
+    mkdir build && \
+    cd build && \
+    cmake ../cmake \
+    -D CMAKE_INSTALL_PREFIX=/usr/local \
+    -D BUILD_MPI=yes \
+    -D BUILD_OMP=yes \
+    -D PKG_KSPACE=yes \
+    -D PKG_MOLECULE=yes \
+    -D PKG_RIGID=yes \
+    -D PKG_MANYBODY=yes \
+    -D PKG_REAXFF=yes \
+    -D PKG_MISC=yes \
+    -D PKG_EXTRA-COMPUTE=yes \
+    -D PKG_EXTRA-DUMP=yes \
+    -D PKG_EXTRA-FIX=yes \
+    -D PKG_EXTRA-MOLECULE=yes && \
+    make -j$(nproc) && \
+    make install
+
+# Set the working directory for the container
+WORKDIR /data
+
+# Copy the requested example files into the working directory
+# These files can be used for initial testing or as templates
+RUN cp /lammps/examples/reaxff/HNS/* /data/ && \
+    # Clean up the source code to reduce the final image size
+    rm -rf /lammps
+
+# Set the default entrypoint to the LAMMPS executable
+# The executable is on the PATH due to the CMAKE_INSTALL_PREFIX
+ENTRYPOINT ["lmp"]
+
+# Default command can be overridden, e.g., docker run <image> -in in.script
+CMD ["-h"]

examples/agent/README.md

Lines changed: 45 additions & 6 deletions
@@ -10,9 +10,10 @@ The build agent will use the Gemini API to generate a Dockerfile and then build
 Here is how to first ask the build agent to generate a lammps container for Google Cloud.
 
 ```bash
-fractale agent build lammps --environment "google cloud CPU" --outfile Dockerfile.lammps
+fractale agent build lammps --environment "google cloud CPU" --outfile Dockerfile --details "Ensure all globbed files from examples/reaxff/HNS from the root of the lammps codebase are in the WORKDIR. Clone the latest branch of LAMMPS."
 ```
 
+Note that we are specific about the data and about using CPU, which is something the builder agent would otherwise have to guess.
 That might generate the [Dockerfile](Dockerfile) here, and a container that defaults to the application name "lammps".
 
 ### Kubernetes Job
@@ -27,9 +28,20 @@ kind load docker-image lammps
 To start, we will assume a kind cluster is running and tell the agent the image is loaded into it (and so the pull policy will be never).
 
 ```bash
-fractale agent kubernetes-job lammps --environment "google cloud CPU" --context-file ./Dockerfile --no-pull
+fractale agent kubernetes-job lammps --environment "google cloud CPU" --context-file ./Dockerfile --no-pull --details "Run in.reaxff.hns in the pwd with lmp" --outfile ./job.yaml
 ```
 
+## With Cache
+
+The same steps can be run using a cache. This saves to a deterministic path in the present working directory, which means you can run steps a la carte and later run a workflow that re-uses the context (without waiting again).
+Note that when you save a cache, you often don't need to save the output file, because the result will be in the context.
+
+```bash
+fractale agent build lammps --environment "google cloud CPU" --details "Ensure all globbed files from examples/reaxff/HNS from the root of the lammps codebase are in the WORKDIR. Clone the latest branch of LAMMPS." --use-cache
+```
+
+Then try running the manager (below) with the cache to see it being used.
+
 ## Manager
 
 Let's run with a manager. Using a manager means we provide a plan along with a goal. The manager itself takes on a similar structure to a step agent, but it has a high-level goal. The manager will follow the high-level structure of the plan, and step
@@ -42,10 +54,37 @@ try again.
 
 ```bash
 fractale agent --plan ./plans/run-lammps.yaml
+
+# or try using it with the cache
+fractale agent --plan ./plans/run-lammps.yaml --use-cache
 ```
 
-For this first design, we are taking an approach where we only re-assess the state and go back to a previous step given a failure of the last step. The assumption is that if a previous step fails, we keep trying until it succeeds. We only need to backtrack if the last step in a sequence is not successful, and it is due to failure at some stage in the process. But I do think we have a few options:
+We haven't hit the case yet where the manager needs to take over - that needs further development, along with being goal oriented (e.g., parsing a log and getting an output).
+
+## Notes
+
+#### To do items
+
+- Figure out the optimization agent (with some goal)
+
+#### Research Questions
+
+**And experiment ideas**
+
+- How do we define stability?
+- What are the increments of change (e.g., "adding a library")? We should be able to keep track of times for each stage and what changed, and an analyzer LLM can look at a result and understand (categorize) the most salient contributions to change.
+- We can also time how long subsequent changes take, when relevant. For example, if we are building, we should be able to use cached layers (and build times speed up) if the LLM is changing content later in the Dockerfile.
+- We can also save the successful results (Dockerfile builds, for example) and compare them for similarity. How consistent is the LLM?
+- How does specificity of the prompt influence the result?
+- For an experiment, we would want to do a build -> deploy and successful run for a series of apps and get distributions of attempts, reasons for failure, and a general sense of similarity / differences.
+- For the optimization experiment, we'd want to do the same, but understand gradients of change that led to improvement.
+
+#### Observations
 
-1. Allow the manager to decide what to do on _every_ step (likely not ideal)
-2. Allow step managers to execute until success, always (too much of an issue if a step is failing because of a dependency)
-3. Allow step managers to execute until success unless a limit is set, and then let the manager take over (in other words, too many failures means we hand it back to the manager to look.)
+- Specifying CPU seems important - if you don't, it wants to use GPU
+- If you ask for a specific example, it sometimes tries to download data (tell it where the data is)
+- There are issues that result from not enough information. E.g., if you don't tell it what to run or where the data is, it can only guess, and it will loop forever.
+  - As an example, we know where in a git clone the data of interest lives. The LLM can only guess. It's easier to tell it exactly.
+  - An LLM has no sense of time with respect to versions. For example, the reax data changed from reaxc to reaxff at the same path, and which one you get depends on the clone. Depending on when the LLM was trained on how to build lammps, it might select an older (or the latest) branch. Instead of a juggling or guessing game that (again) would result in an infinite loop, we need to tell it the branch and data file explicitly.
+- Always include common issues in the initial prompt
+- If you are too specific about instance types, it adds node selectors/affinity, and that often doesn't work.
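
Though the commit stops at generating `./job.yaml`, here is a sketch of how the generated job might be submitted and inspected on the kind cluster; the job name depends on what the agent generated, so `lammps` is an assumption:

```bash
# Submit the generated job spec to the kind cluster
kubectl apply -f ./job.yaml

# Watch pods come up; a pod stuck in Pending can mean a resource
# mismatch, per the observations above
kubectl get pods --watch

# Stream logs from the job (name assumed to be lammps)
kubectl logs -f job/lammps
```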
