
Commit acc39af

Remove installation and reproducibility sections from README
Removed sections on reproducibility, installation, and data availability from the README.
1 parent 0fa2ebc commit acc39af

File tree

1 file changed (+0, −105 lines)


README.md

Lines changed: 0 additions & 105 deletions
@@ -262,112 +262,7 @@ All directories related to raw data, predictions, and model checkpoints are **in

---

## Reproducibility & Installation

### Continuous Integration and Code Quality

This repository enforces consistent coding standards and documentation to support long-term reproducibility and collaborative research.

All Python code is automatically checked using:

- **Ruff** for PEP 8 and PEP 257 compliance
- **Pre-commit hooks** to prevent non-compliant code from being committed locally
- **GitHub Actions CI** to validate code quality on every push and pull request

The CI pipeline runs the following checks:

```bash
ruff check .
ruff format --check .
```

Pull requests to the main branch are blocked unless all checks pass, ensuring that the repository remains clean, readable, and reproducible over time.
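
The pre-commit hooks mentioned above are typically wired through a `.pre-commit-config.yaml` at the repository root. A minimal sketch, assuming the standard Ruff hooks (the pinned revision is illustrative; check the repository's actual config):

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4           # illustrative pin, not the project's actual rev
    hooks:
      - id: ruff          # lint pass (PEP 8 / PEP 257 via configured rules)
      - id: ruff-format   # formatting check, mirrors `ruff format --check .`
```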

### Data Availability

Due to data access restrictions associated with Oak Ridge National Laboratory (ORNL), the original datasets used in this study are **not publicly available**. Full reproduction of the reported experimental results therefore requires **authorized access** to the Advanced Plant Phenotyping Laboratory (APPL) data.

That said, the codebase is **dataset-agnostic by design**. Any 3D LiDAR point cloud dataset can be used **provided that**:

- Point clouds are available in **XYZ format** (e.g., `.txt`, `.pcd`, `.ply`)
- Point-wise semantic labels are provided (or generated) following a compatible annotation scheme
- The data can be adapted to the expected input format used by the dataset loader

This enables reuse of the pipeline for **methodological experimentation**, architectural benchmarking, and extension to alternative 3D segmentation tasks.
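
As a sketch of the expected XYZ layout, the following shows how such a file can be parsed (the `load_xyz` helper is illustrative only, not the repository's actual loader in `src/dataset.py`):

```python
import numpy as np
from io import StringIO

def load_xyz(path_or_buffer):
    """Parse an XYZ-format point cloud: one 'x y z [label]' row per line."""
    data = np.loadtxt(path_or_buffer)
    points = data[:, :3]                                            # x, y, z
    labels = data[:, 3].astype(int) if data.shape[1] > 3 else None  # optional label column
    return points, labels

# In-memory buffer standing in for a labeled .txt point cloud
sample = StringIO("0.0 0.0 0.0 1\n1.0 0.5 0.2 2\n0.3 0.9 0.1 1\n")
pts, lbls = load_xyz(sample)
print(pts.shape, lbls.tolist())  # (3, 3) [1, 2, 1]
```

The same function accepts a file path, so `.txt` exports from other capture pipelines can be dropped in directly; `.pcd`/`.ply` files would first be converted (e.g., via Open3D) to this plain-array form.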

### Installation

The main dependencies of the project are listed below.

**Core Requirements**
- Python ≥ 3.8
- CUDA ≥ 11.x (optional, but recommended for training)
- PyTorch + PyTorch Geometric
- Open3D

### Step 1: Clone the Repository and Create Environment

```bash
git clone https://github.com/angomezu/geometric-deep-learning-plant-organ-segmentation.git
cd geometric-deep-learning-plant-organ-segmentation

conda create -n plantseg python=3.9 pip
conda activate plantseg
```

### Step 2: Install PyTorch

Install PyTorch with CUDA support (adjust the CUDA version if needed):

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

For CPU-only usage:

```bash
pip install torch torchvision torchaudio
```

### Step 3: Install PyTorch Geometric

Install PyTorch Geometric and its dependencies:

```bash
pip install torch-geometric
```

If you encounter issues, refer to the official installation guide:
https://pytorch-geometric.readthedocs.io/en/latest/install/installation.html

### Step 4: Install 3D Processing and ML Dependencies

```bash
pip install open3d numpy scikit-learn tqdm
```
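
Once the four steps complete, a quick sanity check that the core imports resolve can look like this (a minimal sketch; note that import names such as `sklearn` differ from some pip package names):

```python
import importlib.util

# Import names for the core dependencies (sklearn, not scikit-learn)
packages = ["torch", "torch_geometric", "open3d", "numpy", "sklearn", "tqdm"]
status = {name: importlib.util.find_spec(name) is not None for name in packages}
for name, ok in status.items():
    print(f"{name}: {'present' if ok else 'MISSING'}")
```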
## Notes on Usage

- Training scripts assume point-wise labeled data
- Data loaders and feature computation logic are implemented in `src/dataset.py`
- Visualization utilities require a functioning OpenGL context (for on-screen rendering)

**Users intending to apply the pipeline to new datasets may need to:**

- Adapt the annotation format
- Update normalization statistics
- Adjust neighborhood radius and voxelization parameters
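
For intuition on the voxelization parameter, here is a minimal NumPy sketch of centroid-based voxel downsampling (illustrative only; it is not the project's preprocessing code, which lives in `src/dataset.py`):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points in each occupied voxel with their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # integer voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()                               # guard against NumPy 2.0 shape quirk
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)                        # accumulate coordinates per voxel
    counts = np.bincount(inverse, minlength=n_voxels)
    return sums / counts[:, None]                           # per-voxel centroid

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(1000, 3))
down = voxel_downsample(cloud, voxel_size=0.25)             # at most 4**3 = 64 voxels here
print(cloud.shape, "->", down.shape)
```

Larger voxel sizes shrink the cloud more aggressively; an analogous density trade-off applies when tuning the neighborhood radius used for feature computation.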
---

### Model checkpoints (.pth)

This project uses PyTorch checkpoint files (`.pth`) to store trained model weights.
Running `python train.py` will save a checkpoint to `models/` (see the filename in `train.py`).
Update `MODEL_PATH` (evaluation) and `CHECKPOINT` (visualization) to point to your `.pth`.
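
A minimal sketch of that round-trip (the layer shape and temporary path below are illustrative, not the project's actual architecture or filenames):

```python
import os
import tempfile
import torch
import torch.nn as nn

# Stand-in model; train.py saves its own architecture's state_dict this way
model = nn.Linear(6, 4)
ckpt_path = os.path.join(tempfile.mkdtemp(), "plantseg_example.pth")
torch.save(model.state_dict(), ckpt_path)

# This is the path MODEL_PATH / CHECKPOINT would point at in the scripts
restored = nn.Linear(6, 4)
restored.load_state_dict(torch.load(ckpt_path))
print(torch.equal(model.weight, restored.weight))
```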
---

### Future Research Directions