
Commit 1327fde

Merge branch 'main' of github.com:SFI-Visual-Intelligence/Collaborative-Coding-Exam into mag-branch

2 parents a7898d0 + c4f6027

File tree

13 files changed: +203 −252 lines

.github/workflows/build-image.yml

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
+name: Create and publish a Docker image
+
+# Runs this workflow whenever `pyproject.toml` or the `Dockerfile` changes on a push or pull request.
+on:
+  push:
+    paths:
+      - "pyproject.toml"
+      - "Dockerfile"
+  pull_request:
+    paths:
+      - "pyproject.toml"
+      - "Dockerfile"
+
+# Defines two custom environment variables for the workflow: the Container registry domain, and a name for the Docker image that this workflow builds.
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: ${{ github.repository }}
+
+# There is a single job in this workflow. It is configured to run on the latest available version of Ubuntu.
+jobs:
+  build-and-push-image:
+    runs-on: ubuntu-latest
+    # Sets the permissions granted to the `GITHUB_TOKEN` for the actions in this job.
+    permissions:
+      contents: read
+      packages: write
+      attestations: write
+      id-token: write
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+      # Uses the `docker/login-action` action to log in to the Container registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
+      - name: Log in to the Container registry
+        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
+        with:
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      # This step uses [docker/metadata-action](https://github.com/docker/metadata-action#about) to extract tags and labels to apply to the image. The `id` "meta" allows the output of this step to be referenced in a subsequent step. The `images` value provides the base name for the tags and labels.
+      - name: Extract metadata (tags, labels) for Docker
+        id: meta
+        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
+        with:
+          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          tags: |
+            type=ref,event=branch
+            type=sha
+      # This step uses the `docker/build-push-action` action to build the image from the repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages.
+      # The `context` parameter defines the build context as the set of files located in the specified path. For more information, see [Usage](https://github.com/docker/build-push-action#usage) in the README of the `docker/build-push-action` repository.
+      # The `tags` and `labels` parameters tag and label the image with the output from the "meta" step.
+      - name: Build and push Docker image
+        id: push
+        uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
+        with:
+          context: .
+          push: true
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+
+      # This step generates an artifact attestation for the image, which is an unforgeable statement about where and how it was built. It increases supply-chain security for people who consume the image. For more information, see [Using artifact attestations to establish provenance for builds](/actions/security-guides/using-artifact-attestations-to-establish-provenance-for-builds).
+      - name: Generate artifact attestation
+        uses: actions/attest-build-provenance@v2
+        with:
+          subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
+          subject-digest: ${{ steps.push.outputs.digest }}
+          push-to-registry: true
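The `tags:` block above asks docker/metadata-action for two tag types. A rough Python sketch of the resulting tag names (a simplification, assuming the action's documented defaults; the real action is highly configurable and the helper below is purely illustrative):

```python
# Hypothetical helper mimicking docker/metadata-action's default naming for
# `type=ref,event=branch` and `type=sha` (short SHA with a "sha-" prefix).
def derive_tags(image: str, branch: str, sha: str) -> list:
    """Return the image tags the two configured tag types would produce."""
    return [
        f"{image}:{branch}",       # type=ref,event=branch -> branch name
        f"{image}:sha-{sha[:7]}",  # type=sha -> short commit SHA
    ]

image = "ghcr.io/SFI-Visual-Intelligence/Collaborative-Coding-Exam"
for tag in derive_tags(image, "main", "1327fde0123456789abc"):
    print(tag)
```

Either tag can then be used with `docker pull`, while the attestation step binds the pushed digest, not a tag, to its build provenance.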

.github/workflows/format.yml

Lines changed: 2 additions & 0 deletions
@@ -4,9 +4,11 @@ on:
   push:
     paths:
       - 'CollaborativeCoding/**'
+      - 'tests/**'
   pull_request:
     paths:
       - 'CollaborativeCoding/**'
+      - 'tests/**'

 jobs:
   format:

.github/workflows/test.yml

Lines changed: 6 additions & 2 deletions
@@ -2,9 +2,13 @@ name: Test

 on:
   push:
-    branches: [ main ]
+    paths:
+      - 'CollaborativeCoding/**'
+      - 'tests/**'
   pull_request:
-    branches: [ main ]
+    paths:
+      - 'CollaborativeCoding/**'
+      - 'tests/**'

 jobs:
   test:

CollaborativeCoding/dataloaders/mnist_4_9.py

Lines changed: 4 additions & 0 deletions
@@ -19,6 +19,10 @@ class MNISTDataset4_9(Dataset):
         Array of indices specifying which samples to load. This determines the samples used by the dataloader.
     train : bool, optional
         Whether to train the model or not, by default False
+    transform : callable, optional
+        Transform to apply to the images, by default None
+    nr_channels : int, optional
+        Number of channels in the images, by default 1
     """

     def __init__(
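The two newly documented parameters follow the usual PyTorch dataset pattern: an optional callable applied per sample, and a channel count stored for downstream use. A minimal sketch of that wiring (a hypothetical toy class, not the repository's actual `MNISTDataset4_9` implementation):

```python
# Toy dataset illustrating the documented `transform` and `nr_channels`
# parameters; samples are (image, label) pairs of plain Python values here.
class TinyDataset:
    def __init__(self, samples, transform=None, nr_channels=1):
        self.samples = samples
        self.transform = transform      # callable or None (the default)
        self.nr_channels = nr_channels  # e.g. 1 for grayscale MNIST digits

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, label = self.samples[idx]
        if self.transform is not None:  # apply only when a transform was given
            image = self.transform(image)
        return image, label

ds = TinyDataset([(2, 0), (4, 1)], transform=lambda x: x * 10)
print(ds[1])  # -> (40, 1)
```

Defaulting `transform` to `None` keeps the raw samples reachable, which is convenient when a `DataLoader` collate step or test wants untransformed data.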

CollaborativeCoding/dataloaders/uspsh5_7_9.py

Lines changed: 1 addition & 33 deletions
@@ -2,10 +2,8 @@

 import h5py
 import numpy as np
-import torch
 from PIL import Image
 from torch.utils.data import Dataset
-from torchvision import transforms


 class USPSH5_Digit_7_9_Dataset(Dataset):
@@ -55,6 +53,7 @@ def __init__(
         self.h5_path = data_path / self.filename
         self.sample_ids = sample_ids
         self.nr_channels = nr_channels
+        self.num_classes = 3

         # Load the dataset from the HDF5 file
         with h5py.File(self.filepath, "r") as hf:
@@ -103,34 +102,3 @@ def __getitem__(self, id):
             image = self.transform(image)

         return image, label
-
-
-def main():
-    # Example usage:
-    transform = transforms.Compose(
-        [
-            transforms.Resize((16, 16)),  # Ensure images are 16x16
-            transforms.ToTensor(),
-            transforms.Normalize((0.5,), (0.5,)),  # Normalize to [-1, 1]
-        ]
-    )
-    indices = np.array([7, 8, 9])
-    # Load the dataset
-    dataset = USPSH5_Digit_7_9_Dataset(
-        data_path="C:/Users/Solveig/OneDrive/Dokumente/UiT PhD/Courses/Git",
-        sample_ids=indices,
-        train=False,
-        transform=transform,
-    )
-    data_loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True)
-    batch = next(iter(data_loader))  # grab a batch from the dataloader
-    img, label = batch
-    print(img.shape)
-    print(label.shape)
-
-    # Check dataset size
-    print(f"Dataset size: {len(dataset)}")
-
-
-if __name__ == "__main__":
-    main()
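The dataset covers digits 7–9, hence the new `self.num_classes = 3` attribute. A common convention for such a restricted-digit dataset (assumed here for illustration; the diff does not show the actual mapping) is to shift raw digit labels into the range `0..num_classes-1` so a 3-way classifier head can consume them directly:

```python
# Hypothetical label remapping for a three-class digit subset (7, 8, 9).
def remap_label(digit, low=7, num_classes=3):
    """Map a raw USPS digit in [low, low+num_classes) to a class index."""
    if not low <= digit < low + num_classes:
        raise ValueError(f"digit {digit} outside supported range")
    return digit - low  # 7 -> 0, 8 -> 1, 9 -> 2

print([remap_label(d) for d in (7, 8, 9)])  # -> [0, 1, 2]
```

Storing `num_classes` on the dataset lets model-construction code query the output dimension from the data source instead of hard-coding it.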
