Commit b621459

Merge branch 'develop' into 11639-db-opts-idempotency

2 parents: 9497acd + a042fc5

43 files changed (+881, -815 lines)


.github/workflows/deploy_beta_testing.yml

Lines changed: 2 additions & 2 deletions

@@ -36,7 +36,7 @@ jobs:
         run: echo "war_file=$(ls *.war | head -1)">> $GITHUB_ENV

       - name: Upload war artifact
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v5
         with:
           name: built-app
           path: ./target/${{ env.war_file }}
@@ -50,7 +50,7 @@ jobs:
       - uses: actions/checkout@v5

       - name: Download war artifact
-        uses: actions/download-artifact@v5
+        uses: actions/download-artifact@v6
         with:
           name: built-app
           path: ./

.github/workflows/maven_unit_test.yml

Lines changed: 5 additions & 5 deletions

@@ -62,7 +62,7 @@ jobs:

       # Upload the built war file. For download, it will be wrapped in a ZIP by GitHub.
       # See also https://github.com/actions/upload-artifact#zipped-artifact-downloads
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v5
         with:
           name: dataverse-java${{ matrix.jdk }}.war
           path: target/dataverse*.war
@@ -72,7 +72,7 @@ jobs:
       - run: |
           tar -cvf java-builddir.tar target
           tar -cvf java-m2-selection.tar ~/.m2/repository/io/gdcc/dataverse-*
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v5
         with:
           name: java-artifacts
           path: |
@@ -112,7 +112,7 @@ jobs:
           cache: maven

       # Get the build output from the unit test job
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@v6
         with:
           name: java-artifacts
       - run: |
@@ -124,7 +124,7 @@ jobs:

       # Wrap up and send to coverage job
       - run: tar -cvf java-reportdir.tar target/site
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v5
         with:
           name: java-reportdir
           path: java-reportdir.tar
@@ -145,7 +145,7 @@ jobs:
           cache: maven

       # Get the build output from the integration test job
-      - uses: actions/download-artifact@v5
+      - uses: actions/download-artifact@v6
         with:
           name: java-reportdir
       - run: tar -xvf java-reportdir.tar
Lines changed: 13 additions & 0 deletions

@@ -0,0 +1,13 @@
+## New Endpoint: `/datasets/{id}/license`
+
+A new endpoint has been implemented to manage dataset licenses.
+
+### Functionality
+- Updates the license of a dataset by applying it to the draft version.
+- If no draft exists, a new one is automatically created.
+
+### Usage
+This endpoint supports two ways of defining a license:
+1. **Predefined License** – Provide the license name (e.g., `CC BY 4.0`).
+2. **Custom Terms of Use and Access** – Provide a JSON body with the `customTerms` object.
+   - All fields are optional **except** `termsOfUse`, which is required.
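As a quick illustration of the payload rules in this release note, here is a minimal sketch (file name and field values are placeholders) that builds a custom-terms body containing the required `termsOfUse` field and checks that it parses as JSON before it is sent to the endpoint:

```shell
# Build a customTerms payload; only termsOfUse is required, all other fields are optional.
cat > license.json <<'EOF'
{
  "customTerms": {
    "termsOfUse": "Your terms of use"
  }
}
EOF

# Sanity-check: the payload parses as JSON and contains the required field.
python3 -c 'import json; body = json.load(open("license.json")); assert "termsOfUse" in body["customTerms"]; print("payload ok")'
```

Once validated, the file can be uploaded with the `curl ... --upload-file license.json` call shown in the native API guide.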
Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+### Suppression of the Host Dataverse field
+
+When creating a dataset, the _host dataverse_ field is not shown when the user can only add datasets to one collection.
Lines changed: 10 additions & 0 deletions

@@ -0,0 +1,10 @@
+### Dataset version summaries API changedFileMetaData count fix
+
+The endpoint ``{id}/versions/compareSummary`` was previously returning an incorrect count for
+the ``changedFileMetaData`` field.
+The logic for calculating this count has been fixed to accurately reflect the total number of file metadata changes
+across all files in the dataset version.
+
+### Related issues
+
+- https://github.com/IQSS/dataverse/issues/11921
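The corrected semantics can be sketched with a small example (the field names below are hypothetical, not the actual compareSummary response shape): the fixed count sums metadata changes across all files, rather than counting only how many files changed.

```shell
# Two files changed: one with 2 modified metadata fields, one with 1.
cat > summary.json <<'EOF'
{"files": [
  {"id": 101, "changedFields": ["label", "description"]},
  {"id": 102, "changedFields": ["label"]}
]}
EOF

# The fixed count sums changes across all files: 2 + 1 = 3 (not 2, the number of changed files).
python3 -c 'import json; s = json.load(open("summary.json")); print(sum(len(f["changedFields"]) for f in s["files"]))'
```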

doc/sphinx-guides/source/admin/integrations.rst

Lines changed: 0 additions & 5 deletions

@@ -240,11 +240,6 @@ Discoverability

 A number of builtin features related to data discovery are listed under :doc:`discoverability` but you can further increase the discoverability of your data by setting up integrations.

-SHARE
-+++++
-
-`SHARE <http://www.share-research.org>`_ is building a free, open, data set about research and scholarly activities across their life cycle. It's possible to add a Dataverse installation as one of the `sources <https://share.osf.io/sources>`_ they include if you contact the SHARE team.
-
 Geodisy
 +++++++

doc/sphinx-guides/source/api/native-api.rst

Lines changed: 51 additions & 4 deletions

@@ -4208,24 +4208,24 @@ Delete files from a dataset. This API call allows you to delete multiple files f

     curl -H "X-Dataverse-key:$API_TOKEN" -X PUT "$SERVER_URL/api/datasets/:persistentId/deleteFiles?persistentId=$PERSISTENT_IDENTIFIER" \
       -H "Content-Type: application/json" \
-      -d '{"fileIds": [1, 2, 3]}'
+      -d '[1, 2, 3]'

 The fully expanded example above (without environment variables) looks like this:

 .. code-block:: bash

     curl -H "X-Dataverse-key:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -X PUT "https://demo.dataverse.org/api/datasets/:persistentId/deleteFiles?persistentId=doi:10.5072/FK2ABCDEF" \
       -H "Content-Type: application/json" \
-      -d '{"fileIds": [1, 2, 3]}'
+      -d '[1, 2, 3]'

-The ``fileIds`` in the JSON payload should be an array of file IDs that you want to delete from the dataset.
+The JSON payload should be an array of file IDs that you want to delete from the dataset.

 You must have the appropriate permissions to delete files from the dataset.

 Upon success, the API will return a JSON response with a success message and the number of files deleted.

 The API call will report a 400 (BAD REQUEST) error if any of the files specified do not exist or are not in the latest version of the specified dataset.
-The ``fileIds`` in the JSON payload should be an array of file IDs that you want to delete from the dataset.
+The JSON payload should be an array of file IDs that you want to delete from the dataset.

 .. _api-dataset-role-assignment-history:

@@ -4357,6 +4357,53 @@ The CSV response for this call is the same as for the /api/datasets/{id}/assignm

 Note: This feature requires the "role-assignment-history" feature flag to be enabled (see :ref:`feature-flags`).

+Update Dataset License
+~~~~~~~~~~~~~~~~~~~~~~
+
+Updates the license of a dataset by applying it to the draft version, or by creating a draft if none exists.
+
+The JSON representation of a license can take two forms, depending on whether you want to specify a predefined license or define custom terms of use and access.
+
+To set a predefined license (e.g., CC BY 4.0), provide a JSON body with the license name:
+
+.. code-block:: json
+
+    {
+      "name": "CC BY 4.0"
+    }
+
+To define custom terms of use and access, provide a JSON body with the following properties. All fields within ``customTerms`` are optional, except for the ``termsOfUse`` field, which is required:
+
+.. code-block:: json
+
+    {
+      "customTerms": {
+        "termsOfUse": "Your terms of use",
+        "confidentialityDeclaration": "Your confidentiality declaration",
+        "specialPermissions": "Your special permissions",
+        "restrictions": "Your restrictions",
+        "citationRequirements": "Your citation requirements",
+        "depositorRequirements": "Your depositor requirements",
+        "conditions": "Your conditions",
+        "disclaimer": "Your disclaimer"
+      }
+    }
+
+.. code-block:: bash
+
+    export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+    export SERVER_URL=https://demo.dataverse.org
+    export ID=3
+    export FILE_PATH=license.json
+
+    curl -H "X-Dataverse-key:$API_TOKEN" -X PUT "$SERVER_URL/api/datasets/$ID/license" -H "Content-type:application/json" --upload-file $FILE_PATH
+
+The fully expanded example above (without environment variables) looks like this:
+
+.. code-block:: bash
+
+    curl -H "X-Dataverse-key:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -X PUT "https://demo.dataverse.org/api/datasets/3/license" -H "Content-type:application/json" --upload-file license.json
+
 Files
 -----

doc/sphinx-guides/source/container/dev-usage.rst

Lines changed: 17 additions & 5 deletions

@@ -145,13 +145,13 @@ Accessing Harvesting Log Files

 \1. Open a terminal and access the Dataverse container.

-Run the following command to access the Dataverse container (assuming your container is named dataverse-1):
+Run the following command to access the Dataverse container:

 .. code-block::

-    docker exec -it dataverse-1 bash
+    docker exec -it dev_dataverse bash

-This command opens an interactive shell within the dataverse-1 container.
+This command opens an interactive shell within the dev_dataverse container.

 \2. Navigate to the log files directory.

@@ -233,6 +233,13 @@ Hotswapping methods requires using JDWP (Debug Mode), but does not allow switchi
 **Requires IntelliJ Ultimate!**
 (Note that `free educational licenses <https://www.jetbrains.com/community/education/>`_ are available)

+Go to settings, then plugins. Install "Payara Ultimate Tools". For more information:
+
+- `plugin homepage <https://plugins.jetbrains.com/plugin/15114-payara-ultimate-tools>`_
+- `docs <https://docs.payara.fish/community/docs/Technical%20Documentation/Ecosystem/IDE%20Integration/IntelliJ%20Plugin/Overview.html>`_
+- `source <https://github.com/payara/ecosystem-intellij-plugin>`_
+- `issues <https://github.com/payara/ecosystem-support>`_
+
 .. image:: img/intellij-payara-plugin-install.png

 #. Configure a connection to Payara:
@@ -284,6 +291,7 @@ Hotswapping methods requires using JDWP (Debug Mode), but does not allow switchi

 You might want to tweak the hot deploy behavior in the "Server" tab now.
 "Update action" can be found in the run window (see below).
+By default it is "Hot Swap classes", which works fine, but as the screenshot shows you can also change it to "Redeploy".
 "Frame deactivation" means switching from IntelliJ window to something else, e.g. your browser.
 *Note: static resources like properties, XHTML etc will only update when redeploying!*

@@ -305,7 +313,11 @@ Hotswapping methods requires using JDWP (Debug Mode), but does not allow switchi
 See cheat sheet above for more options.
 Note that this command either assumes you built the :doc:`app-image` first or will download it from Docker Hub.
 .. group-tab:: IntelliJ
-    You can create a service configuration to automatically start services for you.
+    Note that you can skip this step if you're ok running the command under the "Maven" tab, which is this:
+
+    ``mvn -Pct docker:run -Dapp.skipDeploy``
+
+    In IntelliJ you can create a service configuration to automatically start services for you.

 **IMPORTANT**: This requires installation of the `Docker plugin <https://plugins.jetbrains.com/plugin/7724-docker>`_.

@@ -362,7 +374,7 @@ Hotswapping methods requires using JDWP (Debug Mode), but does not allow switchi

 .. image:: img/intellij-payara-run-output.png

-Manually hotswap classes in "Debug" mode via "Run" > "Debugging Actions" > "Reload Changed Classes".
+Manually hotswap classes in "Debug" mode via "Run" > "Debugging Actions" > "Compile and Reload Modified Files".

 .. image:: img/intellij-payara-run-menu-reload.png

doc/sphinx-guides/source/developers/making-library-releases.rst

Lines changed: 10 additions & 50 deletions

@@ -13,7 +13,9 @@ Note: See :doc:`making-releases` for Dataverse itself.
 We release Java libraries to Maven Central that are used by Dataverse (and perhaps `other <https://github.com/gdcc/xoai/issues/141>`_ `software <https://github.com/gdcc/xoai/issues/170>`_!):

 - https://central.sonatype.com/namespace/org.dataverse
+- https://central.sonatype.com/namespace/org.dataverse.test
 - https://central.sonatype.com/namespace/io.gdcc
+- https://central.sonatype.com/namespace/io.gdcc.export

 We release JavaScript/TypeScript libraries to npm:

@@ -109,60 +111,18 @@ Releasing a New Library to Maven Central
 At a high level:

 - Start with a snapshot release.
-- Use an existing pom.xml as a starting point.
-- Use existing GitHub Actions workflows as a starting point.
-- Create secrets in the new library's GitHub repo used by the workflow.
-- If you need an entire new namespace, look at previous issues such as https://issues.sonatype.org/browse/OSSRH-94575 and https://issues.sonatype.org/browse/OSSRH-94577
-
-Updating pom.xml for a Snapshot Release
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Before publishing a final version to Maven Central, you should publish a snapshot release or two. For each snapshot release you publish, the jar name will be unique each time (e.g. ``foobar-0.0.1-20240430.175110-3.jar``), so you can safely publish over and over with the same version number.
-
-We use the `Nexus Staging Maven Plugin <https://github.com/sonatype/nexus-maven-plugins/blob/main/staging/maven-plugin/README.md>`_ to push snapshot releases to https://s01.oss.sonatype.org/content/groups/staging/io/gdcc/ and https://s01.oss.sonatype.org/content/groups/staging/org/dataverse/
-
-Add the following to your pom.xml:
-
-.. code-block:: xml
+- Use an existing pom.xml as a starting point, such as from `Croissant <https://github.com/gdcc/exporter-croissant>`_, that inherits from the common Maven parent (https://github.com/gdcc/maven-parent). You can also play around with the "hello" project (https://github.com/gdcc/hello) and even make releases from it since it is designed to be a sandbox for publishing to Maven Central.
+- Use existing GitHub Actions workflows as a starting point, such as from `Croissant <https://github.com/gdcc/exporter-croissant>`_. As of this writing we have separate actions for ``maven-snapshot.yml`` and ``maven-release.yml``.
+- For repos under https://github.com/IQSS, create secrets in the new library's GitHub repo used by the workflow. This is necessary for the IQSS org because "organization secrets are not available for organizations on legacy per-repository billing plans." For repos under https://github.com/gdcc you can make use of shared secrets at the org level. These are the environment variables we use:

-    <version>0.0.1-SNAPSHOT</version>
+  - DATAVERSEBOT_GPG_KEY

-    <distributionManagement>
-      <snapshotRepository>
-        <id>ossrh</id>
-        <url>https://s01.oss.sonatype.org/content/repositories/snapshots</url>
-      </snapshotRepository>
-      <repository>
-        <id>ossrh</id>
-        <url>https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/</url>
-      </repository>
-    </distributionManagement>
+  - DATAVERSEBOT_GPG_PASSWORD

-    <plugin>
-      <groupId>org.sonatype.plugins</groupId>
-      <artifactId>nexus-staging-maven-plugin</artifactId>
-      <version>${nexus-staging.version}</version>
-      <extensions>true</extensions>
-      <configuration>
-        <serverId>ossrh</serverId>
-        <nexusUrl>https://s01.oss.sonatype.org</nexusUrl>
-        <autoReleaseAfterClose>true</autoReleaseAfterClose>
-      </configuration>
-    </plugin>
+  - DATAVERSEBOT_SONATYPE_TOKEN

-Configuring Secrets
-~~~~~~~~~~~~~~~~~~~
-
-In GitHub, you will likely need to configure the following secrets:
-
-- DATAVERSEBOT_GPG_KEY
-- DATAVERSEBOT_GPG_PASSWORD
-- DATAVERSEBOT_SONATYPE_TOKEN
-- DATAVERSEBOT_SONATYPE_USERNAME
-
-Note that some of these secrets might be configured at the org level (e.g. gdcc or IQSS).
-
-Many of the automated tasks are performed by the dataversebot account on GitHub: https://github.com/dataversebot
+  - DATAVERSEBOT_SONATYPE_USERNAME
+- If you need an entire new namespace, look at previous issues such as https://issues.sonatype.org/browse/OSSRH-94575 and https://issues.sonatype.org/browse/OSSRH-94577

 npm (JavaScript/TypeScript)
 ---------------------------

doc/sphinx-guides/source/developers/tips.rst

Lines changed: 1 addition & 1 deletion

@@ -124,7 +124,7 @@ Here's an example of using these credentials from within the PostgreSQL containe

 .. code-block:: bash

-    pdurbin@beamish dataverse % docker exec -it postgres-1 bash
+    pdurbin@beamish dataverse % docker exec -it dev_postgres bash
     root@postgres:/# export PGPASSWORD=secret
     root@postgres:/# psql -h localhost -U dataverse dataverse
     psql (16.3 (Debian 16.3-1.pgdg120+1))
