doc/sphinx-guides/source/admin/integrations.rst (0 additions, 5 deletions)

@@ -240,11 +240,6 @@ Discoverability

 A number of builtin features related to data discovery are listed under :doc:`discoverability` but you can further increase the discoverability of your data by setting up integrations.
-
-SHARE
-+++++
-
-`SHARE <http://www.share-research.org>`_ is building a free, open, data set about research and scholarly activities across their life cycle. It's possible to add a Dataverse installation as one of the `sources <https://share.osf.io/sources>`_ they include if you contact the SHARE team.
doc/sphinx-guides/source/api/native-api.rst (51 additions, 4 deletions)

@@ -4208,24 +4208,24 @@ Delete files from a dataset. This API call allows you to delete multiple files f

     curl -H "X-Dataverse-key:$API_TOKEN" -X PUT "$SERVER_URL/api/datasets/:persistentId/deleteFiles?persistentId=$PERSISTENT_IDENTIFIER" \
       -H "Content-Type: application/json" \
-      -d '{"fileIds": [1, 2, 3]}'
+      -d '[1, 2, 3]'

 The fully expanded example above (without environment variables) looks like this:

 .. code-block:: bash

     curl -H "X-Dataverse-key:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -X PUT "https://demo.dataverse.org/api/datasets/:persistentId/deleteFiles?persistentId=doi:10.5072/FK2ABCDEF" \
       -H "Content-Type: application/json" \
-      -d '{"fileIds": [1, 2, 3]}'
+      -d '[1, 2, 3]'

-The ``fileIds``in the JSON payload should be an array of file IDs that you want to delete from the dataset.
+The JSON payload should be an array of file IDs that you want to delete from the dataset.

 You must have the appropriate permissions to delete files from the dataset.

 Upon success, the API will return a JSON response with a success message and the number of files deleted.

 The API call will report a 400 (BAD REQUEST) error if any of the files specified do not exist or are not in the latest version of the specified dataset.

-The ``fileIds``in the JSON payload should be an array of file IDs that you want to delete from the dataset.
+The JSON payload should be an array of file IDs that you want to delete from the dataset.

 .. _api-dataset-role-assignment-history:
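The hunk above switches the request body from an object wrapping the IDs to a bare JSON array. A minimal Python sketch of the two payload shapes (the HTTP call itself is left as a comment, matching the curl examples above):

```python
import json

# Before this change, the endpoint expected an object wrapping the IDs:
old_payload = json.dumps({"fileIds": [1, 2, 3]})

# After this change, the body is a bare JSON array of file IDs:
new_payload = json.dumps([1, 2, 3])

# The curl examples in the docs send new_payload as the -d body of a PUT to
# /api/datasets/:persistentId/deleteFiles with Content-Type: application/json.
print(new_payload)  # prints [1, 2, 3]
```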
@@ -4357,6 +4357,53 @@ The CSV response for this call is the same as for the /api/datasets/{id}/assignm

 Note: This feature requires the "role-assignment-history" feature flag to be enabled (see :ref:`feature-flags`).

+Update Dataset License
+~~~~~~~~~~~~~~~~~~~~~~
+
+Updates the license of a dataset by applying it to the draft version, or by creating a draft if none exists.
+
+The JSON representation of a license can take two forms, depending on whether you want to specify a predefined license or define custom terms of use and access.
+
+To set a predefined license (e.g., CC BY 4.0), provide a JSON body with the license name:
+
+.. code-block:: json
+
+  {
+    "name": "CC BY 4.0"
+  }
+
+To define custom terms of use and access, provide a JSON body with the following properties. All fields within ``customTerms`` are optional, except for the ``termsOfUse`` field, which is required:
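The two payload forms described above can be sketched in Python. The ``name``, ``customTerms``, and ``termsOfUse`` keys come from the documentation above; the example terms text is made up for illustration:

```python
import json

# Form 1: a predefined license, selected by name.
predefined = {"name": "CC BY 4.0"}

# Form 2: custom terms. Only "termsOfUse" is required inside "customTerms";
# all other customTerms fields are optional and omitted in this sketch.
custom = {"customTerms": {"termsOfUse": "Please cite this dataset when reusing it."}}

print(json.dumps(predefined))
print(json.dumps(custom))
```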
doc/sphinx-guides/source/developers/making-library-releases.rst (10 additions, 50 deletions)

@@ -13,7 +13,9 @@ Note: See :doc:`making-releases` for Dataverse itself.

 We release Java libraries to Maven Central that are used by Dataverse (and perhaps `other <https://github.com/gdcc/xoai/issues/141>`_ `software <https://github.com/gdcc/xoai/issues/170>`_!):

 We release JavaScript/TypeScript libraries to npm:

@@ -109,60 +111,18 @@ Releasing a New Library to Maven Central

 At a high level:

 - Start with a snapshot release.
-- Use an existing pom.xml as a starting point.
-- Use existing GitHub Actions workflows as a starting point.
-- Create secrets in the new library's GitHub repo used by the workflow.
-- If you need an entire new namespace, look at previous issues such as https://issues.sonatype.org/browse/OSSRH-94575 and https://issues.sonatype.org/browse/OSSRH-94577
-
-Updating pom.xml for a Snapshot Release
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Before publishing a final version to Maven Central, you should publish a snapshot release or two. For each snapshot release you publish, the jar name will be unique each time (e.g. ``foobar-0.0.1-20240430.175110-3.jar``), so you can safely publish over and over with the same version number.
-
-We use the `Nexus Staging Maven Plugin <https://github.com/sonatype/nexus-maven-plugins/blob/main/staging/maven-plugin/README.md>`_ to push snapshot releases to https://s01.oss.sonatype.org/content/groups/staging/io/gdcc/ and https://s01.oss.sonatype.org/content/groups/staging/org/dataverse/
-
-Add the following to your pom.xml:
-
-.. code-block:: xml
+- Use an existing pom.xml as a starting point, such as from `Croissant <https://github.com/gdcc/exporter-croissant>`_, that inherits from the common Maven parent (https://github.com/gdcc/maven-parent). You can also play around with the "hello" project (https://github.com/gdcc/hello) and even make releases from it since it is designed to be a sandbox for publishing to Maven Central.
+- Use existing GitHub Actions workflows as a starting point, such as from `Croissant <https://github.com/gdcc/exporter-croissant>`_. As of this writing we have separate actions for ``maven-snapshot.yml`` and ``maven-release.yml``.
+- For repos under https://github.com/IQSS, create secrets in the new library's GitHub repo used by the workflow. This is necessary for the IQSS org because "organization secrets are not available for organizations on legacy per-repository billing plans." For repos under https://github.com/gdcc you can make use of shared secrets at the org level. These are the environment variables we use:

-In GitHub, you will likely need to configure the following secrets:
-
-- DATAVERSEBOT_GPG_KEY
-- DATAVERSEBOT_GPG_PASSWORD
-- DATAVERSEBOT_SONATYPE_TOKEN
-- DATAVERSEBOT_SONATYPE_USERNAME
-
-Note that some of these secrets might be configured at the org level (e.g. gdcc or IQSS).
-
-Many of the automated tasks are performed by the dataversebot account on GitHub: https://github.com/dataversebot
+  - DATAVERSEBOT_GPG_KEY
+  - DATAVERSEBOT_GPG_PASSWORD
+  - DATAVERSEBOT_SONATYPE_TOKEN
+  - DATAVERSEBOT_SONATYPE_USERNAME
+- If you need an entire new namespace, look at previous issues such as https://issues.sonatype.org/browse/OSSRH-94575 and https://issues.sonatype.org/browse/OSSRH-94577
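The "use an existing pom.xml as a starting point" advice can be sketched as a minimal pom that inherits from the common Maven parent. The parent coordinates and artifact names below are illustrative assumptions; copy the real values from https://github.com/gdcc/maven-parent or an existing library such as Croissant:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Inherit shared build/publishing configuration from the gdcc Maven
       parent. Coordinates here are placeholders; check the real parent pom. -->
  <parent>
    <groupId>io.gdcc</groupId>
    <artifactId>parent</artifactId>
    <version>1.0.0</version>
  </parent>
  <!-- Your library's own coordinates; start with a -SNAPSHOT version. -->
  <artifactId>my-exporter</artifactId>
  <version>0.0.1-SNAPSHOT</version>
</project>
```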
doc/sphinx-guides/source/installation/config.rst (19 additions, 0 deletions)

@@ -115,6 +115,23 @@ See the :ref:`payara` section of :doc:`prerequisites` for details and init scrip

 Related to this is that you should remove ``/root/.payara/pass`` to ensure that Payara isn't ever accidentally started as root. Without the password, Payara won't be able to start as root, which is a good thing.

+.. _payara-ports-localhost-only:
+
+Restricting Payara's Ports to localhost
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In the recommended setup of Dataverse, you do not expose Payara's ports directly to the Internet. Rather, you front Payara with a proxy such as Apache.
+
+If you are running Payara and your proxy on the same server, we recommend having Payara listen only to localhost, which is how your proxy talks to it, with the following command:
+
+``./asadmin set server-config.network-config.network-listeners.network-listener.http-listener-1.address=127.0.0.1``
+
+(You should **NOT** use the configuration option above if you are running in a load-balanced environment, or otherwise have your proxy on a different host than Payara.)
+
+To test that Payara is now only listening on localhost, try hitting port 8080 from the Internet. Payara should not respond.
+
+See also :ref:`network-ports`.
+

 .. _secure-password-storage:

 Secure Password Storage

@@ -246,6 +263,8 @@ If you are running an installation with Apache and Payara on the same server, an

 You should **NOT** use the configuration option above if you are running in a load-balanced environment, or otherwise have the web server on a different host than the application server.

+This security tip is also mentioned at :ref:`payara-ports-localhost-only`.
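The "try hitting port 8080" smoke test in the new section can be scripted. A minimal Python sketch (host names in the comments are placeholders) that reports whether a TCP connection succeeds:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After restricting the listener, run this from the server itself and from an
# outside machine, e.g.:
#   port_open("127.0.0.1", 8080)            # from the server: should succeed
#   port_open("your.public.hostname", 8080) # from outside: should fail
```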
doc/sphinx-guides/source/user/dataset-management.rst (3 additions, 1 deletion)

@@ -704,7 +704,7 @@ If you have a Contributor role (can edit metadata, upload files, and edit files,

 Preview URL to Review Unpublished Dataset
 =========================================

-Creating a Preview URL for a draft version of your dataset allows you to share your dataset (for viewing and downloading of files) before it is published to a wide group of individuals who may not have a user account on the Dataverse installation. Anyone you send the Preview URL to will not have to log into the Dataverse installation to view the unpublished dataset. Once a dataset has been published you may create new General Preview URLs for subsequent draft versions, but the Anonymous Preview URL will no longer be available.
+Creating a Preview URL for a draft version of your dataset allows you to share your dataset (for viewing and downloading files, including :ref:`restricted <restricted-files>` and :ref:`embargoed <embargoes>` files) before it is published to a wide group of people who might not have a user account on the Dataverse installation. Anyone you send the Preview URL to will not have to log in to the Dataverse installation to view the unpublished dataset. Once a dataset has been published, you may create new General Preview URLs for subsequent draft versions, but the Anonymous Preview URL will no longer be available.

 **Note:** To create a Preview URL, you must have the *ManageDatasetPermissions* permission for your draft dataset, usually given by the :ref:`roles <permissions>` *Curator* or *Administrator*.

@@ -726,6 +726,8 @@ To disable a Preview URL and to revoke access, follow the same steps as above un

 Note that only one Preview URL (normal or with anonymized access) can be configured per dataset at a time.