Commit d2b6a46

Merge pull request #11997 from IQSS/11987-storage-quotas-on-datasets
Adds configurable storage quotas on individual datasets
2 parents 37388cc + 4c7b190 commit d2b6a46

File tree: 16 files changed, +611 −24 lines
Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
+It is now possible to define storage quotas on individual datasets. See the API guide for more information.
+The practical use case is for datasets in the top-level, root collection. This does not address the use case of a user creating multiple datasets, but there is an open development issue for adding per-user storage quotas as well.
+
+A convenience API `/api/datasets/{id}/uploadlimits` has been added to show the remaining storage and/or file count quotas, if present.
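For client code, the response of the `uploadlimits` convenience API can be consumed along these lines. A minimal sketch: the JSON shape (`data.uploadLimits` with optional `storageQuotaRemaining` and `numberOfFilesRemaining` keys) is taken from the API guide additions in this commit; the helper names are hypothetical.

```python
import json

def parse_upload_limits(response_text):
    """Extract the remaining-quota values from an /api/datasets/{id}/uploadlimits
    response. Both keys are optional: an absent key means no limit of that kind."""
    body = json.loads(response_text)
    limits = body["data"]["uploadLimits"]
    return limits.get("storageQuotaRemaining"), limits.get("numberOfFilesRemaining")

def upload_fits(response_text, upload_bytes, upload_file_count):
    """True if an upload of the given total size and file count stays within
    whichever limits are present."""
    bytes_left, files_left = parse_upload_limits(response_text)
    if bytes_left is not None and upload_bytes > bytes_left:
        return False
    if files_left is not None and upload_file_count > files_left:
        return False
    return True

# The example payload from the API guide: 20 files and 1 MiB remaining.
example = '{"status":"OK","data":{"uploadLimits":{"numberOfFilesRemaining":20,"storageQuotaRemaining":1048576}}}'
print(upload_fits(example, 512 * 1024, 5))   # a 512 KiB, 5-file upload fits
print(upload_fits(example, 2 * 1048576, 5))  # exceeds the remaining byte quota
```

An empty `uploadLimits` object (neither limit configured) means nothing to enforce, so `upload_fits` returns `True` for any upload.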

doc/sphinx-guides/source/admin/dataverses-datasets.rst

Lines changed: 2 additions & 0 deletions
@@ -271,6 +271,8 @@ The effective store can be seen using::

     curl http://$SERVER/api/datasets/$dataset-id/storageDriver

+The output of the API will include the id, label, type (for example, "file" or "s3") as well as the support for direct download and upload.
+
 To remove an assigned store, and allow the dataset to inherit the store from its parent collection, use the following (only a superuser can do this)::

     curl -H "X-Dataverse-key: $API_TOKEN" -X DELETE http://$SERVER/api/datasets/$dataset-id/storageDriver

doc/sphinx-guides/source/api/native-api.rst

Lines changed: 72 additions & 5 deletions
@@ -1250,16 +1250,22 @@ Collection Storage Quotas

   curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/quota"

-Will output the storage quota allocated (in bytes), or a message indicating that the quota is not defined for the specific collection. The user identified by the API token must have the ``Manage`` permission on the collection.
+Will output the storage quota allocated (in bytes), or a message indicating that the quota is not defined for the collection. If this is an unpublished collection, the user must have the ``ViewUnpublishedDataverse`` permission.
+With an optional query parameter ``showInherited=true`` it will show the applicable quota potentially defined on the nearest parent when the collection does not have a quota configured directly.

+.. code-block::
+
+  curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/use"
+
+Will output the dynamically cached total storage size (in bytes) used by the collection. The user identified by the API token must have the ``Edit`` permission on the collection.

 To set or change the storage allocation quota for a collection:

 .. code-block::

-  curl -X POST -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/quota/$SIZE_IN_BYTES"
+  curl -X PUT -H "X-Dataverse-key:$API_TOKEN" -d $SIZE_IN_BYTES "$SERVER_URL/api/dataverses/$ID/storage/quota"

-This is API is superuser-only.
+This API is superuser-only.


 To delete a storage quota configured for a collection:

@@ -1268,9 +1274,70 @@ To delete a storage quota configured for a collection:

   curl -X DELETE -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/quota"

-This is API is superuser-only.
+This API is superuser-only.
+
+Storage Quotas on Individual Datasets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block::
+
+  curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/storage/quota"
+
+Will output the storage quota allocated (in bytes), or a message indicating that the quota is not defined for this dataset. If this is an unpublished dataset, the user must have the ``ViewUnpublishedDataset`` permission.
+With an optional query parameter ``showInherited=true`` it will show the applicable quota potentially defined on the nearest parent collection when the dataset does not have a quota configured directly.
+
+.. code-block::
+
+  curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/storage/use"
+
+Will output the dynamically cached total storage size (in bytes) used by the dataset. The user identified by the API token must have the ``Edit`` permission on the dataset.
+
+To set or change the storage allocation quota for a dataset:
+
+.. code-block::
+
+  curl -X PUT -H "X-Dataverse-key:$API_TOKEN" -d $SIZE_IN_BYTES "$SERVER_URL/api/datasets/$ID/storage/quota"
+
+This API is superuser-only.
+
+To delete a storage quota configured for a dataset:
+
+.. code-block::
+
+  curl -X DELETE -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/storage/quota"
+
+This API is superuser-only.
+
+The following convenience API shows the dynamic values of the *remaining* storage size and/or file number quotas on the dataset, if present. For example:
+
+.. code-block::
+
+  curl -H "X-Dataverse-key: $API_TOKEN" "http://localhost:8080/api/datasets/$dataset-id/uploadlimits"
+  {
+    "status": "OK",
+    "data": {
+      "uploadLimits": {
+        "numberOfFilesRemaining": 20,
+        "storageQuotaRemaining": 1048576
+      }
+    }
+  }
+
+Or, when neither limit is present:
+
+.. code-block::
+
+  {
+    "status": "OK",
+    "data": {
+      "uploadLimits": {}
+    }
+  }
+
+This API requires the Edit permission on the dataset.

-Use the ``/settings`` API to enable or disable the enforcement of storage quotas that are defined across the instance via the following setting. For example,
+Use the ``/settings`` API to enable or disable the enforcement of storage quotas that are defined across the instance via the following setting:

 .. code-block::
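The ``showInherited=true`` behavior documented above (fall back to the quota on the nearest parent when none is configured directly) can be sketched as a walk up the container hierarchy. This is an illustrative model only; ``Node``, ``effective_quota``, and the attribute names are hypothetical stand-ins, not Dataverse classes.

```python
class Node:
    """Hypothetical stand-in for a DvObjectContainer (dataset or collection)."""
    def __init__(self, quota=None, parent=None):
        self.quota = quota      # allocation in bytes, or None if not configured
        self.parent = parent

def effective_quota(node, show_inherited=False):
    """Return the quota configured directly on the node; otherwise, when
    show_inherited is requested, the quota on the nearest ancestor that has one."""
    if node.quota is not None:
        return node.quota
    if show_inherited:
        ancestor = node.parent
        while ancestor is not None:
            if ancestor.quota is not None:
                return ancestor.quota
            ancestor = ancestor.parent
    return None

root = Node(quota=10 * 1024**3)   # 10 GiB quota on the root collection
collection = Node(parent=root)    # no quota of its own
dataset = Node(parent=collection)

print(effective_quota(dataset))                       # None: no direct quota
print(effective_quota(dataset, show_inherited=True))  # inherited from root
```

Without ``show_inherited``, the lookup reports only what is configured on the object itself, matching the default behavior of the GET endpoints.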

src/main/java/edu/harvard/iq/dataverse/DatasetServiceBean.java

Lines changed: 22 additions & 1 deletion
@@ -24,6 +24,7 @@
 import edu.harvard.iq.dataverse.search.IndexServiceBean;
 import edu.harvard.iq.dataverse.settings.FeatureFlags;
 import edu.harvard.iq.dataverse.settings.SettingsServiceBean;
+import edu.harvard.iq.dataverse.storageuse.StorageQuota;
 import edu.harvard.iq.dataverse.util.BundleUtil;
 import edu.harvard.iq.dataverse.util.SystemConfig;
 import edu.harvard.iq.dataverse.workflows.WorkflowComment;
@@ -1118,5 +1119,25 @@ public int getDataFileCountByOwner(long id) {
         Long c = em.createNamedQuery("Dataset.countFilesByOwnerId", Long.class).setParameter("ownerId", id).getSingleResult();
         return c.intValue(); // ignoring the truncation since the number should never be too large
     }
-
+
+    /**
+     *
+     * @todo: consider moving the quota method, from here and the DataverseServiceBean,
+     * to DvObjectServiceBean.
+     */
+    public void saveStorageQuota(Dataset target, Long allocation) {
+        StorageQuota storageQuota = target.getStorageQuota();
+
+        if (storageQuota != null) {
+            storageQuota.setAllocation(allocation);
+            em.merge(storageQuota);
+        } else {
+            storageQuota = new StorageQuota();
+            storageQuota.setDefinitionPoint(target);
+            storageQuota.setAllocation(allocation);
+            target.setStorageQuota(storageQuota);
+            em.persist(storageQuota);
+        }
+        em.flush();
+    }
 }
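The `saveStorageQuota` method above is a classic upsert: update the existing quota row if one is attached, otherwise create one and point it back at the dataset. The same shape, stripped of JPA, looks like this; the Python classes are illustrative stand-ins, not the actual entities.

```python
class StorageQuota:
    """Stand-in for the StorageQuota entity: an allocation tied to one object."""
    def __init__(self, definition_point=None, allocation=None):
        self.definition_point = definition_point
        self.allocation = allocation

class DatasetStub:
    """Stand-in for the Dataset entity: holds at most one quota."""
    def __init__(self):
        self.storage_quota = None

def save_storage_quota(dataset, allocation):
    """Upsert mirroring the merge-or-persist branches: reuse the existing quota
    object when present (the em.merge() path), otherwise create a new one with
    the dataset as its definition point (the em.persist() path)."""
    quota = dataset.storage_quota
    if quota is not None:
        quota.allocation = allocation
    else:
        quota = StorageQuota(definition_point=dataset, allocation=allocation)
        dataset.storage_quota = quota
    return quota
```

Calling it twice with different allocations reuses the same quota object, which is the point of the upsert: each dataset carries at most one quota row.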

src/main/java/edu/harvard/iq/dataverse/DataverseServiceBean.java

Lines changed: 6 additions & 1 deletion
@@ -1307,7 +1307,12 @@ static String getBaseSchemaStringFromFile(String pathToJsonFile) {
         " },\n" +
         " \"required\": [\"datasetVersion\"]\n" +
         "}\n";
-
+
+    /**
+     *
+     * @todo: consider moving these quota methods, and the DatasetServiceBean
+     * equivalent to DvObjectServiceBean.
+     */
     public void saveStorageQuota(Dataverse target, Long allocation) {
         StorageQuota storageQuota = target.getStorageQuota();

src/main/java/edu/harvard/iq/dataverse/api/Datasets.java

Lines changed: 114 additions & 0 deletions
@@ -6158,4 +6158,118 @@ public Response updateLicense(@Context ContainerRequestContext crc,
             }
         }, getRequestUser(crc));
     }
+
+    /**
+     * Storage quotas and use. Note that these methods replicate the
+     * collection-level equivalents 1:1. Both the quotas and the system for
+     * caching the size of the storage in use are implemented on
+     * DvObjectContainers internally and therefore work identically in both
+     * cases.
+     */
+
+    @GET
+    @AuthRequired
+    @Path("{identifier}/storage/quota")
+    public Response getDatasetQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, @QueryParam("showInherited") boolean showInherited) throws WrappedResponse {
+        try {
+            Long bytesAllocated = execCommand(new GetDatasetQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDatasetOrDie(dvIdtf), showInherited));
+            if (bytesAllocated != null) {
+                return ok(MessageFormat.format(BundleUtil.getStringFromBundle("dataset.storage.quota.allocation"), bytesAllocated));
+            }
+            return ok(BundleUtil.getStringFromBundle("dataset.storage.quota.notdefined"));
+        } catch (WrappedResponse ex) {
+            return ex.getResponse();
+        }
+    }
+
+    @PUT
+    @AuthRequired
+    @Path("{identifier}/storage/quota")
+    public Response setDatasetQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, String value) throws WrappedResponse {
+        try {
+            Long bytesAllocated;
+            try {
+                bytesAllocated = Long.parseLong(value);
+            } catch (NumberFormatException nfe) {
+                return error(Status.BAD_REQUEST, value + " is not a valid number of bytes");
+            }
+            execCommand(new SetDatasetQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDatasetOrDie(dvIdtf), bytesAllocated));
+            return ok(BundleUtil.getStringFromBundle("dataset.storage.quota.updated"));
+        } catch (WrappedResponse ex) {
+            return ex.getResponse();
+        }
+    }
+
+    @DELETE
+    @AuthRequired
+    @Path("{identifier}/storage/quota")
+    public Response deleteDatasetQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf) throws WrappedResponse {
+        try {
+            execCommand(new DeleteDatasetQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDatasetOrDie(dvIdtf)));
+            return ok(BundleUtil.getStringFromBundle("dataset.storage.quota.deleted"));
+        } catch (WrappedResponse ex) {
+            return ex.getResponse();
+        }
+    }
+
+    /**
+     *
+     * @param crc
+     * @param identifier
+     * @return
+     * @throws edu.harvard.iq.dataverse.api.AbstractApiBean.WrappedResponse
+     * @todo: add an optional parameter that would force the recorded storage use
+     * to be recalculated (or should that be a POST version of this API?)
+     */
+    @GET
+    @AuthRequired
+    @Path("{identifier}/storage/use")
+    public Response getDatasetStorageUse(@Context ContainerRequestContext crc, @PathParam("identifier") String identifier) throws WrappedResponse {
+        return response(req -> ok(MessageFormat.format(BundleUtil.getStringFromBundle("dataset.storage.use"),
+                execCommand(new GetDatasetStorageUseCommand(req, findDatasetOrDie(identifier))))), getRequestUser(crc));
+    }
+
+    @GET
+    @AuthRequired
+    @Path("{identifier}/uploadlimits")
+    public Response getUploadLimits(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf,
+            @Context UriInfo uriInfo,
+            @Context HttpHeaders headers) throws WrappedResponse {
+
+        Dataset dataset;
+
+        try {
+            dataset = findDatasetOrDie(dvIdtf);
+        } catch (WrappedResponse ex) {
+            return error(Response.Status.NOT_FOUND, "No such dataset");
+        }
+
+        AuthenticatedUser user;
+        try {
+            user = getRequestAuthenticatedUserOrDie(crc);
+        } catch (WrappedResponse ex) {
+            return error(Response.Status.BAD_REQUEST, "This API call requires authentication.");
+        }
+        if (!permissionSvc.requestOn(createDataverseRequest(user), dataset).has(Permission.EditDataset)) {
+            return error(Response.Status.FORBIDDEN, "This API call requires EditDataset permission.");
+        }
+
+        JsonObjectBuilder limits = new NullSafeJsonBuilder();
+
+        // Add optional elements - storage size and file count limits, if present:
+        if (systemConfig.isStorageQuotasEnforced()) {
+            UploadSessionQuotaLimit uploadSessionQuota = fileService.getUploadSessionQuotaLimit(dataset);
+            if (uploadSessionQuota != null) {
+                limits.add("storageQuotaRemaining", uploadSessionQuota.getRemainingQuotaInBytes());
+            }
+        }
+
+        Integer effectiveFileCountLimit = dataset.getEffectiveDatasetFileCountLimit();
+
+        if (effectiveFileCountLimit != null) {
+            limits.add("numberOfFilesRemaining", effectiveFileCountLimit - datasetService.getDataFileCountByOwner(dataset.getId()));
+        }
+
+        return ok(new NullSafeJsonBuilder().add("uploadLimits", limits));
+    }
 }
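The `getUploadLimits` endpoint builds its payload from two independent sources, adding each key only when the corresponding limit is configured, and reporting file capacity as a *remaining* count. A language-neutral sketch of that assembly (the function and parameter names here are illustrative, not the Java identifiers):

```python
def build_upload_limits(storage_quota_remaining=None, file_count_limit=None, files_used=0):
    """Assemble the uploadLimits payload the way the endpoint does:
    each key is present only when the corresponding limit exists, and the
    file count is reported as remaining capacity (limit minus files owned)."""
    limits = {}
    if storage_quota_remaining is not None:
        limits["storageQuotaRemaining"] = storage_quota_remaining
    if file_count_limit is not None:
        limits["numberOfFilesRemaining"] = file_count_limit - files_used
    return {"status": "OK", "data": {"uploadLimits": limits}}

# A dataset with a 1 MiB quota remaining and 5 of 25 allowed files used:
print(build_upload_limits(storage_quota_remaining=1048576,
                          file_count_limit=25, files_used=5))
```

When neither limit applies, the result degrades to the documented empty `uploadLimits` object rather than an error, so clients can treat "no keys" as "no restrictions".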

src/main/java/edu/harvard/iq/dataverse/api/Dataverses.java

Lines changed: 11 additions & 5 deletions
@@ -1236,9 +1236,9 @@ public Response getStorageSize(@Context ContainerRequestContext crc, @PathParam(
 @GET
 @AuthRequired
 @Path("{identifier}/storage/quota")
-public Response getCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf) throws WrappedResponse {
+public Response getCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, @QueryParam("showInherited") boolean showInherited) throws WrappedResponse {
     try {
-        Long bytesAllocated = execCommand(new GetCollectionQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDataverseOrDie(dvIdtf)));
+        Long bytesAllocated = execCommand(new GetCollectionQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDataverseOrDie(dvIdtf), showInherited));
         if (bytesAllocated != null) {
             return ok(MessageFormat.format(BundleUtil.getStringFromBundle("dataverse.storage.quota.allocation"), bytesAllocated));
         }
@@ -1248,11 +1248,17 @@ public Response getCollectionQuota(@Context ContainerRequestContext crc, @PathPa
     }
 }

-@POST
+@PUT
 @AuthRequired
-@Path("{identifier}/storage/quota/{bytesAllocated}")
-public Response setCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, @PathParam("bytesAllocated") Long bytesAllocated) throws WrappedResponse {
+@Path("{identifier}/storage/quota")
+public Response setCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, String value) throws WrappedResponse {
     try {
+        Long bytesAllocated;
+        try {
+            bytesAllocated = Long.parseLong(value);
+        } catch (NumberFormatException nfe) {
+            return error(Status.BAD_REQUEST, value + " is not a valid number of bytes");
+        }
         execCommand(new SetCollectionQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDataverseOrDie(dvIdtf), bytesAllocated));
         return ok(BundleUtil.getStringFromBundle("dataverse.storage.quota.updated"));
     } catch (WrappedResponse ex) {
src/main/java/edu/harvard/iq/dataverse/engine/command/impl/DeleteDatasetQuotaCommand.java

Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
+package edu.harvard.iq.dataverse.engine.command.impl;
+
+import edu.harvard.iq.dataverse.Dataset;
+import edu.harvard.iq.dataverse.authorization.users.AuthenticatedUser;
+import edu.harvard.iq.dataverse.engine.command.AbstractVoidCommand;
+import edu.harvard.iq.dataverse.engine.command.CommandContext;
+import edu.harvard.iq.dataverse.engine.command.DataverseRequest;
+import edu.harvard.iq.dataverse.engine.command.RequiredPermissions;
+import edu.harvard.iq.dataverse.engine.command.exception.CommandException;
+import edu.harvard.iq.dataverse.engine.command.exception.IllegalCommandException;
+import edu.harvard.iq.dataverse.engine.command.exception.PermissionException;
+import edu.harvard.iq.dataverse.storageuse.StorageQuota;
+import edu.harvard.iq.dataverse.util.BundleUtil;
+import java.util.logging.Logger;
+
+/**
+ *
+ * @author landreev
+ *
+ * A superuser-only command:
+ */
+@RequiredPermissions({})
+public class DeleteDatasetQuotaCommand extends AbstractVoidCommand {
+
+    private static final Logger logger = Logger.getLogger(DeleteDatasetQuotaCommand.class.getCanonicalName());
+
+    private final Dataset targetDataset;
+
+    public DeleteDatasetQuotaCommand(DataverseRequest aRequest, Dataset target) {
+        super(aRequest, target);
+        targetDataset = target;
+    }
+
+    @Override
+    public void executeImpl(CommandContext ctxt) throws CommandException {
+        // first check if user is a superuser
+        if ((!(getUser() instanceof AuthenticatedUser) || !getUser().isSuperuser())) {
+            throw new PermissionException(BundleUtil.getStringFromBundle("dataset.storage.quota.superusersonly"),
+                    this, null, targetDataset);
+        }
+
+        if (targetDataset == null) {
+            throw new IllegalCommandException("", this);
+        }
+
+        StorageQuota storageQuota = targetDataset.getStorageQuota();
+
+        if (storageQuota != null && storageQuota.getAllocation() != null) {
+            // The method below, in dataverseServiceBean, can be used to delete
+            // quotas defined on either of the DvObjectContainer classes:
+            ctxt.dataverses().disableStorageQuota(storageQuota);
+        }
+        // ... and if no quota was enabled on the dataset - nothing to do = success
+    }
+}
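Two behaviors of `DeleteDatasetQuotaCommand` are worth noting: the superuser gate runs before anything else, and deleting a quota that was never configured is a silent success rather than an error (idempotent delete). A compact sketch of that control flow, using hypothetical stand-in objects rather than the Dataverse command machinery:

```python
from types import SimpleNamespace

def delete_dataset_quota(user, dataset, disable_quota):
    """Mirror of the executeImpl() control flow: reject non-superusers up
    front, then disable the quota only if one is actually configured.
    A missing quota is a no-op, not an error (idempotent delete)."""
    if not getattr(user, "is_superuser", False):
        raise PermissionError("This API is superuser-only.")
    quota = dataset.storage_quota
    if quota is not None and quota.allocation is not None:
        disable_quota(quota)  # delegates to the shared collection-level helper
    # no quota configured: nothing to do, still success

admin = SimpleNamespace(is_superuser=True)
ds = SimpleNamespace(storage_quota=SimpleNamespace(allocation=1048576))
removed = []
delete_dataset_quota(admin, ds, removed.append)                            # removes the quota
delete_dataset_quota(admin, SimpleNamespace(storage_quota=None), removed.append)  # no-op
```

Making the delete idempotent means a client can safely issue `DELETE .../storage/quota` without first checking whether a quota exists.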
