@@ -1,5 +1,5 @@
---
title: "Sample App"
title: "Sample app"
slug: "sample-app"
excerpt: "Set up an instance of UserClouds in less than 5 minutes!"
hidden: false
@@ -1,11 +1,12 @@
---
title: "Modeling Hierarchy with Attribute Scopes"
title: "Modeling hierarchy with attribute scopes"
slug: "modelling-hierarchy-with-attribute-types"
excerpt: ""
hidden: false
createdAt: "Thu Aug 03 2023 23:14:02 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Fri Jun 21 2024 16:55:25 GMT+0000 (Coordinated Universal Time)"
---

In this article, you will learn how to model relationships and arbitrarily deep hierarchy between <Glossary>object</Glossary>s in your system with attribute scopes. By the end of this article, you will understand the fundamentals of modeling:

- **Folder-like structures** where objects contain other objects (ad infinitum)
@@ -16,26 +17,24 @@ The article assumes you know what <Glossary>object</Glossary>, <Glossary>edge</G

## Attribute Scopes

Attributes give one object permissions on another object. Each attribute has an <Glossary>attribute name</Glossary> (like `edit`) and an <Glossary>attribute scope</Glossary>. The name describes the permission and the scope describes which two objects are affected by the attribute. There are three scopes of attribute: direct, inherit and propagate.

## Direct Attributes

The direct scope is the simplest attribute scope. It gives the source object the permission on the target object. It is used for non-hierarchical relationships.

![An edge with a view:direct attribute gives the source object view permissions on the target object.](/assets/images/Attribute_Direct.webp)


## Inherit Attributes

The inherit attribute scope states:

> Inherit: if the target <Glossary>object</Glossary> has the attribute on a third object, the source object ‘inherits’ that attribute on the third object.

**Inherit attributes are used to pass a permission from one user or group to another**. In the example below, Gloria is a member of a department that owns a particular project. This is modeled by giving the department direct view access on the project, and passing that view access to Gloria with an inherit attribute.

![Inherit attributes are used to pass permissions from one user or group to another.](/assets/images/Attribute_Inherit.webp)
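
To make the two scopes concrete, here is a minimal, self-contained sketch in plain Python (not the UserClouds SDK) that models edges carrying `direct` and `inherit` attributes and resolves whether a source object ends up with a permission on a target. The object names and the `has_permission` helper are hypothetical.

```python
# Toy model of direct and inherit attribute scopes; not the UserClouds SDK.
# Each edge: (source, target, attribute_name, scope)
edges = [
    ("dept_engineering", "project_apollo", "view", "direct"),  # the department can view the project
    ("gloria", "dept_engineering", "view", "inherit"),         # Gloria inherits the department's view attributes
]

def has_permission(source: str, target: str, attribute: str, depth: int = 10) -> bool:
    """Return True if `source` holds `attribute` on `target` directly or via inherit edges."""
    if depth == 0:  # guard against cycles in this toy model
        return False
    for src, tgt, attr, scope in edges:
        if src != source or attr != attribute:
            continue
        if scope == "direct" and tgt == target:
            return True
        # Inherit: if the intermediate object holds the attribute on the target, so does the source.
        if scope == "inherit" and has_permission(tgt, target, attribute, depth - 1):
            return True
    return False

print(has_permission("dept_engineering", "project_apollo", "view"))  # True  (direct)
print(has_permission("gloria", "project_apollo", "view"))            # True  (inherited via the department)
print(has_permission("gloria", "dept_engineering", "view"))          # False (inherit alone grants nothing on the intermediate object)
```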


## Propagate Attributes

The propagate attribute scope states:
@@ -1,11 +1,12 @@
---
title: "Quickstart Guide"
title: "Quickstart guide"
slug: "sql-shim"
excerpt: ""
hidden: false
createdAt: "Thu Jun 13 2024 13:53:04 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Thu Sep 19 2024 19:09:40 GMT+0000 (Coordinated Universal Time)"
---

## In the UserClouds Console:

1. **Create a Tenant**: Set up your tenant if you haven't already.
@@ -16,7 +17,7 @@ updatedAt: "Thu Sep 19 2024 19:09:40 GMT+0000 (Coordinated Universal Time)"

## In Your Application Codebase:

1. **Repoint Connection Strings**:
- **For SQL Proxies**: Replace the existing database URI and port with the SQL proxy host name and proxy port.
- **For NoSQL Proxies**: Replace the NoSQL connection details with the NoSQL proxy host name and port.
- **For API Proxies**: This is not yet implemented.
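
For the SQL proxy case, the hedged sketch below shows what "repointing" typically looks like in application code, using Python and `psycopg2`. All host names, ports, and credentials are placeholders; use the proxy host name and port shown for your tenant in the UserClouds Console.

```python
# Hypothetical before/after for repointing a Postgres connection at the SQL proxy.
# Host names, ports, and credentials are placeholders, not real values.
import psycopg2

# Before: the application connects directly to the database.
# conn = psycopg2.connect(
#     host="mydb.internal.example", port=5432,
#     dbname="appdb", user="app_user", password="app_password",
# )

# After: only the host and port change, pointing at the tenant's SQL proxy.
conn = psycopg2.connect(
    host="sql-proxy.mytenant.example",  # proxy host name from the UserClouds Console (placeholder)
    port=9500,                          # proxy port from the UserClouds Console (placeholder)
    dbname="appdb",
    user="app_user",
    password="app_password",
)

with conn.cursor() as cur:
    cur.execute("SELECT 1")  # queries now flow through the proxy unchanged
    print(cur.fetchone())
```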
@@ -1,11 +1,12 @@
---
title: "Browser Plug-in"
title: "Browser plug-in"
slug: "userclouds-browser-plug-in-documentation"
excerpt: ""
hidden: false
createdAt: "Thu Jun 13 2024 13:49:30 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Fri Jun 28 2024 22:21:26 GMT+0000 (Coordinated Universal Time)"
---

### What Does It Do?

The UserClouds Browser Plug-in is designed to help you minimize, control, and log data access within web applications. It can be deployed in 15 minutes with no code changes, so it is particularly useful in scenarios where:
Expand All @@ -19,11 +20,11 @@ Since the plug-in is installed locally, it is primarily aimed at internal data a

In combination with the UserClouds Proxy, the UserClouds Browser Plug-in allows you to:

- Tokenize and de-tokenize data in web applications without changing the application code.
- Control data access with fully expressive, context-aware access policies
- Log the who, when, why and how of data access

This can be achieved with minimal code changes or disruption to your development team or colleagues.

The approach helps:

@@ -43,7 +44,6 @@ The plug-in performs four core functions:

![The proxy sits between any database and application, intercepting queries to enforce access policies, log access and mask or tokenize data. (2) With no code changes, the application runs entirely on secure tokens, not sensitive data. (3) The browser plug-in resolves tokens for trusted employees, enforcing access policies and zero trust at the data level, via a single central control plane.](/assets/images/data-flow.webp)
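
As a conceptual illustration of the flow in the diagram above, the toy sketch below (plain Python, not the UserClouds product APIs) shows the round trip: the application and database handle only tokens, and a trusted resolver swaps a token back for the real value subject to an access check. All names and the policy are hypothetical.

```python
# Toy illustration of the tokenize / de-tokenize flow; not the UserClouds API.
import secrets

token_vault: dict[str, str] = {}  # token -> sensitive value (stands in for the tokenization service)

def tokenize(value: str) -> str:
    """Store the sensitive value and hand back an opaque token for the application to use."""
    token = f"tok_{secrets.token_hex(8)}"
    token_vault[token] = value
    return token

def detokenize(token: str, requester_is_trusted: bool) -> str:
    """Resolve a token back to the value only if the (hypothetical) access policy allows it."""
    if not requester_is_trusted:
        return token  # untrusted callers only ever see the token
    return token_vault.get(token, token)

email_token = tokenize("gloria@example.com")                 # what the database and app actually store
print(detokenize(email_token, requester_is_trusted=False))   # still the token
print(detokenize(email_token, requester_is_trusted=True))    # the trusted plug-in path resolves it
```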


## Quickstart Guide

### In the UserClouds Console:
@@ -1,5 +1,5 @@
---
title: "Proxy and Plug-In Implementation"
title: "Proxy and plug-in implementation"
slug: "proxy-and-plug-in-implementation"
excerpt: ""
hidden: false
53 changes: 37 additions & 16 deletions content/docs/guides/(data-storage)/data-lifecycle.mdx
@@ -1,59 +1,80 @@
---
title: "Data Lifecycle"
title: "Data lifecycle"
slug: "data-lifecycle"
excerpt: ""
hidden: false
metadata:
image: []
robots: "index"
createdAt: "Tue Aug 15 2023 20:17:32 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Fri Aug 25 2023 21:26:25 GMT+0000 (Coordinated Universal Time)"
---
import { Step, Steps } from 'fumadocs-ui/components/steps';

import { Step, Steps } from "fumadocs-ui/components/steps";

Purpose lifetimes and data deletion mechanisms play a crucial role in ensuring compliance with regulations like GDPR (General Data Protection Regulation). For example, a key principle of GDPR is storage limitation, which dictates that personal data must be kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data is processed.

This article explains the mechanisms and configurations related to data deletion in UserClouds, especially in the context of user-defined data processing purposes and purpose lifetimes. The article assumes you are familiar with:

- How <Glossary>purpose</Glossary>s are used to track, enforce and audit user <Glossary>consent</Glossary> in User Store. Learn more [here](/docs/guides/definitions/purpose-and-consent).
- How the User Store is built from <Glossary>column</Glossary>s and populated with user records. Learn more [here](/docs/manage-your-columns).
- How <Glossary>mutator</Glossary>s and <Glossary>accessor</Glossary>s are used to write data to, and retrieve data from, the store. Learn more [here](/docs/accessors-read-apis).

## Introduction

Each piece of data in UserClouds is stored in an end user record and a column. It is also associated with a set of purposes, which describe the consents the end user has given for data processing. Data can exist in UserClouds in two lifecycle states:
Each piece of data in UserClouds is stored in an end user record and a column. It is also associated with a set of purposes, which describe the consents the end user has given for data processing. Data can exist in UserClouds in two lifecycle states:
- **Live data** is data that has not been deleted or marked for deletion
- **Soft-deleted data** is data that has been marked for deletion but is retained in a recoverable state for a specified period and set of purposes (like fraud detection), before being permanently erased

Data accessors must exclusively retrieve either live data _or_ soft-deleted data. No accessor can retrieve both. Only tenant admins can create and edit accessors for soft-deleted data.

When a live piece of data changes, the old value becomes soft-deleted for the associated purposes if they have a non-zero post-deletion retention duration. Similarly, if a purpose is removed for a live piece of data, the data and purpose are soft-deleted if the post-deletion retention duration is non-zero. In either case the old value (or old purpose) will no longer be retrievable via a pre-deletion accessor.

## Configuring Purpose Lifetimes

Purpose lifetimes are set at the purpose-column level. Developers can configure two distinct time-based settings for each purpose-column pair:

- **The Pre-deletion Retention Duration** determines how long a specific purpose associated with a piece of data will be retained. Once this duration elapses, the purpose for that data expires. Once all the purposes have expired for the data, the data is soft-deleted. The clock for the purpose can be reset by re-writing the data to the store. This duration is most commonly used to reflect the Storage Limitation principle of legislation like GDPR. The default setting is indefinite, meaning the purpose will not expire and the value will remain accessible for that purpose until the data is deleted or changed, or the purpose is removed.
- **The Post-deletion Retention Duration** specifies the duration for which data should be retained in a "soft-deleted" state after a deletion event occurs. Once the Post-deletion Retention Duration elapses, the associated purpose expires. When all purposes are deleted for a specific piece of data, the data is no longer accessible by any means. This duration is most commonly used to enable account recovery and fraud/integrity investigations. The default setting is 0, meaning that old data is immediately hard-deleted when it is deleted or changed.

Any changes to pre- or post-deletion retention durations for a purpose apply only to newly written data. Retention timeouts for existing data are not retroactively changed when the retention duration for a column or purpose is updated; only data written after the update uses the new duration.
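
To make these two settings concrete, here is a small, self-contained sketch in plain Python (not the UserClouds configuration API) that computes when a purpose would expire and when a soft-deleted value would finally be erased, given hypothetical retention durations for one purpose-column pair.

```python
# Toy model of purpose lifetimes for a single purpose-column pair.
# Durations and names are hypothetical; this is not the UserClouds API.
from datetime import datetime, timedelta

PRE_DELETION_RETENTION = timedelta(days=365)       # purpose expires a year after the last write
POST_DELETION_RETENTION = timedelta(days=3 * 365)  # soft-deleted data is kept three more years

last_written = datetime(2024, 1, 1)

# The pre-deletion clock restarts on every re-write of the value with this purpose.
purpose_expires = last_written + PRE_DELETION_RETENTION

# A deletion event (value update, value/column/user deletion) starts the post-deletion clock.
deleted_at = datetime(2024, 6, 1)
hard_delete_at = deleted_at + POST_DELETION_RETENTION

print(f"Purpose would expire:        {purpose_expires:%Y-%m-%d}")
print(f"Soft-deleted copy erased by: {hard_delete_at:%Y-%m-%d}")
```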

## Deletion Process Flow

<Steps>
<Step>End user data is saved to the store with associated purposes.</Step>
<Step>Pre-deletion Retention Duration countdown begins for each associated purpose.</Step>
<Step>If data is updated with the same purpose before Pre-deletion Retention Duration elapses, the purpose retention clock is reset.</Step>
<Step>If the pre-deletion retention duration elapses, the data is no longer visible for that purpose as live, pre-delete data.</Step>
<Step>Deletion event occurs (e.g. a value update, a value deletion, a column deletion or a user deletion) triggering the Post-deletion Retention Duration.</Step>
<Step>Once Post-deletion Retention Duration elapses, the associated purpose is deleted.</Step>
<Step>When all purposes are deleted for a piece of data, the data is no longer accessible by any means.</Step>
<Step>
Pre-deletion Retention Duration countdown begins for each associated
purpose.
</Step>
<Step>
If data is updated with the same purpose before Pre-deletion Retention
Duration elapses, the purpose retention clock is reset.
</Step>
<Step>
If the pre-deletion retention duration elapses, the data is no longer
visible for that purpose as live, pre-delete data.
</Step>
<Step>
Deletion event occurs (e.g. a value update, a value deletion, a column
deletion or a user deletion) triggering the Post-deletion Retention
Duration.
</Step>
<Step>
Once Post-deletion Retention Duration elapses, the associated purpose is
deleted.
</Step>
<Step>
When all purposes are deleted for a piece of data, the data is no longer
accessible by any means.
</Step>
</Steps>

## Example

- Data: Email address
- User Consents:
- `Marketing` (Pre-deletion Retention: 6 months, Post-deletion Retention: 0 days)
- `FraudAndIntegrity` (Pre-deletion Retention: 1 year, Post-deletion Retention: 3 years)
- If data is re-written with a new `Marketing` or `FraudAndIntegrity` consent within 1 year, the clock for that consent resets.
@@ -62,5 +83,5 @@ Any changes to pre- or post-deletion retention durations for a purpose only appl
- At this point, the data can no longer be accessed for marketing purposes.
- The data is retained in a soft-deleted state for 3 years for `FraudAndIntegrity` purposes
- During this time, it can only be accessed by accessors with the `FraudAndIntegrity` purpose, which are specifically configured by a tenant admin to access soft-deleted data
- After 3 years, the `FraudAndIntegrity` consent is deleted.
- At this point, since this data has no consents for data processing attached to it, it is permanently and irrecoverably deleted from the store.
@@ -1,11 +1,12 @@
---
title: "Demo Video"
title: "Demo video"
slug: "demo-video"
excerpt: "This video covers what tokenization is and why to use it, as well as how to manage your tokenization policies in the UserClouds Console and UserClouds API."
hidden: true
createdAt: "Thu Aug 03 2023 21:54:59 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Wed Jun 19 2024 17:13:12 GMT+0000 (Coordinated Universal Time)"
---

<iframe
src="https://www.loom.com/embed/88666408b15f4ff3b55b0753e4cc9155"
title="Tokenizer Demo - Google Slides - 5 December 2022"
@@ -1,54 +1,55 @@
---
title: "Docker Set-up Guide"
title: "Docker set-up guide"
slug: "docker-set-up-guide"
excerpt: ""
hidden: false
createdAt: "Tue Jul 30 2024 18:43:55 GMT+0000 (Coordinated Universal Time)"
updatedAt: "Tue Jul 30 2024 19:05:55 GMT+0000 (Coordinated Universal Time)"
---

> 🚧 Note: because this configuration provides no monitoring, automatic restarts, failover, etc., it is intended ONLY for development, test, and CI environments. Do not use it in production, or outages and data loss may occur.

This guide provides step-by-step instructions on deploying the UserClouds Docker container on an Amazon EC2 instance. This setup is useful for environments where you need to manage the Docker container lifecycle manually.

## Prerequisites
### Prerequisites:

- AWS Account
- AWS CLI configured
- Docker installed locally

Steps
### Steps:

1. **Launch an EC2 Instance**
1. **Log in to the AWS Management Console.**
1. **Log in to the AWS Management Console**
2. **Launch an Instance:**
1. Navigate to EC2 Dashboard and click "Launch Instance".
1. Navigate to EC2 Dashboard and click "Launch Instance"
2. Choose an Amazon Machine Image (AMI). For this guide, we will use the Amazon Linux 2 AMI.
3. Select an instance type. A t2.micro instance is sufficient for testing, but choose according to your needs.
4. Configure other instance details as required.
5. Configure security groups:
1. Allow SSH (port 22) from your IP address.
2. Allow HTTP (port 80) and HTTPS (port 443)
3. **Review and Launch:**
1. Review your instance settings and click "Launch".
2. Select an existing key pair or create a new one to access your instance.
3. Click "Launch Instances".
1. Review your instance settings and click "Launch"
2. Select an existing key pair or create a new one to access your instance
3. Click "Launch Instances"
4. **Connect to Your Instance:**
1. Once the instance is running, click "Connect" and follow the instructions to SSH into your instance.
1. Once the instance is running, click "Connect" and follow the instructions to SSH into your instance
2. **Install Docker on the EC2 Instance**
1. Update the Installed Packages: `sudo yum update -y`
2. Install Docker: `sudo amazon-linux-extras install docker -y`
3. Start the Docker Service: `sudo service docker start`
4. Add the ec2-user to the Docker Group: `sudo usermod -a -G docker ec2-user`
5. Log Out and Log Back In (to ensure your user permissions are updated)
3. **Pull the UserClouds Docker Image**
1. Reach out to your UserClouds point of contact to obtain the Docker image and any necessary credentials for accessing the Docker registry where the image is hosted.
1. Reach out to your UserClouds point of contact to obtain the Docker image and any necessary credentials for accessing the Docker registry where the image is hosted
4. **Run the UserClouds Docker Container**
1. Run the Docker Container: `docker run -d --name userclouds-container -p 80:80 name-goes-here`
1. Replace `name-goes-here` with the name of the UserClouds Docker image.
1. Replace `name-goes-here` with the name of the UserClouds Docker image
2. Adjust the port mapping (`-p 80:80`) as needed.
5. **Verify the Deployment**
1. Check Running Containers: `docker ps`
2. Access Your Application: Open a web browser and navigate to the public IP address of your EC2 instance. You should see the UserClouds application running.
2. Access Your Application: Open a web browser and navigate to the public IP address of your EC2 instance. You should see the UserClouds application running
6. **Manual Lifecycle Management**
1. Since the Docker container will not automatically restart if the EC2 instance is terminated, you need to manage the lifecycle manually. Here are some commands to help:
1. Stop the Docker Container: `docker stop userclouds-container`