diff --git a/.gitbook/assets/data-cleaning-workflow.png b/.gitbook/assets/data-cleaning-workflow.png deleted file mode 100644 index 2d22fe8f..00000000 Binary files a/.gitbook/assets/data-cleaning-workflow.png and /dev/null differ diff --git a/.gitbook/assets/image (1).png b/.gitbook/assets/image (1).png deleted file mode 100644 index e2acf9c5..00000000 Binary files a/.gitbook/assets/image (1).png and /dev/null differ diff --git a/.gitbook/assets/image.png b/.gitbook/assets/image.png deleted file mode 100644 index d6bf4461..00000000 Binary files a/.gitbook/assets/image.png and /dev/null differ diff --git a/.gitbook/assets/remote.png b/.gitbook/assets/remote.png deleted file mode 100644 index f60296f8..00000000 Binary files a/.gitbook/assets/remote.png and /dev/null differ diff --git a/.gitbook/assets/simple-workflow.png b/.gitbook/assets/simple-workflow.png deleted file mode 100644 index ec03510d..00000000 Binary files a/.gitbook/assets/simple-workflow.png and /dev/null differ diff --git a/README.md b/README.md index cc48adfa..8b694edc 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,2 @@ -# Welcome to Documentation - -## Introduction - -Learn how to integrate with Lucidtech's APIs here. 
+# Initial page diff --git a/SUMMARY.md b/SUMMARY.md index 1cb98330..5da1733d 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -1,50 +1,4 @@ # Table of contents -* [Welcome to Documentation](README.md) - -## Getting Started - -* [Introduction](introduction/README.md) - * [Documents](introduction/documents.md) - * [Predictions](introduction/predictions.md) - * [Transitions and Workflows](introduction/transitions_and_workflows.md) - * [Assets and Secrets](introduction/assets_and_secrets.md) - * [Logs](introduction/logs.md) - * [Models](introduction/models.md) - * [Batches and Consents](introduction/batches_and_consents.md) -* [Quickstart](getting-started/dev/README.md) - * [Using the CLI](getting-started/dev/cli.md) - * [Python](getting-started/dev/python.md) - * [JavaScript](getting-started/dev/js.md) - * [.NET](getting-started/dev/net.md) - * [Java](getting-started/dev/java.md) - -* [Tutorials](tutorials/README.md) - * [Setup workflow](tutorials/setup_predict_and_approve.md) - * [Setup data cleaning workflow](tutorials/data_cleaning.md) - * [Setup approve view](tutorials/setup_approve_view.md) - * [Setup docker transition](tutorials/create_your_own_docker_transition.md) - * [Docker samples](docker-image-samples/README.md) -* [Authentication](authentication/README.md) -* [Quotas](quotas/README.md) -* [FAQ](getting-started/faq.md) -* [Help](getting-started/help.md) - -## Data Training - -* [Custom Data Training](data-training/data-training.md) -* [What is Confidence?](data-training/confidence.md) - -## Reference - -* [Rest API](reference/restapi/README.md) - * [latest](reference/restapi/latest/README.md) -* [Python SDK](reference/python/README.md) - * [latest](reference/python/latest.md) -* [.NET SDK](reference/dotnet/README.md) - * [latest](reference/dotnet/latest.md) -* [JavaScript SDK](reference/js/README.md) - * [latest](reference/js/latest.md) -* [Java SDK](reference/java/README.md) - * [latest](reference/java/latest.md) +* [Initial page](README.md) diff --git 
a/authentication/README.md b/authentication/README.md index 9fa83f21..93537a3b 100644 --- a/authentication/README.md +++ b/authentication/README.md @@ -1,36 +1,38 @@ ## Authenticating to Lucidtech -Lucidtech APIs require you to authenticate using the OAuth2 [protocol](https://tools.ietf.org/html/rfc6749). Our SDKs -will typically handle authentication for you but should you wish to use the REST API, you would need to do this -yourself. Here is a brief introduction to get you started +Lucidtech APIs require you to authenticate using the [OAuth2 protocol](https://tools.ietf.org/html/rfc6749). Our SDKs +will typically handle authentication for you, but should you wish to use the REST API (TTNote: Consider a link here), you would need to do this +yourself. Here is a brief introduction to get you started. #### Credentials -You should already have acquired a client id, client secret and api key before continuing. The client id and client -secret will be used to get an access token from the auth endpoint and the api key will be used together with the +You should already have acquired a *client id*, *client secret* and *api key* before continuing. (TTNote: Consider info here on what to do if you don't +have these items.) The *client id* and *client +secret* will be used to get an access token from the *auth endpoint*, and the *api key* will be used together with the access token to authorize to the API. -Unless specified otherwise in the credentials file you have received, the endpoint for authentication is -https://auth.lucidtech.ai and the endpoint for the API is https://api.lucidtech.ai +Unless specified otherwise in the credentials file that you have received (TTNote: Is this received upon purchase of the API? +If not consider info on how to obtain the file), the endpoint for authentication is +https://auth.lucidtech.ai and the endpoint for the API is https://api.lucidtech.ai.
#### Getting an access token -To acquire an access token we need to ask the auth endpoint with our client id and client secret for access. This is -done by performing a HTTP POST request to the token endpoint /oauth2/token with two headers provided. One header -should be 'Authorization' with base64 encoded client_id and client secret and one header should be 'Content-Type' which -will always contain the same value 'application/x-www-form-urlencoded'. +To acquire an access token, we need to ask the *auth endpoint* for access using our *client id* and *client secret*. This is +done by performing an HTTP POST request to the token endpoint /oauth2/token with two headers included. One header +should be *Authorization* with the base64-encoded *client_id* and the *client secret*. The other header should be *Content-Type*, which +will always contain the same value: 'application/x-www-form-urlencoded'. | Header name | Header value | | ----------- | ------------------------------------------- | | Authorization | Basic Base64Encode(client_id:client_secret) | | Content-Type | application/x-www-form-urlencoded | -Read more about Base64Encode [here](https://en.wikipedia.org/wiki/Basic_access_authentication#Client_side) +You can read more about Base64Encode [here](https://en.wikipedia.org/wiki/Basic_access_authentication#Client_side). -Since we are dealing with 'client_credentials' we need to specify this in the url as a query parameter. The final URL -to make the request to is https://auth.lucidtech.ai/oauth2/token?grant_type=client_credentials +Since we are working with `client_credentials`, we need to specify this in the URL as a query parameter. The final URL +to make the request to is https://auth.lucidtech.ai/oauth2/token?grant_type=client_credentials. -Here is an example getting access token using curl in bash.
+Here is an example of obtaining the access token using curl in bash: ```bash $ credentials=":" @@ -38,7 +40,7 @@ $ base64_encoded_credentials=`echo -n $credentials | base64 -w 0` $ curl -X POST https://auth.lucidtech.ai/oauth2/token?grant_type=client_credentials -H "Content-Type: application/x-www-form-urlencoded" -H "Authorization: Basic $base64_encoded_credentials" ``` -If everything is working as expected, the response should look similar to the following +If everything is working as expected, the response should look like this: ```json { @@ -49,15 +51,16 @@ If everything is working as expected, the response should look similar to the fo ``` {% hint style="info" %} -The access token will expire after some time, currently after 3600 seconds (1 hour). When the token expires +The access token will expire after some time, currently after 3600 seconds (1 hour). When the token expires, you will need to get a new access token using the same procedure. {% endhint %} #### Calling the API -Upon successfully acquiring access token from previous step, we are ready to call the API! To do that we need to -provide two headers to the API. One header 'x-api-key' with our api key and one header 'Authorization' with the -newly acquired access token. +Upon successfully acquiring the access token from the previous step, you are ready to call the API. To do this, you need to +provide two headers to the API. The first header will be *x-api-key*, which will contain the api key, and the other header will be *Authorization*, +which will contain the +newly acquired access token: | Header name | Header value | | ----------- | ------------------------------------- | @@ -72,16 +75,15 @@ $ curl https://api.lucidtech.ai/v1/documents -H "x-api-key: $api_key" -H "Author #### Using an SDK -Our SDKs will handle acquiring access token for you. The only thing you need to do is put the credentials -in a file in the correct location on your computer and the SDK will discover them.
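The `Authorization` header construction described above can also be sketched in code. Here is a minimal Python example of building the Basic header from the headers table; the client id and secret values are hypothetical placeholders, and any HTTP client (e.g. `requests`) would then send the POST to the token endpoint:

```python
import base64

def basic_auth_header(client_id: str, client_secret: str) -> str:
    # Build "Basic Base64Encode(client_id:client_secret)" as in the headers table above
    raw = f"{client_id}:{client_secret}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

# Headers for POST https://auth.lucidtech.ai/oauth2/token?grant_type=client_credentials
# (the client id and secret below are hypothetical placeholders)
token_headers = {
    "Authorization": basic_auth_header("my-client-id", "my-client-secret"),
    "Content-Type": "application/x-www-form-urlencoded",
}
```

This mirrors the bash example: `echo -n $credentials | base64` produces the same encoded value that follows the `Basic ` prefix here.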
The credentials file should -be placed on the following location based on the OS you are running +Our SDKs will acquire the access token for you. +Simply enter the credentials into the credentials.cfg file in the location shown below based on your OS, and the SDK will auto-discover them: | Operating System | Location | | ---------------- | ----------------------------------------------------------------------------- | | Linux/Mac | ~/.lucidtech/credentials.cfg or $HOME/.lucidtech/credentials.cfg | | Windows | %USERPROFILE%\.lucidtech\credentials.cfg or %HOME%\.lucidtech\credentials.cfg | -The credentials.cfg file should look like the following +The credentials.cfg file should look like the following: ```ini [default] diff --git a/data-training/confidence.md b/data-training/confidence.md index a7a1e162..4f34bffb 100644 --- a/data-training/confidence.md +++ b/data-training/confidence.md @@ -6,16 +6,16 @@ description: What you need to know about confidence ## End-to-end confidence -Every field the model extracts has a corresponding confidence value. The confidence is different from a traditional OCR confidence in that it does not only estimates the probability that the characters are interpreted correctly, but also that it has extracted the correct information (e.g. the total amount and not the VAT amount). +Every field that the model extracts has a corresponding confidence value. The confidence is different from a traditional OCR confidence in that it does not only estimate the probability that the characters are interpreted correctly, but also that it has extracted the correct information (e.g. the total amount and not the VAT amount). ![The figure shows example predictions together with confidence values](confidence1.png) # End-to-end confidence increases automation -## You can trust that the model is correct when it says so. 
+## You can trust that the model is correct when it says so When the confidence of a prediction is above a given threshold, the field can be hidden from the human validator. -This ensures that that only fields that the AI is uncertain about will be manually inspected, while the rest of the fields are fully automated. This means that users will save time and cost by not having to validate high-confidence predictions! +This ensures that only fields that the model is uncertain about will be manually inspected, while the rest of the fields are fully automated. This means that users will save time and cost by not having to validate high-confidence predictions. ![The figure shows how confidence can be used to automate validation of data extraction](confidence2.png) diff --git a/data-training/data-training.md b/data-training/data-training.md index 807aa7ea..a0640fed 100644 --- a/data-training/data-training.md +++ b/data-training/data-training.md @@ -6,48 +6,50 @@ description: Getting started with custom data training. ![Data Training](https://lucidtech.ai/assets/img/illustrations/data-training.png) -Lucidtech offers APIs for document data extracting. The core technology is a general machine learning architecture which can be used to interpret a wide range of document types, including invoices, receipts, ID-documents, purchase orders or virtually any other type of document. +Lucidtech offers APIs for document data extraction. The core technology is a general machine learning architecture which can be used to interpret a wide range of document types, including invoices, receipts, ID documents, purchase orders and virtually any other type of document. -To make sure that our API provides optimal accuracy we train our models on your data. We use supervised learning for training our machine learning models. This means that the algorithms learn by observing thousands of examples of documents together with their ground truth.
The goal of the training process is that Lucidtech's models learn to produce the correct output for new and previously unseen documents. +To make sure that our API provides optimal accuracy, we train our models on your data. We use supervised learning for training our machine learning models. This means that the algorithms learn by observing thousands of document examples together with their ground truth. The goal of the training process is that Lucidtech's models learn to produce the correct output for new and previously unseen documents. ## 1. Data requirements ### Volume -The amount of data needed to create a high quality model depends on the expected variation of the data as well as the quality of the training data. As a general rule of thumb we require at least 10 000 documents when training a new model, but 30 000+ documents is recommended for an optimal result. When the API is deployed in production, the _feedback endpoints_ should be used to enable continuous training on new data. +The amount of data needed to create a high-quality model depends on the expected variation of the data as well as the quality of the training data. As a general rule of thumb, we require at least 10,000 documents when training a new model, but 30,000+ documents is recommended for an optimal result. When the API is deployed in production, the _feedback endpoints_ should be used to enable continuous training on new data. ### Representative data -The training data should be representative for the expected data. For example, if the expected data consists of invoices from thousands of different vendors, then the training data should not only consist of invoices from five different vendors. +The training data should be representative of the expected data. For example, if the expected data consists of invoices from thousands of different vendors, then the training data should not consist of invoices from only five different vendors.
{% hint style="success" %} -A good way to select representative training data can be to choose data randomly from your database or document archive. +A good way to select representative training data is to choose data randomly from your database or document archive. {% endhint %} ### Correctness of data -Incorrect or missing ground truth information can be detrimental to the training process. For this reason it is important that the training data is as accurate as possible. +Incorrect or missing ground truth information can be detrimental to the training process. For this reason, it is important that the training data be as accurate as possible. ### Consistency -Ground truth data should adhere to a common format. For example, when extracting dates, all ground truth dates should be listed on the same date format regardless of how the date appears in the document. Examples of inconsistencies: +Ground truth data should adhere to a common format. For example, when extracting dates, all ground truth dates should be listed in the same date format, regardless of how the date appears in the document. Examples of inconsistencies: * The same date is written as 17.05.18 in one ground truth file and as 17th of May, 2018 in another. * Different conventions are used to denote amounts, e.g. 1200.00, 1,200.00 and 1200. {% hint style="info" %} -Consistency is only required in the ground truth data. The corresponding information as written on the actual documents in the data set may be on arbitrary formats. +Consistency is only required in the ground truth data. The corresponding information as written on the actual documents in the data set may use arbitrary formats. {% endhint %} ## 2. Data preparation ### Deciding what to extract
For an ID document it can be first name, last name, id-number and nationality. For a travel ticket it can be price, departure date, arrival date, seat number and mean of transportation. Which data fields you want to extract is up to you to decide. We generally recommend to keep it as simple as possible. In particular, avoid adding fields that you will not use, and make sure that the majority of the data you provide contain the fields you specify. +The first step is to decide which data fields you want to extract from your documents. For an invoice, this can be total amount, due date and bank account, or it can also be only total amount. For an ID document, it can be first name, last name, id number and nationality. For a travel ticket, it can be price, departure date, arrival date, seat number and means of transportation. + +Which data fields you want to extract is up to you to decide. We generally recommend keeping it as simple as possible. In particular, avoid adding fields that you will not use, and make sure that the majority of the data that you provide contain the fields you specify. ### Every document needs a ground truth -To start training your custom model you need pairs of documents and their corresponding ground truth. The ground truth is the information you want to extract from the document. Note that every single document needs its own ground truth file. +To start training your custom model, you need pairs of documents (TTNote: Unclear if you mean 'pairs of documents' or simply 'documents paired with their corresponding ground truth) and their corresponding ground truth. The ground truth is the information that you want to extract from the document. Note that every single document needs its own ground truth file. 
![Ground Truth](https://lucidtech.ai/assets/img/illustrations/illustration-10.png) diff --git a/docker-image-samples/README.md b/docker-image-samples/README.md index 5952f7f6..1ac557e7 100644 --- a/docker-image-samples/README.md +++ b/docker-image-samples/README.md @@ -1,6 +1,6 @@ # Docker Image Samples -Docker images are the essence of an automatic transition; -they are the building blocks of a workflow. +Docker images are the essence of an automatic transition. +They are the building blocks of a workflow. ## Introduction @@ -9,18 +9,22 @@ or as a starting point for a customized step in a workflow. ## Getting started -To make a workflow that consist of the samples in this folder -there is no need to dwell here. Just check out the -[tutorials](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/README.md) +To make a workflow that consists of the samples in this folder, +there is no need to dwell here; just check out our +[tutorials](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/README.md). +(TTNote: Consider if the above link should point to gitbook not github.) ## Sample images -* make-predictions: Get predictions from Lucidtechs world class OCR-models -* feedback-to-model: Make sure the OCR-models stays state-of-the-art by feeding corrected results back to the model -* export-to-semine: One of many standard integration modules. +(TTNote: Should these be images or links to images?) +* make-predictions: Get predictions from Lucidtech's world-class OCR models +* feedback-to-model: Make sure the OCR models stay state-of-the-art by feeding corrected results back to the model +* export-to-semine: One of many standard integration modules -## For developers -When updating an image we use the following repo and naming convention: + +## Naming convention +(TTNote: If all of the documentation is for developers, suggestion to relabel this header.)
+When updating an image, we use the following repository and naming convention: ``` name=lucidtechai/transition-samples: docker build . -t $name && docker push $name -``` \ No newline at end of file +``` diff --git a/getting-started/dev/README.md b/getting-started/dev/README.md index 9638210a..b8fe3fd1 100644 --- a/getting-started/dev/README.md +++ b/getting-started/dev/README.md @@ -1,2 +1,4 @@ # Quickstart +(TTNote: Suggestion to add some verbiage here to note these are quick start guides depending on the API method used and that more details can be found under the Introduction sections.) + diff --git a/getting-started/dev/cli.md b/getting-started/dev/cli.md index c20335bf..5406ca68 100644 --- a/getting-started/dev/cli.md +++ b/getting-started/dev/cli.md @@ -2,7 +2,7 @@ ## Installation -Install the CLI via the Python package manager [pip](https://pip.pypa.io/en/stable/) +Install the CLI via the Python package manager [pip](https://pip.pypa.io/en/stable/): ```bash >> $ pip install lucidtech-las-cli @@ -10,7 +10,7 @@ Install the CLI via the Python package manager [pip](https://pip.pypa.io/en/stab ## Make a prediction on a document -List models that are available for predictions +First, list models that are available for predictions: ```bash >> $ las models list { @@ -24,7 +24,7 @@ List models that are available for predictions } ``` -Upload a document +Next, upload a document: ```bash >> $ las documents create invoice.pdf { @@ -33,7 +33,7 @@ Upload a document } ``` -Run inference on the document using a model +Finally, run inference on the document using a model: ```bash >> $ las predictions create las:document: las:model: { @@ -46,8 +46,11 @@ Run inference on the document using a model ## Set ground truth of document -When uploading data that will be used for training and evaluation, we need to provide a ground truth. +When uploading data that will be used for training and evaluation, we need to provide a ground truth. 
+(TTNote: Consider removing this note on why ground truth is needed as this was mentioned earlier, and/or perhaps add a link here +to the relevant section under 'documents.') We can then use the optional parameters `--ground-truth-path` or `--ground-truth-fields`. +(TTNote: Consider whether these parameters should be mentioned in the earlier 'documents' section.) ```bash >> $ las documents create invoice.pdf --ground-truth-path ground_truth.json @@ -59,16 +62,17 @@ We can then use the optional parameters `--ground-truth-path` or `--ground-truth ] } ``` -In this case the `ground_truth.json` should be on the following format +In this case, the `ground_truth.json` file should use the following format: ```json { "total_amount": "299.00", "due_date": "2020-03-20" } ``` + ### Update an existing document -If for instance a prediction reveals incorrect values in the ground truth of a document, -we can update the existing document with new ground truth values. +If a prediction reveals incorrect values in the ground truth of a document, +we can update the existing document with new ground truth values: ```bash >> $ las documents update las:document: --ground-truth-fields total_amount=300.00 due_date=2020-02-28 { @@ -79,12 +83,14 @@ we can update the existing document with new ground truth values. } ``` -## Create a document with consent id +## Create a document with a consentId {% hint style="info" %} -Consent ID is an identifier you can assign to documents to keep track of document ownership for your customers. +ConsentId is an identifier that you can assign to documents to keep track of document ownership for your customers. {% endhint %} +(TTNote: Consider a link here to the 'consents' section for more information.)
+ ```bash >> $ las documents create invoice.pdf --consent-id las:consent: { @@ -95,6 +101,9 @@ Consent ID is an identifier you can assign to documents to keep track of documen ``` ## Get document and download document content +(TTNote: Consider some verbiage here to explain the steps we are seeing in the example below, or to point out items that might need clarification.) + +(TTNote: Should invoice2.pdf below match invoice.pdf mentioned above?) ```bash >> $ las documents create invoice.pdf --consent-id las:consent: @@ -112,9 +121,10 @@ Consent ID is an identifier you can assign to documents to keep track of documen } ``` -## Revoking consent and deleting documents +## Revoke consent and delete documents -Suppose we wish to delete all documents associated with a customer in our ERP database or other systems. We need to provide a consent\_id to the prediction method that uniquely identifies the customer and use that consent\_id to delete documents. +To delete all documents associated with a customer in your ERP or other systems, first provide the `consentId` (which uniquely identifies the customer) to the prediction method, then use that `consentId` to delete the documents: +(TTNote: Consider a link here to the batches and consents details.) ```bash >> $ las consents delete las:consent: @@ -126,9 +136,10 @@ Suppose we wish to delete all documents associated with a customer in our ERP da } ``` -## Create a batch and associate a few documents with it +## Create a batch and associate documents with it Creating a batch is a way to group documents. This is useful for specifying batches of documents to use in improving the model later. +(TTNote: Consider a link here to the batches and consents details.) 
```bash >> $ las batches create diff --git a/getting-started/dev/java.md b/getting-started/dev/java.md index f27c8528..fc020d45 100644 --- a/getting-started/dev/java.md +++ b/getting-started/dev/java.md @@ -1,7 +1,9 @@ -# Java +# Using Java [API reference](../../reference/java/latest.md) +(TTNote: Consider if an 'Installation' section is needed here for consistency, or if the below section is in lieu of that.) + ## Create a client to talk to the API ```java @@ -16,7 +18,8 @@ Credentials credentials = new Credentials( client = new Client(credentials); ``` -## Upload a document to get a document id +## Upload a document to get a documentId +(TTNote: For consistency, consider including 'upload' as a subsection under the heading of 'make predictions' with verbiage as in the CLI (i.e. 'first list, next upload, finally run inference...') Or alternately, consider whether all other pages should match the syntax here instead.) ```java public void createDocument() throws IOException, APIException, MissingAccessTokenException { @@ -33,6 +36,7 @@ public void createDocument() throws IOException, APIException, MissingAccessToke ## Make a prediction on a document Suppose we wish to run inference on a document using one of the available models. +(TTNote: Also for consistency with CLI, consider if a `list` (or in this case `listModel`) is needed first and whether the description should be updated to match.) ```java public void createPrediction() throws IOException, APIException, MissingAccessTokenException { @@ -44,13 +48,17 @@ public void createPrediction() throws IOException, APIException, MissingAccessTo ``` {% hint style="info" %} -See what models you have available and their model id by using the method `listModels()` +See what models are available along with their `modelId` by using the method `listModels()` {% endhint %} +(TTNote: For consistency, consider if this info window should be part of the steps as in the CLI instead, i.e.
'first list models, next upload, finally run inference') ## Set ground truth of document -When uploading data that will be used for training and evaluation, we need to provide a ground truth. +When uploading data that will be used for training and evaluation, we need to provide a ground truth: + +(TTNote: Consider referencing more detailed section on ground truth from 'documents') + ```java File file = new File("myReceipt.pdf"); InputStream content = new FileInputStream(file); @@ -62,24 +70,23 @@ JSONObject document = this.client.createDocument(content, ContentType.PDF, optio ``` ### Update an existing document -If for instance a prediction reveals incorrect values in the ground truth of a document, -we can update the existing document with new ground truth values. +If a prediction reveals incorrect values in the ground truth of a document, +we can update the existing document with new ground truth values: ```java JSONArray groundTruth = new JSONArray(); groundTruth.put(new JSONObject(){{ put("label", "totalAmount"); put("value", "199.00"); }}); groundTruth.put(new JSONObject(){{ put("label", "dueDate"); put("value", "2020-03-20"); }}); JSONObject document = this.client.updateDocument("las:document:", groundTruth); ``` -## Set ground truth of document - -Suppose we make a prediction that returns incorrect values and we wish to improve the model for future use. -We can do so by sending groundTruth to the model, telling it what the expected values should have been. +(TTNote: Consider if 'Create a document with a consentId', 'Get document and download document content', 'Revoke consent and delete documents' sections +are needed for consistency.) -## Create a batch and associate a few documents with it +## Create a batch and associate documents with it Creating a batch is a way to group documents. -This is useful for specifying batches of documents to use in improving the model later.
+This is useful for specifying batches of documents to use in improving the model later: +(TTNote: Consider a link here to the batches and consents details.) ```java public void createBatch() throws IOException, APIException, MissingAccessTokenException { diff --git a/getting-started/dev/js.md b/getting-started/dev/js.md index 257cee92..dca0243d 100644 --- a/getting-started/dev/js.md +++ b/getting-started/dev/js.md @@ -1,6 +1,6 @@ -# JavaScript +# Using JavaScript -## Use browser version +## Install browser version ```text npm install --save las-sdk-browser @@ -14,7 +14,7 @@ const credentials = new AuthorizationCodeCredentials('', '' const client = new Client(credentials); ``` -## Use node version +## Install node version ```text npm install --save las-sdk-node @@ -31,6 +31,7 @@ const client = new Client(credentials); ## Make a prediction on a document Suppose we wish to run inference on a document using Lucidtech’s invoice model. +(TTNote: For consistency with CLI, consider if a `list` is needed first. Then consider a description to replace the above line such as: 'Upload a document then run inference using the invoice model:') ```javascript const { documentId } = await client.createDocument('', '', { consentId: '' }); @@ -40,7 +41,10 @@ console.log(predictions); ## Set ground truth of document -When uploading data that will be used for training and evaluation, we need to provide a ground truth. +When uploading data that will be used for training and evaluation, we need to provide a ground truth: + +(TTNote: Consider referencing more detailed section on ground truth from 'documents') + ```javascript const groundTruth = [ { 'label': 'total_amount', 'value': '240.01' }, @@ -51,8 +55,8 @@ const { documentId } = await client.createDocument('', '', groundTruth); ``` {% hint style="info" %} -Providing ground truth is a necessary to re-train a model whether the model got it right or wrong. 
So always provide +Providing ground truth is necessary to retrain a model, whether the model was right or wrong. So always provide the ground truth if it is available. {% endhint %} -## Create a document with consent id +## Create a document with a consentId {% hint style="info" %} -Consent ID is an identifier you can assign to documents to keep track of document ownership for your customers. +ConsentID is an identifier that you can assign to documents to keep track of document ownership for your customers. {% endhint %} +(TTNote: Consider a link here to the 'consents' section for more information.) + ```javascript const { documentId } = await client.createDocument('', '', { consentId: '' }); ``` -## Create a batch and associate a few documents with it +(TTNote: Consider if sections 'get document and download document content' and 'revoke consent and delete documents' are needed here for consistency.) + + +## Create a batch and associate documents with it Creating a batch is a way to group documents. This is useful for specifying batches of documents to use in improving the model later. +(TTNote: Consider a link here to the batches and consents details.) ```javascript const { batchId } = await client.createBatch(batchDescription); diff --git a/getting-started/dev/net.md b/getting-started/dev/net.md index dd28b46c..6050e1bf 100644 --- a/getting-started/dev/net.md +++ b/getting-started/dev/net.md @@ -1,9 +1,13 @@ -# .NET +# Using .NET + +(TTNote: Consider if 'installation' section is needed.) ## Make a prediction on a document Suppose we wish to run inference on a document using Lucidtech’s invoice model. +(TTNote: For consistency with CLI, consider if a `list` is needed first. 
Then consider a description to replace the above line such as: 'Upload a document then run inference using the invoice model:') + ```cs using Lucidtech.Las; @@ -15,7 +19,10 @@ var response = client.CreatePrediction(documentId, modelId); ## Set ground truth of document -When uploading data that will be used for training and evaluation, we need to provide a ground truth. +When uploading data that will be used for training and evaluation, we need to provide a ground truth: + +(TTNote: Consider referencing more detailed section on ground truth from 'documents') + ```cs using Lucidtech.Las; @@ -27,21 +34,22 @@ var groundTruth = new List>() }; var response = client.CreateDocument(content, "image/jpeg", groundTruth: groundTruth); ``` + ### Update an existing document -If for instance a prediction reveals incorrect values in the ground truth of a document, -we can update the existing document with new ground truth values. +If a prediction reveals incorrect values in the ground truth of a document, +we can update the existing document with new ground truth values: ```cs var response = client.UpdateDocument("las:document:", groundTruth: groundTruth); ``` - -var response = client.UpdateDocument(documentId: "", groundTruth: groundTruth); -## Create a document with consent id +## Create a document with a consentId {% hint style="info" %} -Consent ID is an identifier you can assign to documents to keep track of document ownership for your customers. +ConsentId is an identifier that you can assign to documents to keep track of document ownership for your customers. {% endhint %} +(TTNote: Consider a link here to the 'consents' section for more information.) + ```cs using Lucidtech.Las; @@ -50,9 +58,12 @@ byte[] body = File.ReadAllBytes("invoice.pdf"); var response = client.CreateDocument(body, "application/pdf", ""); ``` -## Revoking consent and deleting documents +(TTNote: Consider if 'Get document and download document content' is needed here for consistency.) 
-Suppose we wish to delete all documents associated with a customer in our ERP database or other systems. We need to provide a consent\_id to the prediction method that uniquely identifies the customer and use that consent\_id to delete documents. +## Revoke consent and delete documents + +To delete all documents associated with a customer in your ERP or other systems, first provide the `consentId` (which uniquely identifies the customer) to the prediction method, then use that `consentId` to delete documents. +(TTNote: Consider a link here to the batches and consents details.) ```cs using Lucidtech.Las; @@ -60,3 +71,5 @@ using Lucidtech.Las; Client client = new Client(); var response = client.DeleteDocuments(consentId: ""); ``` + +(TTNote: Consider if 'create a batch and associate documents with it' section is needed here for consistency.) diff --git a/getting-started/dev/python.md b/getting-started/dev/python.md index c9a62809..5397f228 100644 --- a/getting-started/dev/python.md +++ b/getting-started/dev/python.md @@ -1,8 +1,8 @@ -# Python +# Using Python ## Installation -Install the package via the Python package manager [pip](https://pip.pypa.io/en/stable/) +Install the package via the Python package manager [pip](https://pip.pypa.io/en/stable/): ```bash >> $ pip install lucidtech-las @@ -10,7 +10,8 @@ Install the package via the Python package manager [pip](https://pip.pypa.io/en/ ## Make a prediction on a document -Suppose we wish to run inference on a document using Lucidtech’s invoice model. +Suppose we wish to run inference on a document using Lucidtech’s invoice model: +(TTNote: For consistency with CLI, consider if a `list` is needed first. 
Then consider a description to replace the above line such as: 'Upload a document then run inference using the invoice model:') ```python from las import Client @@ -24,7 +25,10 @@ print(prediction) ## Set ground truth of document -When uploading data that will be used for training and evaluation, we need to provide a ground truth. +When uploading data that will be used for training and evaluation, we need to provide a ground truth: + +(TTNote: Consider referencing more detailed section on ground truth from 'documents') + ```python from las import Client @@ -37,23 +41,26 @@ document = client.create_document('invoice.pdf', 'application/pdf', ground_truth ``` ### Update an existing document -If for instance a prediction reveals incorrect values in the ground truth of a document, -we can update the existing document with new ground truth values. +If a prediction reveals incorrect values in the ground truth of a document, +we can update the existing document with new ground truth values: ```python document = client.update_document('las:document:', ground_truth=ground_truth) ``` {% hint style="info" %} -Providing ground truth is a necessary to re-train a model whether the model got it right or wrong. So always provide +Providing ground truth is necessary to retrain a model, whether the model was right or wrong. So always provide the ground truth if it is available. {% endhint %} +(TTNote: Consider replicating this note in all other sections for consistency.) -## Create a document with consent id +## Create a document with a consentId {% hint style="info" %} -Consent ID is an identifier you can assign to documents to keep track of document ownership for your customers. +ConsentID is an identifier that you can assign to documents to keep track of document ownership for your customers. {% endhint %} +(TTNote: Consider a link here to the 'consents' section for more information.) 
+ ```python from las import Client @@ -61,9 +68,12 @@ client = Client() document = client.create_document('invoice.pdf', 'application/pdf', consent_id='las:consent:') ``` -## Revoking consent and deleting documents +(TTNote: Consider if a section 'get document and download document content' is needed here for consistency.) + +## Revoke consent and delete documents -Suppose we wish to delete all documents associated with a customer in our ERP database or other systems. We need to provide a consent\_id to the prediction method that uniquely identifies the customer and use that consent\_id to delete documents. +To delete all documents associated with a customer in your ERP or other systems, first provide the `consentId` (which uniquely identifies the customer) to the prediction method, then use that `consentId` to delete the documents. +(TTNote: Consider a link here to the batches and consents details.) ```python from las import Client @@ -74,10 +84,11 @@ document = client.create_document('invoice.pdf', 'application/pdf', consent_id=c client.delete_documents(consent_id=consent_id) ``` -## Create a batch and associate a few documents with it +## Create a batch and associate documents with it Creating a batch is a way to group documents. This is useful for specifying batches of documents to use in improving the model later. +(TTNote: Consider a link here to the batches and consents details.) ```python from las import Client diff --git a/getting-started/faq.md b/getting-started/faq.md index 3719586d..37071978 100644 --- a/getting-started/faq.md +++ b/getting-started/faq.md @@ -1,14 +1,17 @@ # FAQ -## Do you have plans to support SDKs for language X? +## Do you have plans to support SDKs in other languages? -We provide SDKs on request for commonly used programming languages. Please send us a request to hello@lucidtech.ai. +We provide SDKs upon request for commonly used programming languages. 
If you use a language that you would like +us to support, please send us a request at hello@lucidtech.ai. (TTNote: The email address appears as a mailto link in github, but does not appear as +such in another browser like Safari, consider whether this needs updating.) ## What documents do Lucidtech's APIs understand? -We currently have an invoice API for norwegian invoices available for testing, but customized APIs matching the customer's need. +We currently have an invoice API for Norwegian invoices available for testing, but we can provide customized APIs matching the customer's needs. See [link](../data-training/data-training.md) for more information. +(TTNote: Consider if link should go to gitbook instead of github) ## What file formats do Lucidtech's APIs understand? @@ -17,5 +20,5 @@ We have restricted the support to the following file formats: * image/jpeg * application/pdf -The restriction is for security reasons to limit the attack surface on commonly used image decoders. +The restriction is for security reasons to limit the attack surface of commonly used image decoders. diff --git a/introduction/README.md b/introduction/README.md index 16c8045a..5a421948 100644 --- a/introduction/README.md +++ b/introduction/README.md @@ -2,5 +2,5 @@ ## Introduction -Learn about the key concepts in Lucidtech's APIs here. +Learn about the key concepts of Lucidtech's APIs here. diff --git a/introduction/assets_and_secrets.md b/introduction/assets_and_secrets.md index 2e55c81c..4065a8db 100644 --- a/introduction/assets_and_secrets.md +++ b/introduction/assets_and_secrets.md @@ -1,36 +1,41 @@ # *Assets* and *Secrets* - - An *Asset* can be any kind of resource you want to use across your transitions. - It is flexible, easy to use and can be modified at any moment. + - An *Asset* can be any kind of resource that you want to use across your transitions. + It is flexible, easy to use and can be modified at any time.
- A *Secret* is a set of key-value pairs that are stored safely and used as environment variables or docker credentials. - It is not possible to peek on a Secret, but it can be changed at any moment. + Note: It is not possible to view the contents of a *Secret*, but it can be changed at any time. -Assets and *Secrets* are necessary components when designing Transitions and Workflows. +*Assets* and *Secrets* are necessary components when designing Transitions and Workflows. +(TTNote: Consider moving this sentence to the top just below the heading on this page.) An *Asset* can be a list of customers that are used for cross-reference, -or a remote-component for your own custom UI. +or a remote component for your own custom UI. Any file or piece of information can be stored as an *Asset* and is the recommended way to store non-secret information that you want to configure. -Whether you want to call an external API from one of the automatic transitions, -or pull a docker image from a private repository *Secrets* is the safe and recommended way to do this. -Instead of bundling along environment files with your docker image you can use Secrets. +A *Secret* is a safe and recommended way to call an external API from one of the automatic transitions, +or to pull a docker image from a private repository. +Instead of bundling environment files along with your docker image, you can use *Secrets*. This way your credentials are not only safe, -but they can be utilized in other docker images, and modified whenever required. +but they can be utilized in other docker images and modified as required. ## Working with *Assets* -Since *Assets* are often used as components in transitions it is wise to create your *Assets* first. -If you realize that you need to change it later that is no problem, so don't be afraid of early mistakes.
-Say we have a list of companies we want to use in out workflow; +Since *Assets* are often used as components in transitions, it is recommended to create your *Assets* first. +You can update *Assets* later if needed, so don't be afraid of early mistakes. +(TTNote: Consider resequencing the Assets and Secrets section to be located before the Transition section, since it is recommended to complete this step first.) + +Perhaps you have a list of companies that you want to use in your workflow. You can follow this example below: ```commandline >> las assets create companies.json --name companies --description 'A list of approved companies' ``` -You have now created an *Asset*! If you want to see all your *Assets* you can use *list*, -which should look like this; +You have now created an *Asset*! + +If you want to see all of your *Assets*, you can use *list*, +which should look like this: ```commandline >> las assets list @@ -47,21 +52,25 @@ which should look like this; ``` -The assetId for each *Asset* is the way to reference it when for instance creating a transition. +Note: Use the `assetId` to reference the *Asset* later as needed, for example, when creating a transition. + +Note: An *Asset* can be downloaded, updated and deleted as needed. -An *Asset* can be downloaded, updated and deleted as suits the user. ## Working with *Secrets* -Secrets behave in a similar fashion as Assets, but they are restricted to key-value pairs and are therefore +*Secrets* behave in a similar fashion as *Assets*, but they are restricted to key-value pairs and are therefore suited for credentials and environment variables. -Say you want to store the credentials to your private docker repository; + +Perhaps you want to store the credentials to your private docker repository. You can follow this example below: ```commandline >> las secrets create username=foo password=bar --name credentials --description 'docker credentials to my private repo' ``` -You have now created an *Secret*! 
If you want to see all your *Secrets* you can use *list*, -which should look like this; +You have now created a *Secret*! + +If you want to see all of your *Secrets*, you can use *list*, +which should look like this: ```commandline >> las secrets list @@ -77,7 +86,7 @@ which should look like this; } ``` -The secretId for each *Secret* is the way to reference it when for instance creating a transition. +Note: Use the `secretId` to reference the *Secret* later as needed, for example, when creating a transition. -A *Secret* cannot be downloaded for inspection, but can be modified by using update. +Note: A *Secret* cannot be downloaded for inspection, but it can be updated. diff --git a/introduction/batches_and_consents.md b/introduction/batches_and_consents.md index 7c54c7f7..1cfd2501 100644 --- a/introduction/batches_and_consents.md +++ b/introduction/batches_and_consents.md @@ -1,15 +1,13 @@ # *Batches* and *Consents* - - A *Batch* is a way to group your documents. The main purpose of grouping documents together in a batch - is that they can later be used to train a model. A document can only belong to one batch. + - A *Batch* is a way to group your documents so they can later be used to train a model. Note: A document can only belong to one batch. - - A *Consent* is another way of grouping your documents. The main purpose of grouping documents - together under the same *consentId* is that they can be removed at the same time. - Typically used to make it easier to execise GDPR right to be forgotten. A document can only - have one *consentId*. + - A *Consent* is another way of grouping your documents, but here the primary purpose is to group together customer data, rather than to train a model. +With *Consents*, the data is grouped under the same `consentId`, which makes it easier to remove all of the data for a given customer in order to exercise their rights under GDPR. +Note: A document can only have one `consentId`.
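The grouping rules described above (each document belongs to at most one batch and one `consentId`, and only consents are used for deletion) can be sketched with a small, purely hypothetical in-memory model. This is illustrative Python, not the real SDK; all names here are assumptions for the sketch:

```python
# Toy in-memory sketch of batch/consent grouping semantics (not the real SDK).
# Each document has at most one batchId and at most one consentId.
documents = {}

def create_document(document_id, batch_id=None, consent_id=None):
    # A document records its single batch and single consent.
    documents[document_id] = {"batchId": batch_id, "consentId": consent_id}

def list_documents(batch_id=None, consent_id=None):
    # Both groupings can be used when listing documents.
    return [
        doc_id for doc_id, doc in documents.items()
        if (batch_id is None or doc["batchId"] == batch_id)
        and (consent_id is None or doc["consentId"] == consent_id)
    ]

def delete_documents(consent_id):
    # Only the consentId is used for deletion (e.g. a GDPR erasure request).
    for doc_id in list_documents(consent_id=consent_id):
        del documents[doc_id]

create_document("doc1", batch_id="las:batch:1", consent_id="las:consent:alice")
create_document("doc2", batch_id="las:batch:1", consent_id="las:consent:bob")
delete_documents("las:consent:alice")
print(list_documents(batch_id="las:batch:1"))  # → ['doc2']
```

Deleting by consent removes every document a customer owns while leaving the rest of the batch intact, which is the behavior the CLI commands below rely on.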
-## Create and list your batches -When creating a batch it is recommended to provide a name and a description +## Create and list batches +When creating a batch, it is recommended to provide a name and a description, as in the example below: ```commandline >> las batches create --name train --description "documents for training a new model" @@ -22,7 +20,7 @@ When creating a batch it is recommended to provide a name and a description } ``` -Too see what batches already exists and how many documents belong in them you can use *list* +Use *list* to see what batches already exist and how many documents belong to each one: ```commandline >> las batches list @@ -41,27 +39,32 @@ ``` +## Create and list consents + +(TTNote: Unclear what an endpoint in the comment below is referencing, but the assumption is that the commands above for batches are not yet available for consents, so suggesting a 'header' for consents here so that this note doesn't look like it belongs to batches.)
+ {% hint style="info" %} -There is no endpoint for *consents* yet, so please feel free to invent your own on the format `las:consent:` +There is no endpoint for *consents* yet, so please feel free to invent your own using the format `las:consent:` {% endhint %} -## Listing and deleting documents -Both *Batches* and *Consents* can be used as input when listing documents; +## List and delete grouped documents + +Both *Batches* and *Consents* can be used as input when listing documents: ```commandline >> las documents list --batch-id las:batch: >> las documents list --consent-id las:consent: ``` -But only *consents* can be used as input when deleting documents; +Only *Consents* can be used as input when deleting documents: ```commandline >> las documents delete --consent-id las:consent: ``` -See more examples on how to use *batches* and *consents* in the -[documents page](./documents.md) +See more examples on how to use *batches* and *consents* in +[Documents](./documents.md). diff --git a/introduction/documents.md b/introduction/documents.md index b482899d..21a8ed1b 100644 --- a/introduction/documents.md +++ b/introduction/documents.md @@ -1,15 +1,22 @@ # *Documents* - - A *Document* can be a .pdf or a .jpeg file along with some meta information. - -Lucidtech delivers services that helps you control and automate the flow -of your documents, and a *Document* is therefore an important concept, and in this -introduction you will see how a *Document* can be created, controlled and used together with + - A *Document* can be a .pdf or a .jpeg file along with some meta information. (TTNote: Reference note below about clarity on terminology.) + +(TTNote: Consider moving this bullet to be displayed after the first paragraph below.) + +Lucidtech delivers services that help you control and automate the flow +of your documents. Therefore, the *Document* is an important component of the API. 
In this +introduction, you will see how a *Document* can be created, controlled and used together with *Batches*, *Consents*, *Predictions*, and *Models*. -# Creating a *Document* -The simplest way to create a *Document* is to use the CLI -and use the path of the PDF or JPEG that you would like to upload. +## Creating a *Document* +The simplest way to create a *Document* is to use the command line interface (CLI) +and specify the path of the PDF or JPEG file that you would like to upload. See the example below: + +(TTNote: Consider a link for the CLI text to reference how to install/use the CLI, perhaps to https://docs.lucidtech.ai/getting-started/dev/cli ) + +(TTNote: Suggest some clarity between 'creating' vs. 'uploading' a document and/or clarity between use of lowercase 'document' and uppercase '*Document*'. It seems the pdf/jpg file (which is referenced in the first bullet as 'a *Document*') is already 'created' in the file system, and here it's being 'uploaded' into the API. Both are referenced as '*Document*'. Perhaps the pdf/jpg is the source file (and should be referenced as lowercase 'document' or simply 'file'), and it is used to create the '*Document*' item/component in the API?) + ```commandline >> las documents create path/to/my/document.pdf { @@ -17,12 +24,16 @@ and use the path of the PDF or JPEG that you would like to upload. "contentType": "application/pdf" } ``` -Use this `documentlId` along with a `modelId` to make a prediction on the document. -See [predictions](./predictions.md) for more details. +Note: You will use this `documentId` along with a `modelId` to make a prediction on the document. +See [Predictions](./predictions.md) for more details. + + +## Grouping Documents with *Batches* or *Consents* + +### *Batches* -### *Batches* and *Consents* -Now let's say you have several documents that you want to group together -with a purpose of constructing a dataset for training a model.
This is where *Batches* enter the picture +You can use batches when you have several documents that you want to group together +for the purpose of constructing a dataset for training a model. See the example below on how to set up a batch: ```commandline >> las batches create --name train --description "documents for training a new model" { @@ -40,21 +51,34 @@ with a purpose of constructing a dataset for training a model. This is where *Ba "batchId": "las:batch:84ed1bb2d2634072bd3134274ed56ebe" } ``` -The exact same can be done for *Consents*, -but the purpose is to separate customers data rather that grouping them together for a training purposes. + +### *Consents* + +The same grouping can be done for *Consents*, +but the purpose of *Consents* is to separate customer data rather than to group them together for training purposes. + +(TTNote: Suggest an example be added here for Consents.) For more information on *batches* and *consents* see the page on [batches and consents](./batches_and_consents.md). +(TTNote: Consider combining these items above with the 'Batches and Consents' section (https://docs.lucidtech.ai/getting-started/introduction/batches_and_consents) so that the grouping information is all together.) + + ## Attaching *Ground Truth* to a document + +(TTNote: Since 'model' is referenced in this section, consider resequencing the 'model' section to somewhere earlier than this section.) + +(TTNote: Consider whether this 'ground truth' section should be a separate section in the menu bar under 'Documents' since it's an important component.) + +In order to train or evaluate a model, we first need to define a ground truth for each document. See our tutorial on [data training](https://docs.lucidtech.ai/data-training/data-training) for more details. -The ground truth of a document can be provided as additional info when we create it, or it can be appended afterwards.
-Either way the syntax is pretty much the same: +The ground truth of a document can be provided as additional information when we create the document, or it can be appended afterwards. +Either way, the syntax is the same: ```commandline >> las documents create path/to/document.pdf --fields amount=100.00 due_date='2021-05-20' >> las documents update --fields amount=100.00 due_date='2021-05-20' ``` -By providing this information we are able to train our models by comparing our predictions to the ground truth. +By providing this information, we are able to train our models by comparing our predictions to the ground truth. diff --git a/introduction/logs.md b/introduction/logs.md index b2f3fb12..d27b6c3b 100644 --- a/introduction/logs.md +++ b/introduction/logs.md @@ -1,12 +1,19 @@ #*Logs* -- Why did the *execution* of my docker *transition* fail? +- Why did the execution of my *docker transition* fail? -All docker *transitions* that are executed will be provided with a `logId` that contains the logs from -the *execution* of the docker image that corresponds to that *transition*. +(TTNote: Do logs exist for manual transitions or only docker transitions?) + +All executed docker *transitions* will be provided with a `logId` which references the log that was generated from +the execution of that *transition*. The logs provide all relevant details about the execution, including any error messages received. This can be very useful when debugging if something went wrong. -Using the CLI the logs can be printet in a readable format by using `get`; +(TTNote: I hope I maintained the correct intent here with this rephrasing.) + +Using the CLI, the logs can be printed in a readable format by using `get`: + +(TTNote: Is this command for printing or only displaying?)
+ ```commandline >> las logs get las:log: --pretty ``` diff --git a/introduction/models.md b/introduction/models.md index f3633354..2315d33d 100644 --- a/introduction/models.md +++ b/introduction/models.md @@ -1,16 +1,18 @@ # *Models* - - A *Model* is a machine learning model that is able to extract information from your documents. +(TTNote: Consider moving this section to an earlier section as it is referenced in earlier pages.) + + - A *Model* is a machine learning tool that is able to extract information from your documents. -Depending on the problem that needs to be solved a *Model* can be trained accordingly. -Whether it is extracting date and total amount from a receipt, specific payment information from an invoice, -name and age from an ID-card Lucidtech is able to provide an adequate machine learning model +Depending on the problem that needs to be solved, a *Model* can be trained accordingly. +Whether it is extracting date and total amount from a receipt, specific payment information from an invoice, +or name and age from an ID card, Lucidtech is able to provide an adequate machine learning model that can be accessed via *Models*. -Request a model by contacting [support@lucidtech.ai](mailto:support@lucidtech.ai), -or by clicking [Get Started](https://lucidtech.ai) on our home page. +You can request a model by contacting us at [support@lucidtech.ai](mailto:support@lucidtech.ai), +or by clicking *Get Started* on our [home page](https://lucidtech.ai). -See what models that are available by using `models list` available via the CLI; +See what models are available by using `models list` via the CLI: ```commandline >> las models list { @@ -27,5 +29,5 @@ See what models that are available by using `models list` available via the CLI; } ``` -Use the `modelId` when requesting a prediction to use a specific model. See [predictions](./predictions.md) +Note: Use the `modelId` when requesting a prediction to use a specific model.
See [Predictions](./predictions.md) for more details. diff --git a/introduction/predictions.md b/introduction/predictions.md index e2cc9c45..c15dc787 100644 --- a/introduction/predictions.md +++ b/introduction/predictions.md @@ -1,8 +1,11 @@ # *Predictions* - - A *Prediction* is a prediction made by a *Model* on a *Document* + - The *Prediction* component of the API is a prediction made by a *Model* on a *Document* -After a *Model* have been trained and a *Document* have been created a *Prediction* can be made. +After a *Model* has been trained and a *Document* has been created, a *Prediction* can be made as shown in the example below: + +(TTNote: Again, since 'model' is referenced in this section, consider resequencing the 'models' section (https://docs.lucidtech.ai/getting-started/introduction/models) to be listed above this one. Consider adding links to each section as well.) + ```commandline >> las predictions create las:document:fe1cfdb5de254bb68d7c2763d3861baf las:model:03bce190cf4343efb3eb5065bd999844 { @@ -36,7 +39,7 @@ After a *Model* have been trained and a *Document* have been created a *Predicti } ``` -Common for all *Predictions* is that they have a list of dictionaries on the form +Note: Common to all *Predictions* is that they have a list of dictionaries in the format shown below: ``` { "label": "", @@ -45,12 +48,16 @@ Common for all *Predictions* is that they have a list of dictionaries on the for } ``` -By delivering `confidence` with every prediction the user can make decisions on whether to use or discard the result + +Because a `confidence` is delivered with every prediction, you can make decisions on whether to use or discard the results depending on the accuracy required for a specific case. + +In the example above, the confidence reveals that the `due_date` is likely to be wrong, while the `bank_account` is very likely to be correct.
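That decision logic can be illustrated with a small, hypothetical Python helper. The threshold value and helper name are assumptions for this sketch, not part of the API; the field dictionaries follow the `label`/`value`/`confidence` format shown above:

```python
def split_by_confidence(fields, threshold=0.9):
    """Partition predicted fields by whether their confidence meets a threshold."""
    accepted = [f for f in fields if f["confidence"] >= threshold]
    needs_review = [f for f in fields if f["confidence"] < threshold]
    return accepted, needs_review

# Example values shaped like the prediction output above
fields = [
    {"label": "bank_account", "value": "123456789", "confidence": 0.98},
    {"label": "due_date", "value": "2021-05-20", "confidence": 0.45},
]
accepted, needs_review = split_by_confidence(fields)
print([f["label"] for f in accepted])      # → ['bank_account']
print([f["label"] for f in needs_review])  # → ['due_date']
```

High-confidence fields could then flow straight into an automated step, while low-confidence fields are routed to a manual approval transition.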
-*Predictions* can also be listed, which can be useful for making statistics or inspecting the performance of the model. + + +*Predictions* can also be listed, which can be useful for gathering statistics or inspecting the performance of the model: ```commandline >>las predictions list { diff --git a/introduction/transitions_and_workflows.md b/introduction/transitions_and_workflows.md index 3e1d3be5..1c4d118f 100644 --- a/introduction/transitions_and_workflows.md +++ b/introduction/transitions_and_workflows.md @@ -1,47 +1,50 @@ # Transitions and Workflows - A *Transition* is a manual or automatic task that forms the building block of a *Workflow*. - - A *Workflow* is a set of steps combined to a finite state machine. + - A *Workflow* is a set of steps combined to form a finite state machine. -By taking advantage of Transitions and Workflows in Lucidtech's API +By taking advantage of *Transitions* and *Workflows* in Lucidtech's API, you can easily structure a complex flow for your documents. -A Transition can be either a manual processing step or an automatic operation, and the workflow can therefore +A *Transition* can be either a manual processing step or an automatic operation. Therefore, the workflow can combine the power of AI and automation with necessary manual tasks. -## Manual Transition +## Manual Transitions A manual step is often needed for approval of partly automated tasks, or tasks that are not yet automated. -A manual transition can be created in the following manner using Lucidtech's CLI. +A manual transition can be created using the example below: +(TTNote: It's okay to reference the CLI here, just removed it for consistency with previous sections.) ```commandline las transitions create manual params.json --name Approve --description 'manual approval of invoices' ``` -See the tutorials-section for more information on what to put in the params.json file.
+See the tutorials section (TTNote: Consider link here perhaps to https://docs.lucidtech.ai/getting-started/tutorials) for more details on what to include in the params.json file. -## Docker transitions -An automated task does not require any human interaction and requires only a simple docker image. +## Docker Transitions +An automated task requires no human intervention, only a simple docker image. This makes it easy to test your automated task locally and update it in your favorite docker repository. + +Once you have your preferred docker image and the necessary credentials to run it, you can create the docker transition using the example below: +(TTNote: Suggest clarification on what to do if you don't have the necessary credentials.) ```commandline las transitions create docker params.json --name Predict --description 'automatic prediction of invoices' ``` -See the tutorials-section for more information on what to put in the params.json file. +See the tutorials section (TTNote: Again, consider same link as above) for more details on what to include in the params.json file. ## Workflow -When you have all necessary transitions ready you can combine them into one workflow, -for this example we will make the following workflow. +Once you have all necessary transitions created, you can combine them into one workflow. +For this example, we will make the following workflow: ![Workflow](../.gitbook/assets/simple-workflow.png) -To achieve this we need to make a workflow-specification, this is a json-file written in a limited version of +To achieve this, we need to make a workflow specification, which is a json file written in a limited version of [Amazon States Language](https://states-language.net/spec.html) (ASL).
-Just make sure that the resource in each step corresponds to the *transitionId* for the transition you want to use. +When creating the json file, be sure that the resource in each step corresponds to the *transitionId* for the transition that you want to use. + An example of a workflow specification for a simple workflow with one manual and one automatic step would look something like this: diff --git a/quotas/README.md b/quotas/README.md index 38e877d8..01b39631 100644 --- a/quotas/README.md +++ b/quotas/README.md @@ -1,7 +1,8 @@ ## Quotas -When using Lucidtech APIs you might encounter limits in how many total resources you may create and how many you may -create every month. If you want to increase this limit please contact us at [support@lucidtech.ai](mailto:support@lucidtech.ai). +When using Lucidtech APIs, you might encounter limits in how many total resources you can create and how many (TTNote: Unclear if there is a +word missing here. First is 'how many total resources', second is 'how many __'?) you can +create every month. If you want to increase these limits, please contact us at [support@lucidtech.ai](mailto:support@lucidtech.ai). #### Current default quotas diff --git a/reference/dotnet/README.md b/reference/dotnet/README.md index 75592ad4..1be956da 100644 --- a/reference/dotnet/README.md +++ b/reference/dotnet/README.md @@ -1,8 +1,13 @@ # .NET ## Installation + +(TTNote: Consider if Installation should be moved or added to Quickstart section.) + +(TTNote: Consider resequencing the .net section to below Javascript to match quickstart section sequence.) + The .NET SDK for Lucidtech AI Services (LAS) can be downloaded from -[nuget](https://www.nuget.org/packages/Lucidtech.Las) +[nuget](https://www.nuget.org/packages/Lucidtech.Las). 
```commandline $ dotnet add package Lucidtech.Las @@ -11,7 +16,7 @@ $ dotnet add package Lucidtech.Las ## Getting started After Lucidtech.Las is installed and you have received credentials, -you are ready to enhance your document-flow with the las client: +you are ready to enhance your document flow with the LAS client: ```cs using System; using Lucidtech.Las; @@ -21,11 +26,12 @@ var models = client.ListModels(); var documents = client.ListDocuments(); var workflows = client.ListWorkflows(); ``` -If you are new to LAS we recommend you to check out the [key concepts](../../introduction) + +- If you are new to LAS, we recommend that you check out the [key concepts](../../introduction) for a better understanding of what is possible with LAS. -If you are in the need of explicit examples on how to create complex workflows, -check out the [tutorials](../../tutorials) +- If you would like explicit examples on how to create complex workflows, +check out our [tutorials](../../tutorials). -The .NET SDK is open-source, and the code can be found [here](https://github.com/LucidtechAI/las-sdk-net). +The .NET SDK is open source, and the code can be found [here](https://github.com/LucidtechAI/las-sdk-net). Contributions are more than welcome. diff --git a/reference/dotnet/latest.md b/reference/dotnet/latest.md index 78018523..a45ebc51 100644 --- a/reference/dotnet/latest.md +++ b/reference/dotnet/latest.md @@ -6,6 +6,14 @@ `namespace `[`Lucidtech::Las::Core`](#a00022) | `namespace `[`Lucidtech::Las::Utils`](#a00023) | +(TTNote: Consider if summary index should contain classes as well for easy reference.) + +(TTNote: The a00021/22/23 links above, along with links of this style throughout this document, do not appear to work to jump to the appropriate namespace or section.) + +(TTNote: Consider additional separators between each module for clarity when reading; the appropriate places for those separators are not apparent.)
+ + + # namespace `Lucidtech::Las` ## Summary @@ -45,7 +53,7 @@ `public object `[`ListPredictions`](#a00043_1aa07c60058c89b9d2464ec8ccd2037a18)`(int? maxResults,string? nextToken)` | List predictions available, calls the GET /predictions endpoint. `public object `[`ListLogs`](#a00043_1a48c31f9df10d39e5f6303032572c946d)`(string? transitionId,string? transitionExecutionId,string? workflowId,string? workflowExecutionId,int? maxResults,string? nextToken)` | List logs, calls the GET /logs endpoint. `public object `[`ListModels`](#a00043_1a2a5979f62ac58a13cdd2fce28c174508)`(int? maxResults,string? nextToken)` | List models available, calls the GET /models endpoint. -`public object `[`CreateSecret`](#a00043_1ac9ee5b8c1cedfd849aa258bccdcd1de9)`(Dictionary< string, string > data,Dictionary< string, string?>? attributes)` | Creates an secret, calls the POST /secrets endpoint. +`public object `[`CreateSecret`](#a00043_1ac9ee5b8c1cedfd849aa258bccdcd1de9)`(Dictionary< string, string > data,Dictionary< string, string?>? attributes)` | Creates a secret, calls the POST /secrets endpoint. `public object `[`ListSecrets`](#a00043_1a4bf28ad750cf50ad0f6e0d8a3558f69f)`(int? maxResults,string? nextToken)` | List secrets available, calls the GET /secrets endpoint. `public object `[`UpdateSecret`](#a00043_1a881282cf8a8cc3618b25a25c64c7feeb)`(string secretId,Dictionary< string, string >? data,Dictionary< string, string?>? attributes)` | Updates a secret, calls the PATCH /secrets/secretId endpoint. `public object `[`DeleteSecret`](#a00043_1af74cb1bf2068af164bdc42acc033f012)`(string secretId)` | Delete a secret, calls the DELETE /secrets/{secretId} endpoint. @@ -346,7 +354,7 @@ A deserialized object that can be interpreted as a Dictionary with the fields co Create a batch handle, calls the POST /batches endpoint. -Create a new batch with the provided description. 
on the document specified by '' +Create a new batch with the provided description on the document specified by '' ```cpp Client client = new Client(); var response = client.CreateBatch("Data gathered from the Mars Rover Invoice Scan Mission"); @@ -358,7 +366,7 @@ var response = client.CreateBatch("Data gathered from the Mars Rover Invoice Sca * `description` A brief description of the purpose of the batch #### Returns -A deserialized object that can be interpreted as a Dictionary with the fields batchId and description. batchId can be used as an input when posting documents to make them a part of this batch. +A deserialized object that can be interpreted as a Dictionary with the fields batchId and description. The batchId can be used as an input when posting documents to make them part of this batch. #### `public object `[`DeleteBatch`](#a00043_1afca5b7b95d5c60661417e824b7b8d898)`(string batchId,bool deleteDocuments)` @@ -467,7 +475,7 @@ JSON object with two keys: #### `public object `[`CreateSecret`](#a00043_1ac9ee5b8c1cedfd849aa258bccdcd1de9)`(Dictionary< string, string > data,Dictionary< string, string?>? attributes)` -Creates an secret, calls the POST /secrets endpoint. +Creates a secret, calls the POST /secrets endpoint. ```cpp Client client = new Client(); @@ -759,7 +767,7 @@ Transition execution response from REST API #### `public object `[`SendHeartbeat`](#a00043_1a4d93ff7210887e14489f679963e38d25)`(string transitionId,string executionId)` -Send heartbeat for a manual execution, calls the POST /transitions/{transitionId}/executions/{executionId}/heartbeats endpoint. +Send heartbeat for a manual execution, calls the POST /transitions/{transitionId}/executions/{executionId}/heartbeats endpoint. (TTNote: Does a note need to be added here about the heartbeat being needed once every 60 seconds: 'Note: This must be done, at minimum, once every 60 seconds or the transition execution will time out.' ?)
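On the 60-second heartbeat requirement discussed above: a client can keep a manual execution alive with a plain background timer. This is a sketch only, not part of any Lucidtech SDK — the `send_heartbeat` callback is assumed to wrap whatever heartbeat call the client exposes (e.g. the `SendHeartbeat` method documented here):

```python
import threading

def keep_alive(send_heartbeat, interval=30.0):
    """Invoke send_heartbeat every `interval` seconds (well under the
    60-second execution timeout) until the returned event is set."""
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            send_heartbeat()     # e.g. lambda: client.SendHeartbeat(tid, eid)
            stop.wait(interval)  # sleep, but wake immediately when stopped

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

Calling `stop.set()` on the returned event ends the loop once the manual step finishes.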
```cpp Client client = new Client(); @@ -862,6 +870,9 @@ User response from REST API Creates a new workflow, calls the POST /workflows endpoint. Check out [Lucidtech](#a00020)'s tutorials for more info on how to create a workflow. +(TTNote: Link to a00020 above for tutorials is not working.) + + ```cpp Client client = new Client(); var specification = new Dictionary{ @@ -1069,7 +1080,7 @@ var response = client.UpdateWorkflowExecution("", "", * `executionId` Id of the execution -* `nextTransitionId` The next transition to transition into, to end the workflow-execution, use: las:transition:commons-failed +* `nextTransitionId` The next transition to transition into; to end the workflow execution, use: las:transition:commons-failed (TTNote: Unclear why ending the workflow is mentioned here, or whether it relates to 'nextTransitionId' or to 'las:transition:commons-failed') #### Returns WorkflowExecution response from REST API @@ -1270,7 +1281,7 @@ class Lucidtech::Las::Core::InvalidCredentialsException : public Lucidtech.Las.Core.ClientException ``` -An [InvalidCredentialsException](#a00051) is raised if api key, access key id or secret access key is invalid. +An [InvalidCredentialsException](#a00051) is raised if the API key, access key ID, or secret access key is invalid.
## Summary @@ -1336,7 +1347,7 @@ A list of the responses from a prediction #### `public `[`Prediction`](#a00071_1ad2683829a91fd8809e00aeb35c412901)`(string documentId,string consentId,string modelName,List< Dictionary< string, object >> predictionResponse)` -Constructor of s [Prediction](#a00071) object +Constructor of a [Prediction](#a00071) object #### Parameters * `documentId` The id of the document used in the prediction @@ -1442,12 +1453,12 @@ A [TooManyRequestsException](#a00055) is raised if you have reached the number o Members | Descriptions --------------------------------|--------------------------------------------- -`class `[`Lucidtech::Las::Utils::FileType`](#a00083) | Help determine the type of a file, inspired by pythons `imghdr.what()`. -`class `[`Lucidtech::Las::Utils::JsonSerialPublisher`](#a00087) | A Json publishes that allows the user to serialize and deserialize back and forth between serialized json objects and deserialized general objects and specific Dictionaries. +`class `[`Lucidtech::Las::Utils::FileType`](#a00083) | Help determine the type of a file, inspired by Python's `imghdr.what()`. +`class `[`Lucidtech::Las::Utils::JsonSerialPublisher`](#a00087) | A JSON publisher that allows the user to serialize and deserialize back and forth between serialized JSON objects and deserialized general objects and specific Dictionaries. # class `Lucidtech::Las::Utils::FileType` -Help determine the type of a file, inspired by pythons `imghdr.what()`. +Help determine the type of a file, inspired by Python's `imghdr.what()`. ## Summary @@ -1464,7 +1475,7 @@ class Lucidtech::Las::Utils::JsonSerialPublisher : public IDeserializer ``` -A Json publishes that allows the user to serialize and deserialize back and forth between serialized json objects and deserialized general objects and specific Dictionaries.
+A JSON publisher that allows the user to serialize and deserialize back and forth between serialized JSON objects and deserialized general objects and specific Dictionaries. ## Summary @@ -1504,4 +1515,4 @@ Deserialize the content of an IRestResponse. #### Returns A deserialized object of type *T* -Generated by [Moxygen](https://sourcey.com/moxygen) \ No newline at end of file +Generated by [Moxygen](https://sourcey.com/moxygen) diff --git a/reference/java/README.md b/reference/java/README.md index b6f6b551..973c23b8 100644 --- a/reference/java/README.md +++ b/reference/java/README.md @@ -1 +1,3 @@ # Java SDK + +(TTNote: Consider whether installation and getting-started sections are needed for consistency, or other additional content.) diff --git a/reference/java/latest.md b/reference/java/latest.md index 10a2b453..ca465df0 100644 --- a/reference/java/latest.md +++ b/reference/java/latest.md @@ -4,6 +4,13 @@ --------------------------------|--------------------------------------------- `namespace `[`ai::lucidtech::las::sdk`](#namespaceai_1_1lucidtech_1_1las_1_1sdk) | +(TTNote: Consider if summary index should contain classes as well for easy reference.) + +(TTNote: The namespace link above, along with links of this style throughout this document, do not appear to work to jump to the appropriate namespace or section.) + +(TTNote: Consider additional separators between each module for clarity when reading.) + + # namespace `ai::lucidtech::las::sdk` ## Summary @@ -167,8 +174,8 @@ class ai::lucidtech::las::sdk::APIException `public JSONObject `[`getUser`]`(String userId)` | Get user, calls the GET /users/{userId} endpoint. `public JSONObject `[`updateUser`]`(String userId,`[`UpdateUserOptions`](#classai_1_1lucidtech_1_1las_1_1sdk_1_1_update_user_options)` options)` | Updates a user, calls the PATCH /users/{userId} endpoint. `public JSONObject `[`deleteUser`]`(String userId)` | Delete a user, calls the PATCH /users/{userId} endpoint.
-`public JSONObject `[`createWorkflow`]`(JSONObject specification,`[`CreateWorkflowOptions`](#classai_1_1lucidtech_1_1las_1_1sdk_1_1_create_workflow_options)` options)` | Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more info on how to create a workflow. see [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) -`public JSONObject `[`createWorkflow`]`(JSONObject specification)` | Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more info on how to create a workflow. see [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) +`public JSONObject `[`createWorkflow`]`(JSONObject specification,`[`CreateWorkflowOptions`](#classai_1_1lucidtech_1_1las_1_1sdk_1_1_create_workflow_options)` options)` | Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more information on how to create a workflow. See [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) +`public JSONObject `[`createWorkflow`]`(JSONObject specification)` | Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more information on how to create a workflow. See [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) `public JSONObject `[`listWorkflows`]`(`[`ListWorkflowsOptions`](#classai_1_1lucidtech_1_1las_1_1sdk_1_1_list_workflows_options)` options)` | List workflows, calls the GET /workflows endpoint. `public JSONObject `[`listWorkflows`]`()` | List workflows, calls the GET /workflows endpoint.
`public JSONObject `[`getWorkflow`]`(String workflowId)` | Get workflow, calls the GET /workflows/{workflowId} endpoint. @@ -190,12 +197,16 @@ A client to invoke api methods from Lucidtech AI Services. **See also**: [Credentials] +(TTNote: Consider if above should be a link.) + #### `public JSONObject `[`createAppClient`]`(`[`CreateAppClientOptions`](#classai_1_1lucidtech_1_1las_1_1sdk_1_1_create_app_client_options)` options)` Create an app client, calls the POST /appClients endpoint. **See also**: [CreateAppClientOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to include in request body @@ -229,6 +240,8 @@ Update an appClient, calls the PATCH /appClients/{appClientId} endpoint. **See also**: [UpdateAppClientOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `appClientId` Id of the appClient @@ -250,6 +263,8 @@ List appClients available, calls the GET /appClients endpoint. **See also**: [ListAppClientsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -300,6 +315,8 @@ Create an asset, calls the POST /assets endpoint. **See also**: [CreateAssetOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `content` Binary data @@ -321,6 +338,8 @@ Create an asset, calls the POST /assets endpoint. **See also**: [CreateAssetOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `content` Data from input stream @@ -376,6 +395,8 @@ List assets available, calls the GET /assets endpoint. **See also**: [ListAssetsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -426,6 +447,8 @@ Update an asset, calls the PATCH /assets/{assetId} endpoint. **See also**: [UpdateAssetOptions] +(TTNote: Consider if above should be a link.) 
+ #### Parameters * `assetId` Id of the asset @@ -464,6 +487,8 @@ Create a batch, calls the POST /batches endpoint. **See also**: [CreateBatchOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to include in request body @@ -497,6 +522,8 @@ Update a batch, calls the PATCH /batches/{batchId} endpoint. **See also**: [UpdateBatchOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `batchId` Id of the batch @@ -518,6 +545,8 @@ List batches available, calls the GET /batches endpoint. **See also**: [ListBatchesOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -587,6 +616,8 @@ Create a document, calls the POST /documents endpoint. **See also**: [CreateDocumentOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `content` Binary data @@ -610,6 +641,8 @@ Create a document, calls the POST /documents endpoint. **See also**: [CreateDocumentOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `content` Data from input stream @@ -633,6 +666,8 @@ Create a document, calls the POST /documents endpoint. **See also**: [CreateDocumentOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `content` Data from input stream @@ -654,6 +689,8 @@ Create a document, calls the POST /documents endpoint. **See also**: [CreateDocumentOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `content` Binary data @@ -675,6 +712,8 @@ List documents, calls the GET /documents endpoint. **See also**: [ListDocumentsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -708,6 +747,8 @@ Delete documents, calls the DELETE /documents endpoint. **See also**: [DeleteDocumentsOptions] +(TTNote: Consider if above should be a link.) 
+ #### Parameters * `options` Additional options to pass along as query parameters @@ -727,6 +768,8 @@ Delete documents, calls the DELETE /documents endpoint. **See also**: [Client::createDocument] +(TTNote: Consider if above should be a link.) + #### Returns Documents response from REST API @@ -760,6 +803,8 @@ Update document, calls the PATCH /documents/{documentId} endpoint. **See also**: [Client::createDocument] +(TTNote: Consider if above should be a link.) + #### Parameters * `documentId` The document id to post groundTruth to. @@ -798,6 +843,8 @@ List logs, calls the GET /logs endpoint. **See also**: [ListLogsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -833,6 +880,8 @@ Create a model, calls the POST /models endpoint. **See also**: [FieldConfig] +(TTNote: Consider if above 2 should be links.) + #### Parameters * `width` The number of pixels to be used for the input image width of your model @@ -858,6 +907,8 @@ Create a model, calls the POST /models endpoint. **See also**: [FieldConfig] +(TTNote: Consider if above should be a link.) + #### Parameters * `width` The number of pixels to be used for the input image width of your model @@ -881,6 +932,8 @@ Updates a model, calls the PATCH /models/{modelId} endpoint. **See also**: [UpdateModelOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `modelId` Id of the model @@ -919,6 +972,8 @@ List models, calls the GET /models endpoint. **See also**: [ListModelsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -954,6 +1009,8 @@ Create a prediction on a document using specified model, calls the POST /predict **See also**: [CreatePredictionOptions] +(TTNote: Consider if above 2 should be links.) + #### Parameters * `documentId` The document id to run inference and create a prediction on.
@@ -977,6 +1034,8 @@ Create a prediction on a document using specified model, calls the POST /predict **See also**: [Client::createDocument] +(TTNote: Consider if above should be a link.) + #### Parameters * `documentId` The document id to run inference and create a prediction on. @@ -998,6 +1057,8 @@ List predictions available, calls the GET /predictions endpoint. **See also**: [ListPredictionsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -1031,6 +1092,8 @@ Create secret, calls the POST /secrets endpoint. **See also**: [CreateSecretOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `data` Key-Value pairs to store secretly @@ -1052,6 +1115,8 @@ Create a secret, calls the POST /secrets endpoint. **See also**: [CreateSecretOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `data` Key-Value pairs to store secretly @@ -1107,6 +1172,8 @@ List secrets, calls the GET /secrets endpoint. **See also**: [ListSecretsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -1140,6 +1207,8 @@ Update a secret, calls the PATCH /secrets/{secretId} endpoint. **See also**: [UpdateSecretOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `secretId` Id of the secret @@ -1178,8 +1247,12 @@ Create a transition, calls the POST /transitions endpoint. **See also**: [CreateTransitionOptions] +(TTNote: Consider if above should be a link.) + **See also**: [TransitionType](#enumai_1_1lucidtech_1_1las_1_1sdk_1_1_transition_type) +(TTNote: Link above does not work.) + #### Parameters * `transitionType` Type of transition @@ -1201,6 +1274,8 @@ Create a transition, calls the POST /transitions endpoint. **See also**: [TransitionType](#enumai_1_1lucidtech_1_1las_1_1sdk_1_1_transition_type) +(TTNote: Link above does not work.) 
+ #### Parameters * `transitionType` Type of transition @@ -1220,6 +1295,8 @@ List transitions, calls the GET /transitions endpoint. **See also**: [ListTransitionsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -1270,6 +1347,8 @@ Updates a transition, calls the PATCH /transitions/{transitionId} endpoint. **See also**: [UpdateTransitionOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `transitionId` Id of the transition @@ -1325,6 +1404,8 @@ List executions in a transition, calls the GET /transitions/{transitionId}/execu **See also**: [ListTransitionExecutionsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `transitionId` Id of the transition @@ -1382,8 +1463,13 @@ Ends the processing of the transition execution, calls the PATCH /transitions/{t **See also**: [UpdateTransitionExecutionOptions] +(TTNote: Consider if above should be a link.) + **See also**: [TransitionExecutionStatus](#enumai_1_1lucidtech_1_1las_1_1sdk_1_1_transition_execution_status) +(TTNote: Link above does not work.) + + #### Parameters * `transitionId` Id of the transition @@ -1405,7 +1491,7 @@ Transition response from REST API #### `public JSONObject `[`sendHeartbeat`]`(String transitionId,String executionId)` -Send heartbeat for a manual execution to signal that we are still working on it. Must be done at minimum once every 60 seconds or the transition execution will time out, calls the POST /transitions/{transitionId}/executions/{executionId}/heartbeats endpoint. +Send heartbeat for a manual execution to signal that we are still working on it. Note: Must be done at minimum once every 60 seconds or the transition execution will time out. Calls the POST /transitions/{transitionId}/executions/{executionId}/heartbeats endpoint. #### Parameters * `transitionId` Id of the transition @@ -1428,6 +1514,8 @@ Create a user, calls the POST /users endpoint. 
**See also**: [CreateUserOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `email` Email of the new user @@ -1466,6 +1554,8 @@ List users, calls the GET /users endpoint. **See also**: [ListUsersOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -1516,6 +1606,8 @@ Updates a user, calls the PATCH /users/{userId} endpoint. **See also**: [UpdateUserOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `userId` Id of user @@ -1550,10 +1642,12 @@ User response from REST API #### `public JSONObject `[`createWorkflow`]`(JSONObject specification,`[`CreateWorkflowOptions`](#classai_1_1lucidtech_1_1las_1_1sdk_1_1_create_workflow_options)` options)` -Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more info on how to create a workflow. see [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) +Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more information on how to create a workflow. See [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) **See also**: [CreateWorkflowOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `specification` Specification of the workflow, currently supporting ASL: [https://states-language.net/spec.html](https://states-language.net/spec.html).
Check out the tutorials for more information: see [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve#creating-the-workflow](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve#creating-the-workflow) @@ -1571,7 +1665,7 @@ Workflow response from API #### `public JSONObject `[`createWorkflow`]`(JSONObject specification)` -Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more info on how to create a workflow. see [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) +Creates a new workflow, calls the POST /workflows endpoint. Check out Lucidtech's tutorials for more information on how to create a workflow. See [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve) #### Parameters * `specification` Specification of the workflow, currently supporting ASL: [https://states-language.net/spec.html](https://states-language.net/spec.html). Check out the tutorials for more information: see [https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve#creating-the-workflow](https://docs.lucidtech.ai/getting-started/tutorials/setup_predict_and_approve#creating-the-workflow) @@ -1592,6 +1686,8 @@ List workflows, calls the GET /workflows endpoint. **See also**: [ListWorkflowsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `options` Additional options to pass along as query parameters @@ -1642,6 +1738,8 @@ Update a workflow, calls the PATCH /workflows/{workflowId} endpoint. **See also**: [UpdateWorkflowOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `workflowId` Id of the workflow @@ -1663,6 +1761,8 @@ Delete a workflow, calls the DELETE /workflows/{workflowId} endpoint.
**See also**: [Client::createWorkflow] +(TTNote: Consider if above should be a link.) + #### Parameters * `workflowId` Id of the workflow @@ -1701,6 +1801,8 @@ List executions in a workflow, calls the GET /workflows/{workflowId}/executions **See also**: [ListWorkflowExecutionsOptions] +(TTNote: Consider if above should be a link.) + #### Parameters * `workflowId` Id of the workflow @@ -1739,6 +1841,8 @@ Delete execution from workflow, calls the DELETE /workflows/{workflowId}/executi **See also**: [Client::executeWorkflow] +(TTNote: Consider if above should be a link.) + #### Parameters * `workflowId` Id of the workflow @@ -1754,6 +1858,10 @@ WorkflowExecution response from REST API * `[MissingAccessTokenException]` Raised if access token cannot be obtained + +(TTNote: Suggestion to add more detail to sections below, such as summary descriptions and members, etc.) + + # class `ai::lucidtech::las::sdk::CreateAppClientOptions` ``` @@ -1997,6 +2105,9 @@ Used to fetch and store credentials. * `apiEndpoint` Domain endpoint of the api, e.g. [https://{prefix}.api.lucidtech.ai/{version}](https://{prefix}.api.lucidtech.ai/{version}) +(TTNote: Consider whether this should be a link since it has variables; the assumption is that this shows the format of the URL, not an actual URL.)
+ + #### Exceptions * `[MissingCredentialsException]` Raised if some of credentials are missing @@ -2930,4 +3041,4 @@ class ai::lucidtech::las::sdk::WorkflowErrorConfig #### `public JSONObject `[`addOptions`]`(JSONObject body)` -Generated by [Moxygen](https://sourcey.com/moxygen) \ No newline at end of file +Generated by [Moxygen](https://sourcey.com/moxygen) diff --git a/reference/js/README.md b/reference/js/README.md index f972abb8..7b08e3c4 100644 --- a/reference/js/README.md +++ b/reference/js/README.md @@ -1,7 +1,13 @@ ## JavaScript SDK for Lucidtech AI Services API +(TTNote: Consider common headings for all SDKs, either remove 'for Lucidtech AI Services API' from this page, or add it to .NET, Python, and others for consistency.) + + ### Installation +(TTNote: Consider whether Installation should be repeated here as it is in the quickstart section, or if it should be linked instead. But if it is duplicated here, the syntax should match (here, yarn is mentioned and the shell is different versus how it appears in quickstart).) + + #### Browser version ``` $ yarn add @lucidtech/las-sdk-browser @@ -16,6 +22,8 @@ $ npm install @lucidtech/las-sdk-node ### Usage +(TTNote: Consider whether consistent headings are needed here (i.e. Getting Started) as in other sections.) + ```javascript import { Client } from '@lucidtech/las-sdk-core'; import { ClientCredentials } from '@lucidtech/las-sdk-node'; @@ -33,6 +41,8 @@ client.postDocuments(content, 'image/jpeg', '').then(documentResponse ### Contributing +(TTNote: Suggest an alternate title for clarity; it is unclear what contributing is referring to.) + Install dependencies ``` $ npm install && npm run upgrade-lucidtech diff --git a/reference/js/latest.md b/reference/js/latest.md index 2e45cba8..fce2d33f 100644 --- a/reference/js/latest.md +++ b/reference/js/latest.md @@ -3,6 +3,11 @@ @lucidtech/las-sdk-core / [Exports](#modulesmd) +(TTNote: Links to modulesmd above are not working.)
+ +(TTNote: Suggest an index on top for each module listed below for easier access to the content.) + + # JavaScript SDK for Lucidtech AI Services API ## Installation @@ -53,11 +58,17 @@ Run tests $ npm run test test ``` +(TTNote: Consider whether above section - from header to here - is needed here as this is a duplicate of the README.) + [@lucidtech/las-sdk-core](#readmemd) / [Exports](#modulesmd) / [client](#modulesclientmd) / Client +(TTNote: Links in the above line aren't working.) + + + # Class: Client [client](#modulesclientmd).Client @@ -222,7 +233,7 @@ ___ ▸ **createSecret**(`data`: *Record*, `options?`: [*CreateSecretOptions*](#interfacestypescreatesecretoptionsmd)): *Promise*<[*Secret*](#secret)\> -Creates an secret handle, calls the POST /secrets endpoint. +Creates a secret handle, calls the POST /secrets endpoint. #### Parameters: @@ -975,6 +986,7 @@ Method used to get and cache an access token. Algorithm used: **Returns:** *Promise* +(TTNote: Consider additional descriptions for sections below.) diff --git a/reference/python/README.md b/reference/python/README.md index 6263526c..a4f84dac 100644 --- a/reference/python/README.md +++ b/reference/python/README.md @@ -1,25 +1,34 @@ # Python SDK ## Installation -The python SDK for Lucidtech AI Services (LAS) can be downloaded from `pip`: + +(TTNote: Consider whether Installation should be repeated here as it is in the quickstart section, or if it should be linked instead. +But if it is duplicated here, the syntax should match (in quickstart, it's referencing bash syntax; here it is simply commandline).) + + +The Python SDK for Lucidtech AI Services (LAS) can be installed using `pip`: ```commandline $ pip install lucidtech-las ``` Don't have pip? [Here](https://pip.pypa.io/en/latest/installing/) are instructions for installing pip. +(TTNote: The link here is invalid.)
## Getting started -After lucidtech-las is installed and you have received credentials, -you are ready to enhance your document-flow with the las client: +After the lucidtech-las package is installed and you have received credentials, +you are ready to enhance your document flow with the LAS client: ```python >> from las import Client >> client = Client() ``` -If you are new to LAS we recommend you to check out the [key concepts](../../introduction) +- If you are new to LAS, we recommend that you check out the [key concepts](../../introduction) for a better understanding of what is possible with LAS. -If you are in the need of explicit examples on how to create complex workflows, -check out the [tutorials](../../tutorials) +- If you would like explicit examples on how to create complex workflows, +check out our [tutorials](../../tutorials). + +(TTNote: Consider adding the above bullets to other sections for consistency.) + -The Python SDK is open-source, and the code can be found [here](https://github.com/LucidtechAI/las-sdk-python). +The Python SDK is open source, and the code can be found [here](https://github.com/LucidtechAI/las-sdk-python). Contributions are more than welcome. diff --git a/reference/python/latest.md b/reference/python/latest.md index a589bfa7..ea8ab4bd 100644 --- a/reference/python/latest.md +++ b/reference/python/latest.md @@ -1,14 +1,22 @@ # las package +(TTNote: Suggest an index on top for each module listed below for easier access to the content.) + +(TTNote: Also consider additional separation between each module for clarity when reading, such as the separator lines that I've added.) + + ## Module contents ### class las.Client(credentials: Optional[las.credentials.Credentials] = None) Bases: `object` +(TTNote: Suggest additional notation explaining why 'Bases' is mentioned here.) -A low level client to invoke api methods from Lucidtech AI Services. +A low-level client to invoke API methods from Lucidtech AI Services.
+- - - + #### create_app_client(generate_secret=True, logout_urls=None, callback_urls=None, login_urls=None, default_login_url=None, \*\*optional_args) Creates an appClient, calls the POST /appClients endpoint. @@ -28,7 +36,7 @@ Creates an appClient, calls the POST /appClients endpoint. * **description** (*Optional**[**str**]*) – Description of the appClient - * **generate_secret** (*Boolean*) – Set to False to ceate a Public app client, default: True + * **generate_secret** (*Boolean*) – Set to False to create a Public app client, default: True * **logout_urls** (*List**[**str**]*) – List of logout urls @@ -61,6 +69,7 @@ Creates an appClient, calls the POST /appClients endpoint. `InvalidCredentialsException`, `TooManyRequestsException`, `LimitExceededException`, `requests.exception.RequestException` +- - - #### create_asset(content: Union[bytes, bytearray, str, pathlib.Path, io.IOBase], \*\*optional_args) Creates an asset, calls the POST /assets endpoint. @@ -103,6 +112,8 @@ Creates an asset, calls the POST /assets endpoint. +- - - + #### create_batch(\*\*optional_args) Creates a batch, calls the POST /batches endpoint. @@ -141,6 +152,8 @@ Creates a batch, calls the POST /batches endpoint. +- - - + #### create_document(content: Union[bytes, bytearray, str, pathlib.Path, io.IOBase], content_type: str, \*, consent_id: Optional[str] = None, batch_id: Optional[str] = None, ground_truth: Optional[Sequence[Dict[str, str]]] = None) Creates a document, calls the POST /documents endpoint. @@ -160,10 +173,10 @@ Creates a document, calls the POST /documents endpoint. 
* **content_type** (*str*) – MIME type for the document - * **consent_id** (*Optional**[**str**]*) – Id of the consent that marks the owner of the document + * **consent_id** (*Optional**[**str**]*) – ID of the consent that marks the owner of the document - * **batch_id** (*Optional**[**str**]*) – Id of the associated batch + * **batch_id** (*Optional**[**str**]*) – ID of the associated batch * **ground_truth** – List of items {‘label’: label, ‘value’: value} @@ -181,6 +194,8 @@ representing the ground truth values for the document +- - - + #### create_model(width: int, height: int, field_config: dict, \*, preprocess_config: Optional[dict] = None, name: Optional[str] = None, description: Optional[str] = None, \*\*optional_args) Creates a model, calls the POST /models endpoint. @@ -225,6 +240,8 @@ Creates a model, calls the POST /models endpoint. +- - - + #### create_prediction(document_id: str, model_id: str, \*, max_pages: Optional[int] = None, auto_rotate: Optional[bool] = None, image_quality: Optional[str] = None) Create a prediction on a document using specified model, calls the POST /predictions endpoint. @@ -241,16 +258,16 @@ Create a prediction on a document using specified model, calls the POST /predict * **document_id** (*str*) – Id of the document to run inference and create a prediction on - * **model_id** (*str*) – Id of the model to use for inference + * **model_id** (*str*) – ID of the model to use for inference * **max_pages** (*Optional**[**int**]*) – Maximum number of pages to run predictions on - * **auto_rotate** (*Optional**[**bool**]*) – Whether or not to let the API try different rotations on the document when running predictions + * **auto_rotate** (*Optional**[**bool**]*) – Whether or not to let the API try different rotations on the document when running predictions - * **image_quality** (*Optional**[**int**]*) – image quality for prediction “LOW|HIGH”. high quality could give better result but will also take longer time. 
+ * **image_quality** (*Optional**[**str**]*) – Image quality for prediction “LOW|HIGH”. Note: High quality could give better results, but will also take longer. @@ -272,8 +289,10 @@ Create a prediction on a document using specified model, calls the POST /predict +- - - + #### create_secret(data: dict, \*\*optional_args) -Creates an secret, calls the POST /secrets endpoint. +Creates a secret, calls the POST /secrets endpoint. ```python >>> from las.client import Client @@ -314,6 +333,8 @@ Creates an secret, calls the POST /secrets endpoint. +- - - + #### create_transition(transition_type: str, \*, in_schema: Optional[dict] = None, out_schema: Optional[dict] = None, parameters: Optional[dict] = None, \*\*optional_args) Creates a transition, calls the POST /transitions endpoint. @@ -343,10 +364,10 @@ Creates a transition, calls the POST /transitions endpoint. * **transition_type** (*str*) – Type of transition “docker”|”manual” - * **in_schema** (*Optional**[**dict**]*) – Json-schema that defines the input to the transition + * **in_schema** (*Optional**[**dict**]*) – JSON schema that defines the input to the transition - * **out_schema** (*Optional**[**dict**]*) – Json-schema that defines the output of the transition + * **out_schema** (*Optional**[**dict**]*) – JSON schema that defines the output of the transition * **name** (*Optional**[**str**]*) – Name of the transition @@ -377,6 +398,8 @@ Creates a transition, calls the POST /transitions endpoint. +- - - + #### create_user(email: str, \*, app_client_id, \*\*optional_args) Creates a new user, calls the POST /users endpoint. @@ -390,7 +413,7 @@ Creates a new user, calls the POST /users endpoint. * **Parameters** - * **email** (*str*) – Email to the new user + * **email** (*str*) – Email to the new user (TTNote: Unclear if this is a flag to 'send an email to the user' or a field to include the 'email address of the user'.)
* **name** (*Optional**[**str**]*) – Name of the user @@ -418,9 +441,11 @@ Creates a new user, calls the POST /users endpoint. +- - - + #### create_workflow(specification: dict, \*, error_config: Optional[dict] = None, completed_config: Optional[dict] = None, \*\*optional_args) Creates a new workflow, calls the POST /workflows endpoint. -Check out Lucidtech’s tutorials for more info on how to create a workflow. +Note: Check out Lucidtech’s tutorials (TTNote: Suggest a link here.) for more info on how to create a workflow. ```python >>> from las.client import Client @@ -470,6 +495,8 @@ Check out Lucidtech’s tutorials for more info on how to create a workflow. +- - - + #### delete_app_client(app_client_id: str) Delete the appClient with the provided appClientId, calls the DELETE /appClients/{appClientId} endpoint. @@ -482,7 +509,7 @@ Delete the appClient with the provided appClientId, calls the DELETE /appClients * **Parameters** - **app_client_id** (*str*) – Id of the appClient + **app_client_id** (*str*) – ID of the appClient @@ -504,6 +531,8 @@ Delete the appClient with the provided appClientId, calls the DELETE /appClients +- - - + #### delete_asset(asset_id: str) Delete the asset with the provided asset_id, calls the DELETE /assets/{assetId} endpoint. @@ -516,7 +545,7 @@ Delete the asset with the provided asset_id, calls the DELETE /assets/{assetId} * **Parameters** - **asset_id** (*str*) – Id of the asset + **asset_id** (*str*) – ID of the asset @@ -538,6 +567,8 @@ Delete the asset with the provided asset_id, calls the DELETE /assets/{assetId} +- - - + #### delete_batch(batch_id: str, delete_documents=False) Delete the batch with the provided batch_id, calls the DELETE /batches/{batchId} endpoint. 
@@ -551,10 +582,10 @@ Delete the batch with the provided batch_id, calls the DELETE /batches/{batchId} * **Parameters** - * **batch_id** (*str*) – Id of the batch + * **batch_id** (*str*) – ID of the batch - * **delete_documents** (*bool*) – Set to true to delete documents in batch before deleting batch + * **delete_documents** (*bool*) – Set to 'true' to delete documents in the batch before deleting the batch @@ -576,6 +607,8 @@ Delete the batch with the provided batch_id, calls the DELETE /batches/{batchId} +- - - + #### delete_documents(\*, consent_id: Optional[Union[str, List[str]]] = None, batch_id: Optional[Union[str, List[str]]] = None, max_results: Optional[int] = None, next_token: Optional[str] = None) Delete documents with the provided consent_id, calls the DELETE /documents endpoint. @@ -589,10 +622,10 @@ Delete documents with the provided consent_id, calls the DELETE /documents endpo * **Parameters** - * **batch_id** (*Optional**[**Queryparam**]*) – Ids of the batches to be deleted + * **batch_id** (*Optional**[**Queryparam**]*) – IDs of the batches to be deleted - * **consent_id** (*Optional**[**Queryparam**]*) – Ids of the consents that marks the owner of the document + * **consent_id** (*Optional**[**Queryparam**]*) – IDs of the consents that mark the owner of the document * **max_results** (*Optional**[**int**]*) – Maximum number of documents that will be deleted @@ -620,6 +653,8 @@ Delete documents with the provided consent_id, calls the DELETE /documents endpo +- - - + #### delete_secret(secret_id: str) Delete the secret with the provided secret_id, calls the DELETE /secrets/{secretId} endpoint.
@@ -632,7 +667,7 @@ Delete the secret with the provided secret_id, calls the DELETE /secrets/{secret * **Parameters** - **secret_id** (*str*) – Id of the secret + **secret_id** (*str*) – ID of the secret @@ -654,6 +689,8 @@ Delete the secret with the provided secret_id, calls the DELETE /secrets/{secret +- - - + #### delete_transition(transition_id: str) Delete the transition with the provided transition_id, calls the DELETE /transitions/{transitionId} endpoint. @@ -668,7 +705,7 @@ Delete the transition with the provided transition_id, calls the DELETE /transit * **Parameters** - **transition_id** (*str*) – Id of the transition + **transition_id** (*str*) – ID of the transition @@ -690,6 +727,8 @@ Delete the transition with the provided transition_id, calls the DELETE /transit +- - - + #### delete_user(user_id: str) Delete the user with the provided user_id, calls the DELETE /users/{userId} endpoint. @@ -702,7 +741,7 @@ Delete the user with the provided user_id, calls the DELETE /users/{userId} endp * **Parameters** - **user_id** (*str*) – Id of the user + **user_id** (*str*) – ID of the user @@ -724,6 +763,8 @@ Delete the user with the provided user_id, calls the DELETE /users/{userId} endp +- - - + #### delete_workflow(workflow_id: str) Delete the workflow with the provided workflow_id, calls the DELETE /workflows/{workflowId} endpoint. @@ -757,6 +798,8 @@ Delete the workflow with the provided workflow_id, calls the DELETE /workflows/{ `InvalidCredentialsException`, `TooManyRequestsException`, `LimitExceededException`, `requests.exception.RequestException` +- - - + #### delete_workflow_execution(workflow_id: str, execution_id: str) Deletes the execution with the provided execution_id from workflow_id, @@ -797,6 +840,8 @@ calls the DELETE /workflows/{workflowId}/executions/{executionId} endpoint. +- - - + #### execute_transition(transition_id: str) Start executing a manual transition, calls the POST /transitions/{transitionId}/executions endpoint. 
@@ -831,6 +876,8 @@ Start executing a manual transition, calls the POST /transitions/{transitionId}/ +- - - + #### execute_workflow(workflow_id: str, content: dict) Start a workflow execution, calls the POST /workflows/{workflowId}/executions endpoint. @@ -871,6 +918,8 @@ Start a workflow execution, calls the POST /workflows/{workflowId}/executions en +- - - + #### get_asset(asset_id: str) Get asset, calls the GET /assets/{assetId} endpoint. @@ -883,7 +932,7 @@ Get asset, calls the GET /assets/{assetId} endpoint. * **Parameters** - **asset_id** (*str*) – Id of the asset + **asset_id** (*str*) – ID of the asset @@ -905,6 +954,8 @@ Get asset, calls the GET /assets/{assetId} endpoint. +- - - + #### get_document(document_id: str) Get document, calls the GET /documents/{documentId} endpoint. @@ -917,7 +968,7 @@ Get document, calls the GET /documents/{documentId} endpoint. * **Parameters** - **document_id** (*str*) – Id of the document + **document_id** (*str*) – ID of the document @@ -939,6 +990,8 @@ Get document, calls the GET /documents/{documentId} endpoint. +- - - + #### get_log(log_id) get log, calls the GET /logs/{logId} endpoint. @@ -973,6 +1026,8 @@ get log, calls the GET /logs/{logId} endpoint. +- - - + #### get_model(model_id: str) Get a model, calls the GET /models/{modelId} endpoint. @@ -1001,6 +1056,8 @@ Get a model, calls the GET /models/{modelId} endpoint. +- - - + #### get_transition(transition_id: str) Get the transition with the provided transition_id, calls the GET /transitions/{transitionId} endpoint. 
@@ -1034,6 +1091,8 @@ Get the transition with the provided transition_id, calls the GET /transitions/{ `InvalidCredentialsException`, `TooManyRequestsException`, `LimitExceededException`, `requests.exception.RequestException` +- - - + #### get_transition_execution(transition_id: str, execution_id: str) Get an execution of a transition, calls the GET /transitions/{transitionId}/executions/{executionId} endpoint @@ -1073,6 +1132,8 @@ Get an execution of a transition, calls the GET /transitions/{transitionId}/exec +- - - + #### get_user(user_id: str) Get information about a specific user, calls the GET /users/{user_id} endpoint. @@ -1106,6 +1167,8 @@ Get information about a specific user, calls the GET /users/{user_id} endpoint. `InvalidCredentialsException`, `TooManyRequestsException`, `LimitExceededException`, `requests.exception.RequestException` +- - - + #### get_workflow(workflow_id: str) Get the workflow with the provided workflow_id, calls the GET /workflows/{workflowId} endpoint. @@ -1119,7 +1182,7 @@ Get the workflow with the provided workflow_id, calls the GET /workflows/{workfl * **Parameters** - **workflow_id** (*str*) – Id of the workflow + **workflow_id** (*str*) – ID of the workflow @@ -1141,6 +1204,8 @@ Get the workflow with the provided workflow_id, calls the GET /workflows/{workfl +- - - + #### get_workflow_execution(workflow_id: str, execution_id: str) Get a workflow execution, calls the GET /workflows/{workflow_id}/executions/{execution_id} endpoint. @@ -1179,6 +1244,8 @@ Get a workflow execution, calls the GET /workflows/{workflow_id}/executions/{exe +- - - + #### list_app_clients(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List appClients available, calls the GET /appClients endpoint. @@ -1216,6 +1283,8 @@ List appClients available, calls the GET /appClients endpoint. 
`InvalidCredentialsException`, `TooManyRequestsException`, `LimitExceededException`, `requests.exception.RequestException` +- - - + #### list_assets(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List assets available, calls the GET /assets endpoint. @@ -1255,6 +1324,8 @@ List assets available, calls the GET /assets endpoint. +- - - + #### list_batches(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List batches available, calls the GET /batches endpoint. @@ -1292,6 +1363,8 @@ List batches available, calls the GET /batches endpoint. `InvalidCredentialsException`, `TooManyRequestsException`, `LimitExceededException`, `requests.exception.RequestException` +- - - + #### list_documents(\*, batch_id: Optional[Union[str, List[str]]] = None, consent_id: Optional[Union[str, List[str]]] = None, max_results: Optional[int] = None, next_token: Optional[str] = None) List documents available for inference, calls the GET /documents endpoint. @@ -1306,10 +1379,10 @@ List documents available for inference, calls the GET /documents endpoint. * **Parameters** - * **batch_id** (*Optional**[**Queryparam**]*) – Ids of batches that contains the documents of interest + * **batch_id** (*Optional**[**Queryparam**]*) – IDs of batches that contain the documents of interest - * **consent_id** (*Optional**[**Queryparam**]*) – Ids of the consents that marks the owner of the document + * **consent_id** (*Optional**[**Queryparam**]*) – IDs of the consents that mark the owner of the document * **max_results** (*Optional**[**int**]*) – Maximum number of results to be returned @@ -1337,6 +1410,8 @@ List documents available for inference, calls the GET /documents endpoint.
+- - - + #### list_logs(\*, workflow_id: Optional[Union[str, List[str]]] = None, workflow_execution_id: Optional[Union[str, List[str]]] = None, transition_id: Optional[Union[str, List[str]]] = None, transition_execution_id: Optional[Union[str, List[str]]] = None, max_results: Optional[int] = None, next_token: Optional[str] = None) List logs, calls the GET /logs endpoint. @@ -1361,6 +1436,7 @@ List logs, calls the GET /logs endpoint. * **transition_execution_id** (*Optional**[**Queryparam**]*) – +(TTNote: Suggest descriptions for the bullets above.) * **max_results** (*Optional**[**int**]*) – Maximum number of results to be returned @@ -1387,6 +1463,8 @@ List logs, calls the GET /logs endpoint. +- - - + #### list_models(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List models available, calls the GET /models endpoint. @@ -1425,6 +1503,8 @@ List models available, calls the GET /models endpoint. +- - - + #### list_predictions(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List predictions available, calls the GET /predictions endpoint. @@ -1463,6 +1543,8 @@ List predictions available, calls the GET /predictions endpoint. +- - - + #### list_secrets(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List secrets available, calls the GET /secrets endpoint. @@ -1501,6 +1583,8 @@ List secrets available, calls the GET /secrets endpoint. +- - - + #### list_transition_executions(transition_id: str, \*, status: Optional[Union[str, List[str]]] = None, execution_id: Optional[Union[str, List[str]]] = None, max_results: Optional[int] = None, next_token: Optional[str] = None, sort_by: Optional[str] = None, order: Optional[str] = None) List executions in a transition, calls the GET /transitions/{transitionId}/executions endpoint. 
@@ -1514,7 +1598,7 @@ List executions in a transition, calls the GET /transitions/{transitionId}/execu * **Parameters** - * **transition_id** (*str*) – Id of the transition + * **transition_id** (*str*) – ID of the transition * **status** (*Optional**[**Queryparam**]*) – Statuses of the executions @@ -1523,10 +1607,10 @@ List executions in a transition, calls the GET /transitions/{transitionId}/execu * **order** (*Optional**[**str**]*) – Order of the executions, either ‘ascending’ or ‘descending’ - * **sort_by** (*Optional**[**str**]*) – the sorting variable of the executions, either ‘endTime’, or ‘startTime’ + * **sort_by** (*Optional**[**str**]*) – The sorting variable of the executions, either ‘endTime’, or ‘startTime’ - * **execution_id** (*Optional**[**Queryparam**]*) – Ids of the executions + * **execution_id** (*Optional**[**Queryparam**]*) – IDs of the executions * **max_results** (*Optional**[**int**]*) – Maximum number of results to be returned @@ -1554,6 +1638,8 @@ List executions in a transition, calls the GET /transitions/{transitionId}/execu +- - - + #### list_transitions(\*, transition_type: Optional[Union[str, List[str]]] = None, max_results: Optional[int] = None, next_token: Optional[str] = None) List transitions, calls the GET /transitions endpoint. @@ -1595,6 +1681,8 @@ List transitions, calls the GET /transitions endpoint. +- - - + #### list_users(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List users, calls the GET /users endpoint. @@ -1633,6 +1721,8 @@ List users, calls the GET /users endpoint. +- - - + #### list_workflow_executions(workflow_id: str, \*, status: Optional[Union[str, List[str]]] = None, sort_by: Optional[str] = None, order: Optional[str] = None, max_results: Optional[int] = None, next_token: Optional[str] = None) List executions in a workflow, calls the GET /workflows/{workflowId}/executions endpoint. 
@@ -1646,13 +1736,13 @@ List executions in a workflow, calls the GET /workflows/{workflowId}/executions * **Parameters** - * **workflow_id** (*str*) – Id of the workflow + * **workflow_id** (*str*) – ID of the workflow * **order** (*Optional**[**str**]*) – Order of the executions, either ‘ascending’ or ‘descending’ - * **sort_by** (*Optional**[**str**]*) – the sorting variable of the executions, either ‘endTime’, or ‘startTime’ + * **sort_by** (*Optional**[**str**]*) – The sorting variable of the executions, either ‘endTime’, or ‘startTime’ * **status** (*Optional**[**Queryparam**]*) – Statuses of the executions @@ -1683,6 +1773,8 @@ List executions in a workflow, calls the GET /workflows/{workflowId}/executions +- - - + #### list_workflows(\*, max_results: Optional[int] = None, next_token: Optional[str] = None) List workflows, calls the GET /workflows endpoint. @@ -1721,10 +1813,12 @@ List workflows, calls the GET /workflows endpoint. +- - - + #### send_heartbeat(transition_id: str, execution_id: str) -Send heartbeat for a manual execution to signal that we are still working on it. -Must be done at minimum once every 60 seconds or the transition execution will time out, -calls the POST /transitions/{transitionId}/executions/{executionId}/heartbeats endpoint. +Sends a heartbeat for a manual execution to signal that we are still working on it. +Note: This must be done, at minimum, once every 60 seconds or the transition execution will time out. +Calls the POST /transitions/{transitionId}/executions/{executionId}/heartbeats endpoint. 
```python >>> from las.client import Client @@ -1736,10 +1830,10 @@ * **Parameters** - * **transition_id** (*str*) – Id of the transition + * **transition_id** (*str*) – ID of the transition - * **execution_id** (*str*) – Id of the transition execution + * **execution_id** (*str*) – ID of the transition execution @@ -1761,6 +1855,8 @@ calls the POST /transitions/{transitionId}/executions/{executionId}/heartbeats e +- - - + #### update_app_client(app_client_id, \*\*optional_args) Updates an appClient, calls the PATCH /appClients/{appClientId} endpoint. @@ -1768,7 +1864,7 @@ Updates an appClient, calls the PATCH /appClients/{appClientId} endpoint. * **Parameters** - * **app_client_id** (*str*) – Id of the appClient + * **app_client_id** (*str*) – ID of the appClient * **name** (*Optional**[**str**]*) – Name of the appClient @@ -1796,6 +1892,8 @@ Updates an appClient, calls the PATCH /appClients/{appClientId} endpoint. +- - - + #### update_asset(asset_id: str, \*\*optional_args) Updates an asset, calls the PATCH /assets/{assetId} endpoint. @@ -1809,10 +1907,10 @@ Updates an asset, calls the PATCH /assets/{assetId} endpoint. * **Parameters** - * **asset_id** (*str*) – Id of the asset + * **asset_id** (*str*) – ID of the asset - * **content** (*Optional**[**Content**]*) – Content to PATCH + * **content** (*Optional**[**Content**]*) – Content to patch * **name** (*Optional**[**str**]*) – Name of the asset @@ -1840,6 +1938,8 @@ Updates an asset, calls the PATCH /assets/{assetId} endpoint. +- - - + #### update_batch(batch_id, \*\*optional_args) Updates a batch, calls the PATCH /batches/{batchId} endpoint. @@ -1847,7 +1947,7 @@
* **Parameters** - * **batch_id** (*str*) – Id of the batch + * **batch_id** (*str*) – ID of the batch * **name** (*Optional**[**str**]*) – Name of the batch @@ -1875,9 +1975,11 @@ Updates a batch, calls the PATCH /batches/{batchId} endpoint. +- - - + #### update_document(document_id: str, ground_truth: Sequence[Dict[str, Union[str, None, bool]]]) Update ground truth for a document, calls the PATCH /documents/{documentId} endpoint. -Updating ground truth means adding the ground truth data for the particular document. +Updating ground truth means updating the ground truth data for the particular document. This enables the API to learn from past mistakes. ```python @@ -1891,7 +1993,7 @@ This enables the API to learn from past mistakes. * **Parameters** - * **document_id** (*str*) – Id of the document + * **document_id** (*str*) – ID of the document * **ground_truth** (*Sequence**[**Dict**[**str**, **Union**[**Optional**[**str**]**, **bool**]**]**]*) – List of items {label: value} representing the ground truth values for the document @@ -1916,6 +2018,8 @@ This enables the API to learn from past mistakes. +- - - + #### update_model(model_id: str, \*, width: Optional[int] = None, height: Optional[int] = None, field_config: Optional[dict] = None, preprocess_config: Optional[dict] = None, status: Optional[str] = None, \*\*optional_args) Updates a model, calls the PATCH /models/{modelId} endpoint. @@ -1923,7 +2027,7 @@ Updates a model, calls the PATCH /models/{modelId} endpoint. * **Parameters** - * **model_id** (*Optional**[**str**]*) – The Id of the model + * **model_id** (*Optional**[**str**]*) – The ID of the model * **width** (*Optional**[**int**]*) – The number of pixels to be used for the input image width of your model @@ -1966,8 +2070,10 @@ Updates a model, calls the PATCH /models/{modelId} endpoint. +- - - + #### update_secret(secret_id: str, \*, data: Optional[dict] = None, \*\*optional_args) -Updates an secret, calls the PATCH /secrets/secretId endpoint. 
+Updates a secret, calls the PATCH /secrets/secretId endpoint. ```python >>> from las.client import Client @@ -1980,7 +2086,7 @@ Updates an secret, calls the PATCH /secrets/secretId endpoint. * **Parameters** - * **secret_id** (*str*) – Id of the secret + * **secret_id** (*str*) – ID of the secret * **data** (*Optional**[**dict**]*) – Dict containing the data you want to keep secret @@ -2011,6 +2117,8 @@ Updates an secret, calls the PATCH /secrets/secretId endpoint. +- - - + #### update_transition(transition_id: str, \*, in_schema: Optional[dict] = None, out_schema: Optional[dict] = None, assets: Optional[dict] = None, environment: Optional[dict] = None, environment_secrets: Optional[list] = None, \*\*optional_args) Updates a transition, calls the PATCH /transitions/{transitionId} endpoint. @@ -2026,7 +2134,7 @@ Updates a transition, calls the PATCH /transitions/{transitionId} endpoint. * **Parameters** - * **transition_id** (*str*) – Id of the transition + * **transition_id** (*str*) – ID of the transition * **name** (*Optional**[**str**]*) – Name of the transition @@ -2035,10 +2143,10 @@ Updates a transition, calls the PATCH /transitions/{transitionId} endpoint. 
* **description** (*Optional**[**str**]*) – Description of the transition - * **in_schema** (*Optional**[**dict**]*) – Json-schema that defines the input to the transition + * **in_schema** (*Optional**[**dict**]*) – Json schema that defines the input to the transition - * **out_schema** (*Optional**[**dict**]*) – Json-schema that defines the output of the transition + * **out_schema** (*Optional**[**dict**]*) – Json schema that defines the output of the transition * **assets** (*Optional**[**dict**]*) – A dictionary where the values are assetIds that can be used in a manual transition @@ -2062,6 +2170,8 @@ A list of secretIds that contains environment variables to use for a docker tran +- - - + #### update_transition_execution(transition_id: str, execution_id: str, status: str, \*, output: Optional[dict] = None, error: Optional[dict] = None, start_time: Optional[Union[str, datetime.datetime]] = None) Ends the processing of the transition execution, calls the PATCH /transitions/{transition_id}/executions/{execution_id} endpoint. @@ -2079,22 +2189,22 @@ calls the PATCH /transitions/{transition_id}/executions/{execution_id} endpoint. 
* **Parameters** - * **transition_id** (*str*) – Id of the transition that performs the execution + * **transition_id** (*str*) – ID of the transition that performs the execution - * **execution_id** (*str*) – Id of the execution to update + * **execution_id** (*str*) – ID of the execution to update * **status** (*str*) – Status of the execution ‘succeeded|failed’ - * **output** (*Optional**[**dict**]*) – Output from the execution, required when status is ‘succeded’ + * **output** (*Optional**[**dict**]*) – Output from the execution, required when status is ‘succeeded’ * **error** (*Optional**[**dict**]*) – Error from the execution, required when status is ‘failed’, needs to contain ‘message’ - * **start_time** (*Optional**[**str**]*) – start time that will replace the original start time of the execution + * **start_time** (*Optional**[**str**]*) – Start time that will replace the original start time of the execution @@ -2116,6 +2226,8 @@ calls the PATCH /transitions/{transition_id}/executions/{execution_id} endpoint. +- - - + #### update_user(user_id: str, \*\*optional_args) Updates a user, calls the PATCH /users/{userId} endpoint. @@ -2129,7 +2241,7 @@ Updates a user, calls the PATCH /users/{userId} endpoint. * **Parameters** - * **user_id** (*str*) – Id of the user + * **user_id** (*str*) – ID of the user * **name** (*Optional**[**str**]*) – Name of the user @@ -2157,6 +2269,8 @@ Updates a user, calls the PATCH /users/{userId} endpoint. +- - - + #### update_workflow(workflow_id: str, \*, error_config: Optional[dict] = None, completed_config: Optional[dict] = None, \*\*optional_args) Updates a workflow, calls the PATCH /workflows/{workflowId} endpoint. @@ -2172,7 +2286,7 @@ Updates a workflow, calls the PATCH /workflows/{workflowId} endpoint. 
* **Parameters** - * **workflow_id** – Id of the workflow + * **workflow_id** – ID of the workflow * **name** (*Optional**[**str**]*) – Name of the workflow @@ -2206,6 +2320,8 @@ Updates a workflow, calls the PATCH /workflows/{workflowId} endpoint. +- - - + #### update_workflow_execution(workflow_id: str, execution_id: str, next_transition_id: str) Retry or end the processing of a workflow execution, calls the PATCH /workflows/{workflow_id}/executions/{execution_id} endpoint. @@ -2220,13 +2336,13 @@ calls the PATCH /workflows/{workflow_id}/executions/{execution_id} endpoint. * **Parameters** - * **workflow_id** (*str*) – Id of the workflow that performs the execution + * **workflow_id** (*str*) – ID of the workflow that performs the execution - * **execution_id** (*str*) – Id of the execution to update + * **execution_id** (*str*) – ID of the execution to update - * **next_transition_id** – the next transition to transition into, to end the workflow-execution, + * **next_transition_id** – ID of the next transition to transition into; to end the workflow execution, (TTNote: Unclear why end workflow is mentioned here, unless it will end if the id is left blank?) use: las:transition:commons-failed @@ -2241,6 +2357,8 @@ use: las:transition:commons-failed +- - - + ### class las.Credentials(client_id: str, client_secret: str, api_key: str, auth_endpoint: str, api_endpoint: str) Bases: `object` diff --git a/reference/restapi/README.md b/reference/restapi/README.md index 8cc129e4..24991ecf 100644 --- a/reference/restapi/README.md +++ b/reference/restapi/README.md @@ -2,7 +2,10 @@ You can find the Open API specification file [here](https://raw.githubusercontent.com/LucidtechAI/las-docs/master/reference/restapi/oas.json) -## Changelog +(TTNote: Consider if an Installation and/or Getting Started section is needed here for consistency with the Python SDK and others in this section.) +(TTNote: Consider if a Rest API section is also needed under the "Quickstart" section for consistency.)
+ +## Change Log ### 2021-06-10 @@ -15,7 +18,7 @@ You can find the Open API specification file [here](https://raw.githubuserconten ### 2021-05-26 - Added DELETE /models/:id -- description in /models fieldConfig is no longer required +- Description in /models fieldConfig is no longer required ### 2021-05-19 @@ -65,4 +68,4 @@ providing callback and logout urls - Added DELETE /secrets/:id - Added DELETE /assets/:id -- Added paging to DELETE /documents. Supports deleting up to 1000 documents each API call. \ No newline at end of file +- Added paging to DELETE /documents. Supports deleting up to 1000 documents each API call. diff --git a/reference/restapi/latest/README.md b/reference/restapi/latest/README.md index e7be3084..71f5b927 100644 --- a/reference/restapi/latest/README.md +++ b/reference/restapi/latest/README.md @@ -1,7 +1,13 @@ -#### GET /appClients +(TTNote: Suggest an index on top for each module listed below for easier access to the content. Also consider descriptions for when to use each module.) + +(TTNote: Unclear why some sections had a second 'request body JSON Schema' listed. Consider adding descriptions if needed. The first few are noted inline.) + +(TTNote: Also consider additional separation between each module for clarity when reading, such as the separator lines that I've added.) +#### GET /appClients + | Header name | Header value | @@ -114,6 +120,7 @@ } ``` +- - - #### POST /appClients @@ -179,6 +186,9 @@ ##### Response body JSON Schema + +(TTNote: Suggest verbiage here to describe why this is different from above 'response body json schema'.) + ```json { "title": "appClient", @@ -255,6 +265,8 @@ } ``` +- - - + #### DELETE /appClients/{appClientId} @@ -353,6 +365,8 @@ } ``` +- - - + #### PATCH /appClients/{appClientId} @@ -396,6 +410,10 @@ ##### Response body JSON Schema + +(TTNote: Suggest verbiage here to describe why this is different from above 'response body json schema'.) 
+ + ```json { "title": "appClient", @@ -473,6 +491,8 @@ ``` +- - - + #### GET /assets @@ -547,6 +567,9 @@ ``` +- - - + + #### POST /assets @@ -593,6 +616,9 @@ ##### Response body JSON Schema + +(TTNote: Suggest verbiage here to describe why this is different from above 'response body json schema'.) + ```json { "title": "asset", @@ -626,6 +652,7 @@ } ``` +- - - #### DELETE /assets/{assetId} @@ -682,6 +709,8 @@ ``` +- - - + #### GET /assets/{assetId} @@ -737,6 +766,9 @@ ``` +- - - + + #### PATCH /assets/{assetId} @@ -783,6 +815,9 @@ ##### Response body JSON Schema + +(TTNote: Suggest verbiage here to describe why this is different from above 'response body json schema'.) + ```json { "title": "asset", @@ -816,6 +851,8 @@ } ``` +- - - + #### GET /batches @@ -914,6 +951,8 @@ ``` +- - - + #### POST /batches @@ -956,6 +995,9 @@ ##### Response body JSON Schema + +(TTNote: Suggest verbiage here to describe why this is different from above 'response body json schema'.) + ```json { "title": "batch", @@ -1012,6 +1054,7 @@ } ``` +- - - #### DELETE /batches/{batchId} @@ -1091,6 +1134,8 @@ ``` +- - - + #### PATCH /batches/{batchId} @@ -1133,6 +1178,9 @@ ##### Response body JSON Schema + +(TTNote: Suggest verbiage here to describe why this is different from above 'response body json schema'.) 
+ ```json { "title": "batch", @@ -1189,6 +1237,7 @@ } ``` +- - - #### DELETE /documents @@ -1317,6 +1366,8 @@ } ``` +- - - + #### GET /documents @@ -1445,6 +1496,8 @@ } ``` +- - - + #### POST /documents @@ -1603,6 +1656,8 @@ } ``` +- - - + #### GET /documents/{documentId} @@ -1696,6 +1751,8 @@ } ``` +- - - + #### PATCH /documents/{documentId} @@ -1834,6 +1891,8 @@ } ``` +- - - + #### GET /logs @@ -1971,6 +2030,8 @@ } ``` +- - - + #### GET /logs/{logId} @@ -2053,6 +2114,8 @@ } ``` +- - - + #### GET /models @@ -2221,6 +2284,8 @@ } ``` +- - - + #### POST /models @@ -2463,6 +2528,8 @@ ``` +- - - + #### DELETE /models/{modelId} @@ -2612,6 +2679,8 @@ ``` +- - - + #### GET /models/{modelId} @@ -2761,6 +2830,8 @@ ``` +- - - + #### PATCH /models/{modelId} @@ -3008,6 +3079,8 @@ ``` +- - - + #### GET /organizations/{organizationId} @@ -3183,6 +3256,8 @@ ``` +- - - + #### PATCH /organizations/{organizationId} @@ -3379,6 +3454,8 @@ ``` +- - - + #### GET /predictions @@ -3489,6 +3566,8 @@ ``` +- - - + #### POST /predictions @@ -3615,6 +3694,8 @@ ``` +- - - + #### GET /secrets @@ -3685,6 +3766,8 @@ ``` +- - - + #### POST /secrets @@ -3760,6 +3843,8 @@ ``` +- - - + #### DELETE /secrets/{secretId} @@ -3811,6 +3896,8 @@ ``` +- - - + #### PATCH /secrets/{secretId} @@ -3886,6 +3973,8 @@ ``` +- - - + #### GET /transitions @@ -4002,6 +4091,8 @@ ``` +- - - + #### POST /transitions @@ -4188,6 +4279,8 @@ ``` +- - - + #### DELETE /transitions/{transitionId} @@ -4274,6 +4367,8 @@ ``` +- - - + #### GET /transitions/{transitionId} @@ -4360,6 +4455,8 @@ ``` +- - - + #### PATCH /transitions/{transitionId} @@ -4499,6 +4596,8 @@ ``` +- - - + #### GET /transitions/{transitionId}/executions @@ -4640,6 +4739,8 @@ ``` +- - - + #### POST /transitions/{transitionId}/executions @@ -4736,6 +4837,8 @@ ``` +- - - + #### GET /transitions/{transitionId}/executions/{executionId} @@ -4826,6 +4929,8 @@ ``` +- - - + #### PATCH /transitions/{transitionId}/executions/{executionId} @@ -4975,6 +5080,8 @@ ``` +- - - + 
#### POST /transitions/{transitionId}/executions/{executionId}/heartbeats @@ -5006,6 +5113,8 @@ +- - - + #### GET /users @@ -5079,6 +5188,8 @@ ``` +- - - + #### POST /users @@ -5162,6 +5273,8 @@ ``` +- - - + #### DELETE /users/{userId} @@ -5216,6 +5329,8 @@ ``` +- - - + #### GET /users/{userId} @@ -5270,6 +5385,8 @@ ``` +- - - + #### PATCH /users/{userId} @@ -5345,6 +5462,8 @@ ``` +- - - + #### GET /workflows @@ -5463,6 +5582,8 @@ ``` +- - - + #### POST /workflows @@ -5649,6 +5770,8 @@ ``` +- - - + #### DELETE /workflows/{workflowId} @@ -5748,6 +5871,8 @@ ``` +- - - + #### GET /workflows/{workflowId} @@ -5847,6 +5972,8 @@ ``` +- - - + #### PATCH /workflows/{workflowId} @@ -6009,6 +6136,8 @@ ``` +- - - + #### GET /workflows/{workflowId}/executions @@ -6172,6 +6301,8 @@ ``` +- - - + #### POST /workflows/{workflowId}/executions @@ -6294,6 +6425,8 @@ ``` +- - - + #### DELETE /workflows/{workflowId}/executions/{executionId} @@ -6401,6 +6534,8 @@ ``` +- - - + #### GET /workflows/{workflowId}/executions/{executionId} @@ -6508,6 +6643,8 @@ ``` +- - - + #### PATCH /workflows/{workflowId}/executions/{executionId} diff --git a/tutorials/README.md b/tutorials/README.md index fcc7a015..d3050505 100644 --- a/tutorials/README.md +++ b/tutorials/README.md @@ -1,10 +1,16 @@ ## Tutorials -Welcome to Lucidtechs tutorials. +Welcome to Lucidtech's API tutorials. -If you are looking for a simple plug and play tutorial the best place to start is the -[simple-demo](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/README.md). +If you are looking for a simple plug and play tutorial, the best place to start is this +[simple demo](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/README.md). -If you need to make something a bit more customizable check out this +(TTNote: The link above references a github page, should it instead reference the gitbook version? 
Also the contents of this README are just header info, perhaps you mean to link to the folder instead?)
+
+
If you need to make something a bit more customizable, check out this
[tutorial](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/setup_predict_and_approve.md).
+(TTNote: Again, the above link references a github page, should it instead reference the gitbook version?)
+
+(TTNote: Consider adding a 3rd bullet here to explain when one would follow the below tutorials instead of using the 2 links above.)
diff --git a/tutorials/create_your_own_docker_transition.md b/tutorials/create_your_own_docker_transition.md
index dbac68ac..cf604bcc 100644
--- a/tutorials/create_your_own_docker_transition.md
+++ b/tutorials/create_your_own_docker_transition.md
@@ -1,20 +1,20 @@
-# Tutoral: Create your own docker transition
+# Tutorial: Create a docker transition

-In this tutorial you will learn what is required and recommended practice when
-creating an automated transition. An automated transition is just a docker image,
-and for this tutorial we will use python as our programming language.
-There are no restrictions on which languages that are allowed here, but you will
-see that it is practical to use one that has a LAS SDK, that is (Java, JavaScript, C#, Python).
+In this tutorial, you will learn the required steps and recommended practices when
+creating an automated transition, which is simply a docker image.
+For this tutorial, we will use *Python* as our programming language.
+There are no restrictions on which languages are allowed here, but you will
+see that it is practical to use one that has a LAS SDK, such as Java, JavaScript, C#, or Python.

-We will use the [make-predictions-image](https://github.com/LucidtechAI/las-docs/blob/master/docker-image-samples/make-predictions/main.py)
-to explain some of the key concepts.
For a simple python-based docker image you only need 3 files: - - `Dockerfile`: The instructions for how to make the image, - see dockers own [documentation](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) for more information - - `main.py`: This is where the code that does the work lives - - `requirements.txt`: Required packages that needs to be installed +We will use the [make predictions image](https://github.com/LucidtechAI/las-docs/blob/master/docker-image-samples/make-predictions/main.py) +to explain some of the key concepts. For a simple Python-based docker image, you only need 3 files: + - `Dockerfile`: These are the instructions for how to make the image. + Refer to Docker's [documentation](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) for more information. + - `main.py`: This is where the code resides which performs the work. + - `requirements.txt`: Here are the required packages that need to be installed. ## Dockerfile -For a simple process like this the Dockerfile can look something like this: +For a simple process, the Dockerfile can look something like this: ```dockerfile FROM python:3.8 # Use the base image of your own choice @@ -36,7 +36,8 @@ ENTRYPOINT ["python", "main.py"] ``` ## main.py -This is the module that does all the work, let us first give a brief description of the anatomy. +This is the module that performs the work. Let us first give a brief description of the anatomy. +(TTNote: Suggest additional verbiage here as this references a 'description of the anatomy' not just an outline.) ```python import ... # Remember to import the LAS SDK and other packages you need. @@ -53,15 +54,16 @@ if __name__ == '__main__': # and returning the response to the API. ``` -Our recommendation is that you re-use the last block of this file and just edit the handler to fit your needs. +Note: Our recommendation is that you reuse the last block of this file and edit the handler to fit your needs. 
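To make the handler skeleton above concrete, here is a minimal, hedged sketch. The field names (`documentId`, `MODEL_ID`) and the `create_prediction` call are illustrative assumptions rather than the exact LAS SDK API, and the fake client exists only so the sketch runs standalone.

```python
# Hypothetical handler following the skeleton above. Field names and the
# create_prediction() call are illustrative, not the exact LAS SDK API.

def handler(las_client, event, environ):
    """Predict on the document handed over from the previous transition."""
    document_id = event["documentId"]   # output of the previous transition
    model_id = environ["MODEL_ID"]      # e.g. set via `parameters` on the transition
    prediction = las_client.create_prediction(document_id, model_id)
    # Whatever is returned here becomes the input of the next transition
    return {"documentId": document_id, "predictions": prediction["predictions"]}


class FakeClient:
    """Stand-in for the SDK client so the sketch can run locally."""
    def create_prediction(self, document_id, model_id):
        return {"predictions": [
            {"label": "total_amount", "value": "100.00", "confidence": 0.98}]}


if __name__ == "__main__":
    out = handler(FakeClient(),
                  {"documentId": "las:document:abc"},
                  {"MODEL_ID": "las:model:xyz"})
    print(out["predictions"])
```

Keeping the handler a pure function of `(las_client, event, environ)` like this also makes it easy to unit test against a fake client before building the image.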
### Entry point -The last part of `main.py` contains a few essential steps; -- Get the input to the execution -- Run the user-spesific code in the handler -- Let the API know that you have completed successfully or failed. +The last part of `main.py` contains a few essential steps: +(TTNote: Suggest to revisit the sequence of these description, so they are described in the order they appear under main.py (i.e. the handler seems to come before this part) +- Collects the input for the execution +- Runs the user-specific code in the handler +- Lets the API know that the task completed successfully or has failed -The code below makes sure that all this happens safely. +The code below makes sure that all this happens as expected: ```python @@ -94,14 +96,15 @@ if __name__ == '__main__': ``` #### The handler -This is the part where you as a user is free to write whatever you need to automate your workflow. +This is the section where you as a user are free to write whatever you need to automate your workflow. Let us first take a look at the input: -- `las_client`: The client from the python-sdk for an easy access to LAS API. -- `event`: This is the output from the previous transition in the workflow. -If this is the first transition in the workflow then the input to the workflow will be the input to this transition. +- `las_client`: The client from the Python SDK allows for easy access to LAS API. +- `event`: The output from the previous transition in the workflow. +If this is the first transition in the workflow, then the input to the workflow will be the input to this transition. - `environ`: The environment consists of some pre-defined variables, such as the TRANSITION_ID and the EXECUTION_ID. -In addition to these you can also define variables by specifying them as `parameters` directly in your transition, -or through *secrets* if the information is sensitive. See the example at the end of this tutorial. 
+In addition to these, you can also define variables by specifying them as `parameters` directly in your transition, +or through *secrets* if the information is sensitive. See the example at the end of this tutorial for more details. (TTNote: Suggest +repeating the secrets example here for clarity) ```python def handler(las_client, event, environ): @@ -124,26 +127,29 @@ def handler(las_client, event, environ): ``` ## Create the transition -When you have all the building blocks that you need, we are ready to create the docker transition. -#### Building and pushing your image +When you have all the building blocks that you need, you are ready to create the docker transition. -First step is to build a docker image and push it to some repository + +#### Build and push the docker image + +1-The first step is to build a docker image and push it to an existing repository: ```commandline $ docker build . -t && docker push ``` {% hint style="info" %} It is recommended to place the docker image in a private repository, -if that is the case you need to store your credentials as a secret. +and in that case, you need to store your credentials as a *Secret*. ```commandline $ las secrets create username= password= --description 'docker credentials' ``` {% endhint %} -The next step is to create a json-file - let's call it `params.json` - -that contains the parameters you need to run the docker image. -The variables you define in `environment` and `environmentSecrets` -will end up in the `environ`-variable in the handler in the example above. +2-The next step is to create a json file, let's call it `params.json`, +that contains the parameters that you need to run the docker image. + +Note: The variables you define in `environment` and `environmentSecrets` +will end up in the `environ` variable in the handler in the example above. ```json { "imageUrl": "", @@ -157,11 +163,11 @@ will end up in the `environ`-variable in the handler in the example above. 
}
```
{% hint style="info" %}
-The secretId field is only needed if you are using a private image.
+The `secretId` field is only needed if you are using a private image.
{% endhint %}

-Now you are ready to create the automatic transition
+3-Now you are ready to create the automatic transition:
```commandline
las transitions create docker -p params.json --name MakePredictions
```
diff --git a/tutorials/data_cleaning.md b/tutorials/data_cleaning.md
index 27251ca6..4726e352 100644
--- a/tutorials/data_cleaning.md
+++ b/tutorials/data_cleaning.md
@@ -1,37 +1,43 @@
-# Tutoral: Data cleaning - setup using LAS CLI
+# Tutorial: Set up a data cleaning workflow using the LAS CLI

-In this tutorial you will learn how to setup a simple workflow that
-allows you to evaluate the models performance and at the same time
-clean up poorly labeled data.
-Check out the complete folder with example values to get started
+(TTNote: Again, consider whether 'using the LAS CLI' is needed here to make the title shorter.)
+
+In this tutorial, you will learn how to set up a simple workflow that
+allows you to evaluate the model's performance while cleaning up poorly labeled data.
+Check out the completed folder with examples to get started
[here](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/data-cleaning/backend/src/).
+(TTNote: This link references a github page, consider whether it should be gitbook.)
The workflow in this tutorial will consist of four steps: * automatic prediction * automatic comparison of the existing ground truth with the prediction to filter out poorly labeled data -* manual verification and correction of ground truth (this will only be performed when necessary) +* manual verification and correction of ground truth (which will only be performed when necessary) * automatic feedback of the corrected ground truth back to the API ![Workflow](../.gitbook/assets/data-cleaning-workflow.png) + ## Prerequisites -* Download the [lucidtech CLI](https://github.com/LucidtechAI/las-cli) +* Download the [Lucidtech CLI](https://github.com/LucidtechAI/las-cli) +(TTNote: Note this link references github folder, unclear if it should go to gitbook or a github readme instead.) * Create a remote component by following [this tutorial](setup_approve_view.md) or just use [this standard remote component](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/jsRemoteComponent.js) +(TTNote: The tutorial link above references a tutorial later in the sequence. Consider if the approval tutorial section should be re-sequenced above this one.) +## Manual inspection and correction (manual transition) +To create a manual step, you first need a remote component that will serve +as a user interface. -## Manual inspect and correct (manual transition) -To create a manual step you first need a remote component that will serve -as a user interface. If you are using [this standard remote component](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/jsRemoteComponent.js) -you can also configure the fields to show and manipulate by adding an asset called `fieldConfig`. 
+Note: If you use [this standard remote component](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/jsRemoteComponent.js), +you can also configure which fields to show and manipulate by adding an asset called `fieldConfig`. [Here](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/fieldConfig.json) -is an example of a field config for a typical invoice. +is an example of a fieldconfig json file for a typical invoice. #### Create the remote component *asset* -When you have created your javascript remote component, -lets call it `remote_component.js` you are ready to create an asset. +After creating the javascript remote component +(referenced below as `remote_component.js`), you are ready to create an asset: ```commandline $ las assets create remote_component.js ``` @@ -43,7 +49,7 @@ This should give the following output: "description": null } ``` -You now have an assetId that we will use to refer to this specific asset in the future. +You now have an `assetId` that you can use to refer to this specific asset in the future. Note that you can also add a name and a description to help you identify the asset. @@ -57,17 +63,20 @@ Create a json file, let's call it `params.json` with the following structure: } } ``` + +Note: The `las:asset:` above should be replaced with the `assetId` that you received in the previous step. + {% hint style="info" %} `jsRemoteComponent` and `fieldConfig` are used to find the assets, -so they have to be named like this if you want to use them. +so they should be named as shown here. {% endhint %} -Where `las:asset:` is replaced with the `assetId` you got in the previous step. 
-Now you are ready to create the manual step +Now you are ready to create the manual step: ```commandline las transitions create manual -p params.json ``` -This should give the following output + +This should give the following output: ```commandline { "transitionId": "las:transition:", @@ -87,32 +96,38 @@ This should give the following output } } ``` -As you can see from the output the transition can also accept name and description arguments, -that is common for most resources in LAS. -In addition we recommend to provide input and output [json-schemas](https://json-schema.org/understanding-json-schema/) -that can help you catch bad input immediately instead of triggering bugs at a later point in the workflow. +As you can see from the output, the transition can also accept name and description arguments. +This is common for most resources in LAS. + +In addition, we recommend providing input and output [json schemas](https://json-schema.org/understanding-json-schema/) +that can help you catch invalid input immediately instead of triggering bugs at a later point in the workflow. Use `las transitions update --help` for more information on how to update your transitions. ## Automatic transitions (docker transitions) -An automatic step is made by creating a docker image that will perform a task without any user involved. -Check our [sample images](https://github.com/LucidtechAI/las-docs/tree/master/docker-image-samples) and +An automatic step is made by creating a docker image that will perform a task without any user involvement. +Check our [sample images](https://github.com/LucidtechAI/las-docs/tree/master/docker-image-samples) +(TTNote: This link references a github page, consider whether it should be gitbook.) +and [tutorial](create_your_own_docker_transition.md) +(TTNote: This link references a tutorial later in the sequence. Consider if the docker tutorial should be re-sequenced above this one.) for inspiration and best practices. 
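At its core, such a docker image runs a small loop: fetch the execution's input, run your code, and report success or failure back to the API. A rough sketch of that pattern, where the client method names (`execute_transition`, `update_transition_execution`) are assumed stand-ins for the real SDK calls:

```python
# Rough sketch of the entry-point pattern an automatic (docker) transition
# follows. The client method names are illustrative stand-ins.

def run_once(las_client, transition_id, handler, environ):
    execution = las_client.execute_transition(transition_id)       # 1. collect input
    try:
        output = handler(las_client, execution["input"], environ)  # 2. run user code
        status, result = "succeeded", {"output": output}
    except Exception as exc:
        status, result = "failed", {"error": {"message": str(exc)}}
    # 3. report the outcome back to the API
    las_client.update_transition_execution(
        transition_id, execution["executionId"], status, **result)
    return status


class FakeClient:
    """Local stand-in so the sketch runs without the real API."""
    def execute_transition(self, transition_id):
        return {"executionId": "las:transition-execution:1",
                "input": {"documentId": "las:document:abc"}}

    def update_transition_execution(self, transition_id, execution_id, status, **result):
        self.reported = (status, result)


if __name__ == "__main__":
    client = FakeClient()
    print(run_once(client, "las:transition:t1",
                   lambda c, e, env: {"ok": True}, {}))  # → succeeded
```

Catching every exception and reporting `failed` (instead of letting the process crash) is what keeps the workflow moving even when a single execution goes wrong.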
-The first step is to build a docker image and push it to some repository
+1-The first step is to build a docker image and push it to an existing repository:
```commandline
$ docker build . -t && docker push ```
{% hint style="info" %}
-*It is recommended to place the docker image in a private repository,
-if that is the case you need to store your credentials as a secret.*
+It is recommended to place the docker image in a private repository,
+and in that case, you need to store your credentials as a *Secret*.
```commandline
$ las secrets create username= password= --description 'docker credentials'
```
{% endhint %}
-The next step is to create a json-file that contains the parameters you need to run the docker image.
+
+
+2-The next step is to create a json file that contains the parameters that you need to run the docker image:

```json
{
@@ -121,32 +136,35 @@ The next step is to create a json-file that contains the parameters you need to
}
```
{% hint style="info" %}
-The secretId field is only needed if you are using a private image.
+The `secretId` field is only needed if you are using a private image.
{% endhint %}

-Now we are ready to create the transition
+3-Now, you are ready to create the transition:
```commandline
las transitions create docker params.json
```

-### Configuration of the *compare* transition
+### Configure the *compare* transition
As you can see in the main function of the
-[compare docker image](https://github.com/LucidtechAI/las-docs/tree/master/docker-image-samples/compare-prediction-and-ground-truth)
+[compare docker image](https://github.com/LucidtechAI/las-docs/tree/master/docker-image-samples/compare-prediction-and-ground-truth),
there are two environment variables that can be adjusted according to your needs.
-Check out [this](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/data-cleaning/backend/src/Compare)
-folder to see an example of files that can be used to configure this transition.
+Check out [this folder](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/data-cleaning/backend/src/Compare)
+to see an example of files that can be used to configure this transition.
+(TTNote: Note the links above reference a github folder, unclear if they should go to gitbook instead.)

#### CONFIDENCE_THRESHOLD
This variable sets the confidence limit that decides when to consider the predictions as relevant.
CONFIDENCE_THRESHOLD = 0.9 would mean that only those documents that were predicted with a confidence
higher than 0.9 AND were different from the ground truth will be subject for manual inspection.
-The default value is 0.0, which means that every prediction different from the ground truth will be subject for inspection.
+The default value is 0.0, which means that every prediction that is different from the ground truth will be subject to inspection.

#### GROUND_TRUTH_CONFIDENCE
-If you want to inspect every sample manually you can set this value to less than 1.0.
+If you want to inspect every sample manually, you can set this to a value less than 1.0.
+
+(TTNote: Consider if these variables should be mentioned or referenced in earlier sections under 'documents/ground truth'.)

-To adjust these variables you can use the CLI and update your transition:
+To adjust these variables, you can use the CLI and update your transition:
```json
environment.json:
@@ -162,19 +180,19 @@
las transitions update las:transition: --environment-path environment.json
```
{% hint style="info" %}
-When you update the environment you need to specify all variables, not only those you want to change.
+When you update the environment, you need to specify **all** variables, not only those you want to change.
{% endhint %}

## Creating the workflow

-Now that we have created the two transitions we are ready to put them
-together in a single workflow.
The workflow definition must be provided using +Now that we have created the transitions, we are ready to put them +together into a single workflow. The workflow definition must be provided using [Amazon States Language](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html). {% hint style="info" %} The Resource name of each state must be a transition ID. {% endhint %} {% hint style="info" %} -The only allowed *types* of States are **Task** and **Choice**. +The only allowed *types* of states are **Task** and **Choice**. {% endhint %} @@ -224,13 +242,13 @@ The only allowed *types* of States are **Task** and **Choice**. } ``` -store the file and use it as input for creating the workflow +Save the file and use it as input for creating the workflow: ```commandline $ las workflows create workflow.json --name 'Data Cleaning' ``` -###Execute workflow -You can now define your `input.json` and execute your workflow with a simple call from the CLI +### Execute the workflow +You can now define your `input.json` and execute your workflow with a simple call from the CLI: ```json { "documentId": "las:document:", @@ -241,7 +259,7 @@ You can now define your `input.json` and execute your workflow with a simple cal ```commandline $ las workflows execute las:workflow: input.json ``` -You can also use [this](data-cleaning/start_execution.py) script for execution, -or use or standard email-integration that will allow you to send in your documents by email. +You can also use [this script](data-cleaning/start_execution.py) for execution, +or use our standard email integration (TTNote: Suggest a link on how to setup email integration) that will allow you to submit your documents by email. 
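The compare step's filtering rule described earlier (a prediction is routed to manual inspection only when its confidence exceeds CONFIDENCE_THRESHOLD and it disagrees with the existing ground truth) can be sketched as follows. Field names are hypothetical; see the compare docker image sample for the real code.

```python
# Illustrative sketch of the compare step's filtering rule. Field names are
# hypothetical; the real logic lives in the compare docker image sample.

CONFIDENCE_THRESHOLD = 0.9  # default is 0.0: every disagreement is inspected

def needs_manual_inspection(prediction, ground_truth, threshold=CONFIDENCE_THRESHOLD):
    """Inspect only confident predictions that differ from the ground truth."""
    confident = prediction["confidence"] > threshold
    differs = prediction["value"] != ground_truth["value"]
    return confident and differs

print(needs_manual_inspection({"value": "100.00", "confidence": 0.95},
                              {"value": "10.00"}))   # → True (confident and different)
print(needs_manual_inspection({"value": "100.00", "confidence": 0.50},
                              {"value": "10.00"}))   # → False (not confident enough)
```

With the default threshold of 0.0, the second call would also return True, which is why every disagreement is sent to inspection out of the box.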
diff --git a/tutorials/setup_approve_view.md b/tutorials/setup_approve_view.md index 7e9a832e..72659f00 100644 --- a/tutorials/setup_approve_view.md +++ b/tutorials/setup_approve_view.md @@ -1,21 +1,22 @@ -# Tutorial for creating a custom approve view in Flyt +# Tutorial: Create a custom approval view in Flyt -With access to Flyt, you have the option to create your own React component to be loaded for each queue in its approval view. +With access to *Flyt*, you have the option to create your own React component to be loaded for each queue in its approval view. The entire view is yours to do with as you see fit. -For now we'll create a simple view with half the page having a PDF/image preview, and the other half being a form -for the information we care about from our documents, along with some action buttons to either approve, skip, or reject a task. +For now, we'll create a simple view with a PDF/image preview on half of the page, and a form +for the relevant document information on the other half. In addition, we'll include action buttons to either approve, skip, or reject a task. ## Build setup -Before we start creating the component, lets take care of the build setup and everything that is necessary to actually get a working remote component built. +Before we start creating the component, let's complete the build setup and everything that is necessary to get a working remote component built. -To have a functioning remote component, you need to create a CommonJS bundle because of the usage through [remote-component](https://github.com/Paciolan/remote-component). -You are free to use whatever bundler or build tool you feel like, but in this example we'll make use of Webpack just like in the examples in the above link. +To have a functioning remote component, you need to create a `CommonJS` bundle because of the usage through [remote component](https://github.com/Paciolan/remote-component). 
+(TTNote: Link above is to github, consider if a link to gitbook is needed instead.)
+You are free to use the bundler or build tool of your choice, but in this example, we'll make use of *Webpack*, as in the examples in the above link.

### `src/index.js`

-Create `src/index.js` and expose your component as the `default` export.
+Create `src/index.js` and expose your component as the `default` export:

```javascript
import React from "react";
@@ -31,8 +32,8 @@ export default RemoteComponent;

The `libraryTarget` of the `RemoteComponent` must be set to `commonjs`.

-Flyt provides you with a React global already, so it is very much recommended that you add it as an external library in your bundle.
-If you are using React with hooks, you could see some buggy behavior if you do not do this step, as there will be two different React runtimes.
+Flyt already provides you with a React global (TTNote: Unclear if 'global' is the noun here or if a word is missing afterwards - such as 'React global variable' or 'React global library', etc), so it is highly recommended that you add it as an external library in your bundle.
+If you are using React with hooks, you may see some unexpected behavior if you do not include this step, as there will be two different React runtimes.

```javascript
@@ -47,12 +48,12 @@ module.exports = {
};
```

-Running a Webpack build will now create a CommonJS bundle file that can be loaded successfully! With the build setup done, we can start creating the actual component itself.
+Running a Webpack build will now create a `CommonJS` bundle file that can be loaded successfully. With the build setup done, we can start creating the components.

-## Creating our component
+## Creating the component

### Props
-The remote component will receive a few props we can make use of. Here are a few Typescript types from the API as well to make things a bit clearer:
+The remote component will receive a few props we can utilize.
For clarity, here are a few Typescript types from the API:

```ts
/** All props received from the Flyt app */
@@ -143,4 +144,4 @@ Here's what we're making:

![Screenshot](../.gitbook/assets/remote.png)

-See [the source](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/frontend/src/) for the code itself.
+View [the source code here](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/frontend/src/).
diff --git a/tutorials/setup_predict_and_approve.md b/tutorials/setup_predict_and_approve.md
index 724def9c..0beed552 100644
--- a/tutorials/setup_predict_and_approve.md
+++ b/tutorials/setup_predict_and_approve.md
@@ -1,8 +1,10 @@
-# Tutoral: Predict and approve - setup using LAS CLI
+# Tutorial: Set up a workflow to predict and approve using the LAS CLI

-In this tutorial you will learn how to setup a simple workflow that
-allows you to handle any type of documents in a safe and semi-automatic way.
+(TTNote: Consider whether 'using the LAS CLI' is needed here to make the title shorter. If there is no non-LAS CLI version of this tutorial, perhaps it can be removed. Alternately, it can be mentioned in a line below as a note: 'Note: This tutorial uses the LAS CLI.')
+
+In this tutorial, you will learn how to set up a simple workflow that
+allows you to handle many types of documents in a safe and semi-automatic way.

The workflow in this tutorial will consist of two steps:

@@ -10,26 +12,36 @@ The workflow in this tutorial will consist of two steps:
* manual verification

![Workflow](../.gitbook/assets/simple-workflow.png)
+
## Prerequisites

* Download the [lucidtech CLI](https://github.com/LucidtechAI/las-cli)
+(TTNote: Note this link references a github folder, unclear if it should go to gitbook or a github readme instead.)
+ * Create a remote component by following [this tutorial](setup_approve_view.md) or just use -[this standard remote component](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/jsRemoteComponent.js) +[this standard remote component](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/jsRemoteComponent.js) javascript file. +(TTNote: The tutorial link above references a tutorial later in the sequence. Consider if the approval tutorial section should be re-sequenced above this one.) ## Manual approval (manual transition) -To create a manual step you first need a remote component that will serve -as a user interface. If you are using [this standard remote component](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/jsRemoteComponent.js) +To create a manual step, you first need a remote component that will serve +as a user interface. + +Note: If you use [this standard remote component](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/jsRemoteComponent.js), you can also configure the fields to show and manipulate by adding an asset called `fieldConfig`. [Here](https://github.com/LucidtechAI/las-docs/tree/master/tutorials/simple-demo/backend/src/Invoice/assets/fieldConfig.json) -is an example of a field config for a typical invoice. +is an example of a fieldconfig json file for a typical invoice. + +(TTNote: Phrases 'manual approval', 'manual transition', 'manual step' are all used seemingly interchangeably in this section. Consider whether a consistent phrase should be used throughout.) #### Create the remote component *asset* -When you have created your javascript remote component, -lets call it `remote_component.js` you are ready to create an asset. 
+After creating the JavaScript remote component
+(referenced below as `remote_component.js`), you are ready to create an asset:
+
```commandline
$ las assets create remote_component.js
```
+
This should give the following output:
```commandline
{
@@ -38,12 +50,12 @@ This should give the following output:
"description": null
}
```
-You now have an assetId that we will use to refer to this specific asset in the future.
+You now have an `assetId` that you can use to refer to this specific asset in the future.
Note that you can also add a name and a description to help you identify the asset.
#### Create the *transition*
-Create a json file, let's call it `params.json` with the following structure:
+Create a JSON file, let's call it `params.json`, with the following structure:
```json
{
"assets": {
@@ -52,17 +64,18 @@ Create a json file, let's call it `params.json` with the following structure:
}
}
```
+Note: The `las:asset:` above should be replaced with the `assetId` that you received in the previous step.
+
{% hint style="info" %}
`jsRemoteComponent` and `fieldConfig` are used to find the assets,
-so they have to be named like this if you want to use them.
+so they should be named as shown here.
{% endhint %}
-Where `las:asset:` is replaced with the `assetId` you got in the previous step.
-Now you are ready to create the manual step
+Now you are ready to create the manual step:
```commandline
las transitions create manual -p params.json
```
-This should give the following output
+This should give the following output:
```commandline
{
"transitionId": "las:transition:",
@@ -82,32 +95,41 @@ This should give the following output
}
}
```
-As you can see from the output the transition can also accept name and description arguments,
-that is common for most resources in LAS.
-In addition we recommend to provide input and output [json-schemas](https://json-schema.org/understanding-json-schema/)
-that can help you catch bad input immediately instead of triggering bugs at a later point in the workflow.
+As you can see from the output, the transition can also accept name and description arguments.
+This is common for most resources in LAS.
+
+In addition, we recommend providing input and output [JSON schemas](https://json-schema.org/understanding-json-schema/)
+that can help you catch invalid input immediately instead of triggering bugs at a later point in the workflow.
Use `las transitions update --help` for more information on how to update your transitions.
## Automatic prediction (docker transition)
-An automatic step is made by creating a docker image that will perform a task without any user involved.
-Check our [sample images](https://github.com/LucidtechAI/las-docs/tree/master/docker-image-samples) and
+An automatic step is made by creating a Docker image that will perform a task without any user involvement.
+Check our [sample images](https://github.com/LucidtechAI/las-docs/tree/master/docker-image-samples)
+(TTNote: This link references a github page, consider whether it should be gitbook.)
+and [tutorial](create_your_own_docker_transition.md)
+(TTNote: This link references a tutorial later in the sequence. Consider if the docker transition tutorial should be re-sequenced above this one.)
for inspiration and best practices.
-The first step is to build a docker image and push it to some repository
+(TTNote: Phrases 'automatic prediction', 'docker transition', 'automatic step' are all used seemingly interchangeably in this section. Consider whether a consistent phrase should be used throughout.)
+
+
+1. The first step is to build a Docker image and push it to an existing repository:
```commandline
$ docker build .
-t && docker push
```
{% hint style="info" %}
-*It is recommended to place the docker image in a private repository,
-if that is the case you need to store your credentials as a secret.*
+It is recommended to place the Docker image in a private repository,
+and in that case, you need to store your credentials as a *Secret*.
```commandline
$ las secrets create username= password= --description 'docker credentials'
```
{% endhint %}
-The next step is to create a json-file that contains the parameters you need to run the docker image.
+
+
+2. The next step is to create a JSON file that contains the parameters that you need to run the Docker image:
```json
{
@@ -116,23 +138,23 @@ The next step is to create a json file that contains the parameters you need to
}
```
{% hint style="info" %}
-The secretId field is only needed if you are using a private image.
+The `secretId` field is only needed if you are using a private image.
{% endhint %}
-Now we are ready to create the transition
+3. Now you are ready to create the transition:
```commandline
las transitions create docker params.json
```
## Creating the workflow
-Now that we have created the two transitions we are ready to put them
-together in a single workflow. The workflow definition must be provided using
+Now that we have created the two transitions, we are ready to put them
+together into a single workflow. The workflow definition must be provided using
[Amazon States Language](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html).
{% hint style="info" %}
The Resource name of each state must be a transition ID.
{% endhint %}
{% hint style="info" %}
-The only allowed *types* of States are **Task** and **Choice**.
+The only allowed *types* of states are **Task** and **Choice**.
{% endhint %}
@@ -157,13 +179,13 @@
}
```
-store the file and use it as input for creating the workflow
+Save the file and use it as input for creating the workflow:
```commandline
$ las workflows create workflow.json --name 'Predict and Approve'
```
-###Execute workflow
-You can now define your `input.json` and execute your workflow with a simple call from the CLI
+### Execute the workflow
+You can now define your `input.json` and execute your workflow with a simple call from the CLI:
```json
{
"documentId": "las:document:",
@@ -173,7 +195,8 @@ You can now define your `input.json` and execute your workflow with a simple cal
```commandline
$ las workflows execute las:workflow: input.json
```
-You can also use [this](simple-demo/start_execution.py) script for execution,
-or use or standard email-integration that will allow you to send in your documents by email.
+You can also use [this script](simple-demo/start_execution.py) for execution,
+or use our standard email integration (TTNote: Suggest a link on how to set up email integration)
+that will allow you to submit your documents by email.
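
(TTNote: The body of `workflow.json` is elided in this patch excerpt. As a reference for reviewers, a minimal two-state definition in Amazon States Language might look like the sketch below — the state names are illustrative, and the `Resource` values are placeholders for the transition IDs returned by `las transitions create`, not IDs from this tutorial.)

```json
{
  "Comment": "Sketch: run the automatic prediction, then send the result to manual approval",
  "StartAt": "AutomaticPrediction",
  "States": {
    "AutomaticPrediction": {
      "Type": "Task",
      "Resource": "las:transition:<docker-transition-id>",
      "Next": "ManualApproval"
    },
    "ManualApproval": {
      "Type": "Task",
      "Resource": "las:transition:<manual-transition-id>",
      "End": true
    }
  }
}
```

Per the hints above, each state's `Resource` must be a transition ID and only **Task** and **Choice** state types are allowed.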