diff --git a/python/OpenVINO_EP/azureml/README.md b/python/OpenVINO_EP/azureml/README.md
new file mode 100644
index 000000000..0f9394fae
--- /dev/null
+++ b/python/OpenVINO_EP/azureml/README.md
@@ -0,0 +1,32 @@
+# Introduction
+These notebooks demonstrate quantization aware training using the Neural Networks Compression Framework, and inference of the quantized model using the OpenVINO Execution Provider through the Optimum library.
+We use a BERT model (bert-large-uncased-whole-word-masking-finetuned-squad) with the Stanford Question Answering Dataset (SQuAD) from the HuggingFace hub (transformers library) for a Question-Answering use case.
+
+The question answering scenario takes a question and a piece of text called a context, and produces an answer to the question extracted from the context. The questions & contexts are tokenized, encoded, and fed as inputs into the transformer model. The answer is extracted from the output of the model as the most likely start and end tokens in the context, which are then mapped back into words.
+
+## Prerequisites
+
+To run on AzureML, you need:
+* an Azure subscription
+* an Azure Machine Learning Workspace (see this notebook for creation of the workspace if you do not already have one: [AzureML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/56e0ebc5acb9614fac51d8b98ede5acee8003820/configuration.ipynb))
+* the Azure Machine Learning SDK
+* the Azure CLI and the Azure Machine Learning CLI extension (> version 2.2.2)
+
+
+## Quantization Aware Training using Neural Networks Compression Framework
+- The training notebook demonstrates quantization aware training using the Neural Networks Compression Framework [NNCF](https://github.com/AlexKoff88/nncf_pytorch/tree/ak/qdq_per_channel) through
+  [Optimum Intel](https://github.com/huggingface/optimum-intel/tree/v1.5.2).
+- The output of training is an INT8-optimized model, which will be stored on local or cloud storage.
+- In order to run Quantization Aware Training we need to provide a few arguments as input to the training script, such as the model name and dataset name; a typical invocation is sketched below.
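+For illustration, a hypothetical command line assembled from the arguments used later in the training notebook might look like the following (the flag values are placeholders you would adjust):
+
+```bash
+python run_qa.py \
+  --model_name_or_path bert-large-uncased-whole-word-masking-finetuned-squad \
+  --dataset_name squad \
+  --do_train True \
+  --do_eval True \
+  --max_seq_length 256 \
+  --per_device_train_batch_size 3 \
+  --learning_rate 3e-5 \
+  --num_train_epochs 2 \
+  --output_dir ./outputs/bert_finetuned_model
+```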
+For more details please refer to the training notebook.
+
+## Inference on quantized model using OpenVINO Execution Provider through Optimum ONNX Runtime
+- In the inference notebook we use the finetuned model for inference.
+- There are options to run inference on a single input or on multiple inputs (an example invocation is sketched below):
+
+  - In the case of multiple inputs, we need to provide an input csv file. A sample csv file is available [here](https://github.com/intel/nlp-training-and-inference-openvino/blob/v1.1/question-answering-bert-qat/onnxovep_optimum_inference/data/input.csv).
+    After running inference on the input csv, the corresponding outputs are written to output.csv.
+  - In the case of a single input, the variables context and question are used.
+    These variables are passed as arguments to the inference script. If they are empty strings, the default behaviour is to read the input csv file and run inference.
+
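+For illustration, a hypothetical inference invocation using the flags from the inference notebook (the paths are placeholders; empty context/question strings select the csv path):
+
+```bash
+python bert_inference_optimum_ort_ovep.py \
+  --modelname bert-large-uncased-whole-word-masking-finetuned-squad \
+  --modelpath ./models \
+  --provider OpenVINOExecutionProvider \
+  --inputpath ./data/input.csv \
+  --outputpath ./data/output.csv \
+  --context '""' \
+  --question '""'
+```
+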
+For more details please refer to the inference notebook.
diff --git a/python/OpenVINO_EP/azureml/bert-squad-qa-inference-using-int8-model.ipynb b/python/OpenVINO_EP/azureml/bert-squad-qa-inference-using-int8-model.ipynb
new file mode 100644
index 000000000..531c95c1a
--- /dev/null
+++ b/python/OpenVINO_EP/azureml/bert-squad-qa-inference-using-int8-model.ipynb
@@ -0,0 +1,743 @@
+{
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "Copyright (C) 2022-2023, Intel Corporation\n",
+    "\n",
+    "SPDX-License-Identifier: MIT\n",
+    "\n",
+    "# ONNX Runtime Inference using AzureML\n",
+    "\n",
+    "In this sample we run inference for a Question-Answering use case with a quantized BERT model, using the OpenVINO Execution Provider through Optimum ONNX Runtime. The quantized BERT model is generated through Quantization Aware Training.\n",
+    "\n",
+    "The question answering scenario takes a question and a piece of text called a context, and produces an answer to the question extracted from the context. The questions & contexts are tokenized, encoded, and fed as inputs into the transformer model. The answer is extracted from the output of the model as the most likely start and end tokens in the context, which are then mapped back into words.\n",
+    "\n",
+    "# Prerequisites\n",
+    "To run on AzureML, you will need:\n",
+    "\n",
+    "- an Azure subscription\n",
+    "- an Azure Machine Learning Workspace (see this notebook for creation of the workspace if you do not already have one: AzureML configuration notebook)\n",
+    "- the Azure Machine Learning SDK\n",
+    "- the Azure CLI and the Azure Machine Learning CLI extension (> version 2.2.2)\n",
+    "\n",
+    "The following resources can be of help:\n",
+    "\n",
+    "- Understand the [architecture and terms](https://learn.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-v2?tabs=cli) introduced by Azure Machine Learning\n",
+    "- The [Azure Portal](https://portal.azure.com/#home) allows you to track the status of your deployments.\n",
+    "\n",
+    "This notebook was made with reference to the examples at the links below:\n",
+    "- [Train with custom image](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-with-custom-image)\n",
+    "- [Quickstart create resources](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Getting Required Files\n",
+    "We need to get the required files from the repository [here](https://github.com/intel/nlp-training-and-inference-openvino/tree/v1.1)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {
+    "gather": {
+     "logged": 1670306213883
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "SCRIPT_DIR = \"bert_inference_onnxruntime/\"\n",
+    "if not os.path.exists(SCRIPT_DIR):\n",
+    "    os.mkdir(SCRIPT_DIR)\n",
+    "\n",
+    "if not os.path.exists(SCRIPT_DIR+\"bert_inference_optimum_ort_ovep.py\"):\n",
+    "    !cd bert_inference_onnxruntime/ && wget https://raw.githubusercontent.com/intel/nlp-training-and-inference-openvino/v1.1/question-answering-bert-qat/onnxovep_optimum_inference/bert_inference_optimum_ort_ovep.py\n",
+    "if not os.path.exists(SCRIPT_DIR+\"input.csv\"):\n",
+    "    !cd bert_inference_onnxruntime/ && wget https://raw.githubusercontent.com/intel/nlp-training-and-inference-openvino/v1.1/question-answering-bert-qat/onnxovep_optimum_inference/data/input.csv\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Import necessary Libraries\n",
+    "First we need to import the necessary libraries to perform the desired task."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670306214257
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from pathlib import Path\n",
+    "from azureml.core import Workspace\n",
+    "from azureml.core import ScriptRunConfig, Experiment, Environment"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Initialize Workspace\n",
+    "\n",
+    "The Azure Machine Learning workspace is the top-level resource for the service. It gives you a centralized place to work with all the artifacts that you create. In the Python SDK, you can access the workspace artifacts by creating a Workspace object."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {
+    "gather": {
+     "logged": 1670306215426
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from azureml.core import Workspace\n",
+    "ws = Workspace.from_config()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Create or attach a compute target\n",
+    "\n",
+    "A compute target is a machine where we intend to run our code. It can be a compute instance or a compute cluster. \n",
+    "Here we are using a compute cluster. If the cluster already exists, it is attached to our workspace; otherwise a cluster is created according to the specification below and then attached."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670306215797
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from azureml.core.compute import ComputeTarget, AmlCompute\n",
+    "from azureml.core.compute_target import ComputeTargetException\n",
+    "\n",
+    "# Choose a name for your cluster.\n",
+    "cluster_name = \"cpu-clusters4\"\n",
+    "\n",
+    "try:\n",
+    "    compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
+    "    print('Found existing compute target.')\n",
+    "except ComputeTargetException:\n",
+    "    print('Creating a new compute target...')\n",
+    "    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D16DS_V4',\n",
+    "                                                           max_nodes=4)\n",
+    "    # Create the cluster.\n",
+    "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
+    "\n",
+    "    compute_target.wait_for_completion(show_output=True)\n",
+    "\n",
+    "# Use get_status() to get a detailed status for the current AmlCompute.\n",
+    "print(compute_target.get_status().serialize())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Getting the Paths Right\n",
+    "All scripts & files present in the `script_dir` folder are uploaded to the compute target, data stores are mounted or copied, and the script is executed. \n",
+    "Outputs from stdout and the ./logs folder are streamed to the run history and can be used to monitor the run. For further details please refer [here](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-pytorch#what-happens-during-run-execution)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670306216070
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "script_dir = \"bert_inference_onnxruntime/\"\n",
+    "script_name = \"bert_inference_optimum_ort_ovep.py\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Azure ML Environment Declarations\n",
+    "\n",
+    "Assigning a name to our environment for easier tracking and monitoring."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670306216327
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "environment_name = \"int8_inf-example\"\n",
+    "experiment_name = \"int8_inf-test\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Environment Definition\n",
+    "\n",
+    "The environment definition allows us to define a custom Docker environment with all the required dependencies, making sure our script runs as expected."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670306216554
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# Copyright (C) 2022-2023 Intel Corporation\n",
+    "# SPDX-License-Identifier: MIT\n",
+    "# Specify Docker steps as a string.\n",
+    "dockerfile = r\"\"\"\n",
+    "FROM openvino/ubuntu20_runtime:2022.2.0\n",
+    "\n",
+    "USER root\n",
+    "\n",
+    "RUN apt-get update && apt-get install -y \\\n",
+    "    python3.8 \\\n",
+    "    python3.8-venv; \\\n",
+    "    rm -rf /var/lib/apt/lists/*;\n",
+    "\n",
+    "RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.8 70; \\\n",
+    "    update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 70;\n",
+    "\n",
+    "RUN apt-get update\n",
+    "RUN apt-get install -y cifs-utils\n",
+    "RUN python3 -m pip install --upgrade pip\n",
+    "RUN python3 -m pip install --no-cache-dir onnxruntime-openvino==1.13.1 pandas==1.5.2\n",
+    "RUN python3 -m pip install --no-cache-dir optimum==1.5.1\n",
+    "\n",
+    "USER openvino\n",
+    "\n",
+    "\"\"\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Creating Environment"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670306216906
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "env = Environment(environment_name)\n",
+    "env.docker.base_image = None\n",
+    "env.docker.base_dockerfile = dockerfile\n",
+    "env.python.user_managed_dependencies = True"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Mounting Azure Storage for Easier Access\n",
+    "- We need access to the corresponding keys for our storage.\n",
+    "- The keys can be accessed from [here](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal#view-account-access-keys).\n",
+    "- We mount the storage in the container for easy access to large files.\n",
+    "\n",
+    "## Creating job config for running the model\n",
+    "\n",
+    "\n",
+    "Defining the set of commands that will be used to perform our desired task:\n",
+    "  1. Exporting some environment variables which are accessed for execution configurations during inference. \n",
+    "  2. Running the inference using OpenVINOExecutionProvider"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Azure Credentials\n",
+    "Here we declare the variables that will be used to connect to our Azure account. Storage account details such as the storage account name and key can be found in the Azure portal: go to Storage Account > Security + networking > Access keys. The file share name can be found under the corresponding file share > Settings > Properties."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {
+    "gather": {
+     "logged": 1670306217245
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# Storage account name\n",
+    "STORAGEACCT = \"\"\n",
+    "# Storage account key\n",
+    "STORAGEKEY = ''\n",
+    "# Share name\n",
+    "SHARE = ''\n",
+    "\n",
+    "MOUNT_PATH = \"/mnt/MyAzureFileShare\"\n",
+    "modelname = \"bert-large-uncased-whole-word-masking-finetuned-squad\"\n",
+    "\n",
+    "# Path to the Finetuned INT8 ONNX Model directory.\n",
+    "# e.g. modelpath = f\"{MOUNT_PATH}/models\"\n",
+    "modelpath = f\"{MOUNT_PATH}/\"\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Azure Storage Paths (Models & Inputs)\n",
+    "Here we declare the paths to the quantized model and the inputs file.\n",
+    "The input file is a csv file with 2 columns:\n",
+    "- Context\n",
+    "- Question\n",
+    "\n",
+    "This file will be read, inference will be performed, and the corresponding outputs will be saved as `output.csv`."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Custom Single Inputs\n",
+    "\n",
+    "You also have the option to pass sample inputs for testing. \n",
+    "In the cell below, the variables `context` and `question` are used to facilitate this. \n",
+    "These variables are passed as arguments to the inference script. If they are empty strings,\n",
+    "then the default behaviour is to read the `input.csv` file; otherwise they will be processed for question answering.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {
+    "gather": {
+     "logged": 1670306217595
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# -------------------------------------------------------------------------- #\n",
+    "# Multiple User Input - using csv file\n",
+    "# Path to Input csv file. In the input csv, the first field should be the context and the second field should be the question.\n",
+    "# e.g. inputpath = f\"{MOUNT_PATH}/input.csv\"\n",
+    "inputpath = f\"{MOUNT_PATH}/\"\n",
+    "# Path to Output csv file.\n",
+    "# e.g. outputpath = f\"{MOUNT_PATH}/output.csv\"\n",
+    "outputpath = f\"{MOUNT_PATH}/\"\n",
+    "\n",
+    "# -------------------------------------------------------------------------- #\n",
+    "# Single User Input - using context and question parameters\n",
+    "# If you are providing an input csv, please pass empty strings to context and question (e.g. context = '\"\"', question = '\"\"')\n",
+    "context = \"\"\" \"In its early years, the new convention center failed to meet attendance and revenue expectations.[12] By 2002, many Silicon Valley businesses were choosing the much larger Moscone Center in San Francisco over the San Jose Convention Center due to the latter's limited space. A ballot measure to finance an expansion via a hotel tax failed to reach the required two-thirds majority to pass. In June 2005, Team San Jose built the South Hall, a $6.77 million, blue and white tent, adding 80,000 square feet (7,400 m2) of exhibit space\" \"\"\"\n",
+    "question = \"\"\" \"how many votes did the ballot measure need?\" \"\"\"\n",
+    "# -------------------------------------------------------------------------- #\n",
+    "\n",
+    "provider = \"OpenVINOExecutionProvider\"\n",
+    "\n",
+    "command = \"\"\"\n",
+    "mkdir -p {mount_path}\n",
+    "mount -t cifs //{storageacct}.file.core.windows.net/{share} /mnt/MyAzureFileShare -o vers=3.0,username={storageacct},password={storagekey},dir_mode=0777,file_mode=0777,serverino\n",
+    "\n",
+    "ls -al {modelpath}\n",
+    "echo Checking Model Path...\n",
+    "ls -al {inputpath}\n",
+    "\n",
+    "python bert_inference_optimum_ort_ovep.py --modelname {modelname} \\\n",
+    "    --modelpath {modelpath} \\\n",
+    "    --provider {provider} \\\n",
+    "    --inputpath {inputpath} \\\n",
+    "    --outputpath {outputpath} \\\n",
+    "    --context {context} \\\n",
+    "    --question {question}\n",
+    "\"\"\".format(modelpath = modelpath,\n",
+    "           modelname = modelname,\n",
+    "           provider = provider,\n",
+    "           mount_path = MOUNT_PATH,\n",
+    "           inputpath = inputpath,\n",
+    "           outputpath = outputpath,\n",
+    "           context=context,\n",
+    "           question=question,\n",
+    "           storageacct = STORAGEACCT,\n",
+    "           storagekey = STORAGEKEY,\n",
+    "           share = SHARE)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Create job config\n",
+    "\n",
+    "The job config allows us to define how we want to execute our job. We need to pass the following information to the `ScriptRunConfig` object to initialize the job config instance.\n",
+    "- `source_directory` $\\rightarrow$ All the contents of this source directory are copied to the compute target instance. \n",
+    "- `command` $\\rightarrow$ The set of commands we wish to execute to perform the task.\n",
+    "- `env` $\\rightarrow$ Our target environment for executing the `script`\n",
+    "- `compute_target` $\\rightarrow$ Our target compute preference (i.e. cluster or instance) to run the `script`\n",
+    "\n",
+    "### Running the Script"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670310075525
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "src = ScriptRunConfig(source_directory=script_dir,\n",
+    "                      command=command,\n",
+    "                      environment=env,\n",
+    "                      compute_target=cluster_name)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Submit job\n",
+    "After submitting the job, we can see logs from the Outputs + logs tab of the Web View link generated."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "collapsed": false,
+    "gather": {
+     "logged": 1670306608804
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "exp = Experiment(ws, experiment_name)\n",
+    "run = exp.submit(src, tags={\"tag\": \"OVEP\"})\n",
+    "run.wait_for_completion(show_output=True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "### Accessing the user output logs"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {
+    "gather": {
+     "logged": 1670306610501
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# Download the most recent log file from the run and read it in.\n",
+    "run.download_file(name=run.get_file_names()[-1], output_file_path='logs/'+run.get_file_names()[-1])\n",
+    "with open('logs/'+run.get_file_names()[-1], 'r') as f:\n",
+    "    logs = f.readlines()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "### Printing the output logs"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "gather": {
+     "logged": 1670306610784
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# Skip ahead to the first line that starts with 'Inference' and print from there.\n",
+    "for idx, log in enumerate(logs):\n",
+    "    if log.startswith('Inference'):\n",
+    "        break\n",
+    "\n",
+    "print(*logs[idx:], sep='\\n')"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernel_info": {
+   "name": "python38-azureml"
+  },
+  "kernelspec": {
+   "display_name": "Python 3.8 - AzureML",
+   "language": "python",
+   "name": "python38-azureml"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.8.5"
+  },
+  "microsoft": {
+   "host": {
+    "AzureML": {
+     "notebookHasBeenCompleted": true
+    }
+   }
+  },
+  "nteract": {
+   "version": "nteract-front-end@1.0.0"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/python/OpenVINO_EP/azureml/quantization-aware-training-bert-squad-qa.ipynb b/python/OpenVINO_EP/azureml/quantization-aware-training-bert-squad-qa.ipynb
new file mode 100644
index 000000000..ac802d3ca
--- /dev/null
+++ b/python/OpenVINO_EP/azureml/quantization-aware-training-bert-squad-qa.ipynb
@@ -0,0 +1,600 @@
+{
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Copyright (C) 2022-2023, Intel Corporation\n",
+    "\n",
+    "SPDX-License-Identifier: MIT\n",
+    "\n",
+    "# Quantization Aware Training using AzureML\n",
+    "\n",
+    "In this sample we demonstrate quantization aware training using the Neural Networks Compression Framework through Optimum Intel. \n",
+    "Here we're using a BERT model from the HuggingFace hub (transformers library) for a Question-Answering use case.\n",
+    "\n",
+    "\n",
+    "This notebook was made with reference to the examples at the links below:\n",
+    "- https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-with-custom-image\n",
+    "- https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources\n"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Getting Required Files\n",
+    "\n",
+    "We need to get the required files from the repository [here](https://github.com/intel/nlp-training-and-inference-openvino/tree/v1.1)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "gather": {
+     "logged": 1670236174499
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "SCRIPT_DIR = 'bert_qat/'\n",
+    "if not os.path.exists(SCRIPT_DIR):\n",
+    "    os.mkdir(SCRIPT_DIR)\n",
+    "if not os.path.exists(SCRIPT_DIR+\"trainer_qa.py\"):\n",
+    "    !cd bert_qat/ && wget https://raw.githubusercontent.com/intel/nlp-training-and-inference-openvino/v1.1/question-answering-bert-qat/quantization_aware_training/training_scripts/trainer_qa.py\n",
+    "\n",
+    "if not os.path.exists(SCRIPT_DIR+\"run_qa.py\"):\n",
+    "    !cd bert_qat/ && wget https://raw.githubusercontent.com/intel/nlp-training-and-inference-openvino/v1.1/question-answering-bert-qat/quantization_aware_training/training_scripts/run_qa.py\n",
+    "\n",
+    "if not os.path.exists(SCRIPT_DIR+\"utils_qa.py\"):\n",
+    "    !cd bert_qat/ && wget https://raw.githubusercontent.com/intel/nlp-training-and-inference-openvino/v1.1/question-answering-bert-qat/quantization_aware_training/training_scripts/utils_qa.py\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Import necessary Libraries\n",
+    "\n",
+    "First we need to import the necessary libraries to perform the desired task."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {
+    "gather": {
+     "logged": 1670236174855
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "from pathlib import Path\n",
+    "from azureml.core import Workspace\n",
+    "from azureml.core import ScriptRunConfig, Experiment, Environment"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Initialize Workspace\n",
+    "\n",
+    "The Azure Machine Learning workspace is the top-level resource for the service. It gives you a centralized place to work with all the artifacts that you create. In the Python SDK, you can access the workspace artifacts by creating a Workspace object."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {
+    "gather": {
+     "logged": 1670236177007
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from azureml.core import Workspace\n",
+    "from azureml.core import Datastore\n",
+    "ws = Workspace.from_config()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Create or attach a compute target\n",
+    "\n",
+    "A compute target is a machine where we intend to run our code. It can be a compute instance or a compute cluster. \n",
+    "Here we are using a compute cluster. If the cluster already exists, it is attached to our workspace; otherwise a cluster is created according to the specification below and then attached."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "gather": {
+     "logged": 1670236178132
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from azureml.core.compute import ComputeTarget, AmlCompute\n",
+    "from azureml.core.compute_target import ComputeTargetException\n",
+    "\n",
+    "# Choose a name for your cluster.\n",
+    "cluster_name = \"cpu-cluster4\"\n",
+    "\n",
+    "try:\n",
+    "    compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
+    "    print('Found existing compute target.')\n",
+    "except ComputeTargetException:\n",
+    "    print('Creating a new compute target...')\n",
+    "    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D16DS_V4',\n",
+    "                                                           max_nodes=4)\n",
+    "    # Create the cluster.\n",
+    "    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
+    "\n",
+    "    compute_target.wait_for_completion(show_output=True)\n",
+    "\n",
+    "# Use get_status() to get a detailed status for the current AmlCompute.\n",
+    "print(compute_target.get_status().serialize())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Setting the correct input paths\n",
+    "\n",
+    "All scripts & files present in the `script_dir` folder are uploaded to the compute target, data stores are mounted or copied, and the script is executed. \n",
+    "Outputs from stdout and the `./logs` folder are streamed to the run history and can be used to monitor the run. For further details please refer [here](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-train-pytorch#what-happens-during-run-execution)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {
+    "gather": {
+     "logged": 1670236178538
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "script_dir = os.path.abspath(SCRIPT_DIR)+\"/\"\n",
+    "script_name = \"run_qa.py\""
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "# Environment Definition\n",
+    "\n",
+    "The environment definition allows us to define a custom Docker environment with all the required dependencies, making sure our script runs as expected.\n",
+    "\n",
+    "## Defining the Dockerfile\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {
+    "gather": {
+     "logged": 1670236178934
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "# Copyright (C) 2022-2023 Intel Corporation\n",
+    "# SPDX-License-Identifier: MIT\n",
+    "dockerfile = r\"\"\"\n",
+    "FROM openvino/ubuntu20_runtime:2022.2.0\n",
+    "\n",
+    "USER root\n",
+    "\n",
+    "RUN dpkg --get-selections | grep -v deinstall | awk '{print $1}' > base_packages.txt\n",
+    "\n",
+    "RUN apt-get update && apt-get install -y wget \\\n",
+    "    python3.8 \\\n",
+    "    python3.8-venv; \\\n",
+    "    rm -rf /var/lib/apt/lists/*;\n",
+    "\n",
+    "RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.8 70; \\\n",
+    "    update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 70;\n",
+    "\n",
+    "RUN apt-get update && apt-get install -y git\n",
+    "\n",
+    "RUN python -m pip install --upgrade pip\n",
+    "\n",
+    "RUN pip install --no-cache-dir \"git+https://github.com/huggingface/optimum-intel.git@v1.5.2#egg=optimum-intel[openvino,nncf]\"\n",
+    "RUN pip install --no-cache-dir \"git+https://github.com/AlexKoff88/nncf_pytorch.git@ak/qdq_per_channel#egg=nncf\"\n",
+    "RUN pip install --no-cache-dir \"protobuf==3.19.5\"\n",
+    "RUN pip install --no-cache-dir \"seqeval==1.2.2\"\n",
+    "RUN pip install --no-cache-dir \"accelerate==0.15.0\"\n",
+    "RUN pip install --no-cache-dir \"evaluate==0.3.0\"\n",
+    "RUN pip install --no-cache-dir \"datasets==2.7.1\"\n",
+    "RUN pip install --no-cache-dir \"torch==1.12.0\"\n",
+    "RUN pip install --no-cache-dir \"openvino-dev==2022.2.0\"\n",
+    "WORKDIR /home/training\n",
+    "\n",
+    "RUN chown openvino -R /home/training\n",
+    "\n",
+    "USER openvino\n",
+    "\n",
+    "RUN ls -l\n",
+    "\"\"\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Azure ML settings\n",
+    "Assigning a name to our environment for easier tracking and monitoring."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {
+    "gather": {
+     "logged": 1670236179188
+    },
+    "jupyter": {
+     "outputs_hidden": false,
+     "source_hidden": false
+    },
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "outputs": [],
+   "source": [
+    "environment_name = \"qat-example\"\n",
+    "experiment_name = \"qat-test\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "## Creating Environment"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {
+    "gather": {
+     "logged": 1670236179482
+    },
+    "jupyter": {
+     "outputs_hidden": false,
"source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [], + "source": [ + "env = Environment(environment_name)\n", + "env.docker.base_image = None\n", + "env.docker.base_dockerfile = dockerfile\n", + "env.python.user_managed_dependencies = True" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## Arguments for Quantization Aware Training\n", + "\n", + "To run the Quantization Aware Training we need to pass few arguments to the script. The arguments can be easily passed by combining them together in a list in the below order: \n", + "`arguments = ['--first_arg', first_val, '--second_arg', second_val, ...]`\n", + "\n", + "For further details please refer [here](https://azure.github.io/azureml-cheatsheets/docs/cheatsheets/python/v1/cheatsheet/)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "gather": { + "logged": 1670236179786 + }, + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [], + "source": [ + "model_name = \"bert-large-uncased-whole-word-masking-finetuned-squad\"\n", + "arguments = [\"--model_name_or_path\", model_name,\n", + " \"--dataset_name\", \"squad\",\n", + " \"--do_train\", True,\n", + " \"--do_eval\", True,\n", + " \"--max_seq_length\", 256,\n", + " \"--per_device_train_batch_size\", 3,\n", + " \"--max_train_samples\",10,\n", + " \"--max_eval_samples\",10,\n", + " \"--learning_rate\", 3e-5,\n", + " \"--num_train_epochs\", 2,\n", + " \"--output_dir\", \"./outputs/bert_finetuned_model\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## Create job config\n", + "\n", + "Job config allows us to define how we want to execute our training procedure. We need to pass the following informations to `ScriptRunConfig` object to initialize the job config instance.\n", + "- `source_directory` $\\rightarrow$ All the contents of this source directory are copied to the compute target instance. \n", + "- `script` $\\rightarrow$ Our desired python script which we wish to execute \n", + "- `arguments` $\\rightarrow$ Necessary arguments for the `script` \n", + "- `env` $\\rightarrow$ Our target environment to execute the `script`\n", + "- `compute_target` $\\rightarrow$ Our target compute preference (i.e. cluster or instance) to run the `script`" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "gather": { + "logged": 1670236180060 + }, + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [], + "source": [ + "src = ScriptRunConfig(source_directory=script_dir,\n", + " script=script_name,\n", + " arguments=arguments,\n", + " environment=env,\n", + " compute_target=cluster_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "# Submit job\n", + "After submitting the job, we can see logs from the Outputs + logs tab of the Web View link generated. 
Once job is completed, we can see output model from Web View link > Outputs + logs > outputs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1669972749717 + }, + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [], + "source": [ + "run = Experiment(ws, experiment_name).submit(src)\n", + "run.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "## Download the model outputs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1669888436945 + }, + "jupyter": { + "outputs_hidden": false, + "source_hidden": false + }, + "nteract": { + "transient": { + "deleting": false + } + } + }, + "outputs": [], + "source": [ + "run.get_file_names()\n", + "print('filenames',run.get_file_names())\n", + "run.download_file(name='./outputs/bert_finetuned_model/model.onnx',\n", + " output_file_path='./models/model.onnx')\n", + "run.download_file(name='./outputs/bert_finetuned_model/config.json',\n", + " output_file_path='./models/config.json')" + ] + } + ], + "metadata": { + "kernel_info": { + "name": "python38-azureml" + }, + "kernelspec": { + "display_name": "Python 3.8 - AzureML", + "language": "python", + "name": "python38-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.5" + }, + "microsoft": { + "host": { + "AzureML": { + "notebookHasBeenCompleted": true + } + } + }, + "nteract": { + "version": "nteract-front-end@1.0.0" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +}