Commit 3b18ac8

Init a sparse model auto tracing workflow. (#394)
* Init a sparse model auto tracing workflow.
* Change the minimum-approvals of the sparse model uploader to 2. Add some test cases. Remove some redundant lines.
* Fix some test cases.
* Remove the temporary test Jupyter notebook.
* Change the variable name of the inner model, and optimize the license verification.
* Address some comments, and run nox format.
* Fix a bug in NeuralSparseModel's init, and remove a redundant save_pretrained.
* [Fix] Deleted redundant code that caused a failing test case.
* [Style] Run nox -s format to make formatting consistent.
* [Fix] Simplify the SparseEncodingModel and fix a bug in embeddings for multiple texts.
* [Fix] Make register_and_deploy_sparse_encoding_model return a proper list instead of a single map.
* [Fix] register_and_deploy_sparse_encoding_model now generates the correct list of embeddings for the input texts.
* [Fix] Fix the sparse encoding model's test_check_required_fields test case.
* [Fix] Renamed an improper variable name.
* [Refactor] Add some comments and extract some constants to a new file.
* [Refactor] Simplify and reuse some code from model auto tracing.
* [Refactor] Add function comments and merge the sparse and dense model trace workflows.
* [Refactor] Merge the sparse and dense models' CI branches.
* [Refactor] Change to a more common API, and add a line of comments.

Signed-off-by: conggguan <[email protected]>
1 parent ec7e023 commit 3b18ac8

File tree

16 files changed: +1503 −253 lines changed

.ci/run-repository.sh

Lines changed: 17 additions & 3 deletions
@@ -65,7 +65,7 @@ elif [[ "$TASK_TYPE" == "doc" ]]; then
     docker cp opensearch-py-ml-doc-runner:/code/opensearch-py-ml/docs/build/ ./docs/
     docker rm opensearch-py-ml-doc-runner
-elif [[ "$TASK_TYPE" == "trace" ]]; then
+elif [[ "$TASK_TYPE" == "SentenceTransformerTrace" || "$TASK_TYPE" == "SparseTrace" ]]; then
     # Set up OpenSearch cluster & Run model autotracing (Invoked by model_uploader.yml workflow)
     echo -e "\033[34;1mINFO:\033[0m MODEL_ID: ${MODEL_ID}\033[0m"
     echo -e "\033[34;1mINFO:\033[0m MODEL_VERSION: ${MODEL_VERSION}\033[0m"
@@ -74,6 +74,17 @@ elif [[ "$TASK_TYPE" == "trace" ]]; then
     echo -e "\033[34;1mINFO:\033[0m POOLING_MODE: ${POOLING_MODE:-N/A}\033[0m"
     echo -e "\033[34;1mINFO:\033[0m MODEL_DESCRIPTION: ${MODEL_DESCRIPTION:-N/A}\033[0m"

+    if [[ "$TASK_TYPE" == "SentenceTransformerTrace" ]]; then
+        NOX_TRACE_TYPE="trace"
+        EXTRA_ARGS="-ed ${EMBEDDING_DIMENSION} -pm ${POOLING_MODE}"
+    elif [[ "$TASK_TYPE" == "SparseTrace" ]]; then
+        NOX_TRACE_TYPE="sparsetrace"
+        EXTRA_ARGS=""
+    else
+        echo "Unknown TASK_TYPE: $TASK_TYPE"
+        exit 1
+    fi
+
     docker run \
       --network=${network_name} \
       --env "STACK_VERSION=${STACK_VERSION}" \
@@ -84,9 +95,12 @@ elif [[ "$TASK_TYPE" == "trace" ]]; then
       --env "TEST_TYPE=server" \
       --name opensearch-py-ml-trace-runner \
       opensearch-project/opensearch-py-ml \
-      nox -s "trace-${PYTHON_VERSION}" -- ${MODEL_ID} ${MODEL_VERSION} ${TRACING_FORMAT} -ed ${EMBEDDING_DIMENSION} -pm ${POOLING_MODE} -md ${MODEL_DESCRIPTION:+"$MODEL_DESCRIPTION"}
-
+      nox -s "${NOX_TRACE_TYPE}-${PYTHON_VERSION}" -- ${MODEL_ID} ${MODEL_VERSION} ${TRACING_FORMAT} ${EXTRA_ARGS} -md ${MODEL_DESCRIPTION:+"$MODEL_DESCRIPTION"}
+
+    # To upload a model, we need the model artifact, description, license files into local path
+    # trace_output should include description and license file.
     docker cp opensearch-py-ml-trace-runner:/code/opensearch-py-ml/upload/ ./upload/
     docker cp opensearch-py-ml-trace-runner:/code/opensearch-py-ml/trace_output/ ./trace_output/
+    # Delete the docker image
     docker rm opensearch-py-ml-trace-runner
 fi
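The branch added above maps each TASK_TYPE to a nox session name plus the extra CLI flags that only dense (SentenceTransformer) tracing needs. A minimal Python sketch of that dispatch logic (the function name `resolve_trace_invocation` is hypothetical, for illustration only):

```python
def resolve_trace_invocation(
    task_type: str, embedding_dimension: str = "", pooling_mode: str = ""
) -> tuple:
    """Mirror the TASK_TYPE dispatch in run-repository.sh."""
    if task_type == "SentenceTransformerTrace":
        # Dense models pass embedding dimension and pooling mode through.
        return "trace", f"-ed {embedding_dimension} -pm {pooling_mode}"
    elif task_type == "SparseTrace":
        # Sparse models need no extra arguments.
        return "sparsetrace", ""
    raise ValueError(f"Unknown TASK_TYPE: {task_type}")
```

Either branch yields the session name interpolated into `nox -s "${NOX_TRACE_TYPE}-${PYTHON_VERSION}"` inside the container.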

.github/CODEOWNERS

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-* @dhrubo-os @greaa-aws @ylwu-amzn @b4sjoo @jngz-es @rbhavna
+* @dhrubo-os @greaa-aws @ylwu-amzn @b4sjoo @jngz-es @rbhavna

.github/workflows/model_uploader.yml

Lines changed: 19 additions & 10 deletions
@@ -17,13 +17,21 @@ on:
       required: true
       type: string
     tracing_format:
-      description: "Model format for auto-tracing (torch_script/onnx)"
+      description: "Model format for auto-tracing (torch_script/onnx), now the sparse model only support torchscript model."
       required: true
       type: choice
       options:
       - "BOTH"
       - "TORCH_SCRIPT"
       - "ONNX"
+    model_type:
+      description: "Model type for auto-tracing (SentenceTransformer/Sparse)"
+      required: true
+      type: choice
+      options:
+      - "SentenceTransformer"
+      - "Sparse"
+      default: "SentenceTransformer"
     embedding_dimension:
       description: "(Optional) Embedding Dimension (Specify here if it does not exist in original config.json file, or you want to overwrite it.)"
       required: false
@@ -66,14 +74,14 @@ jobs:
       run: |
         model_id=${{ github.event.inputs.model_id }}
         echo "model_folder=ml-models/${{github.event.inputs.model_source}}/${model_id}" >> $GITHUB_OUTPUT
-        echo "sentence_transformer_folder=ml-models/${{github.event.inputs.model_source}}/${model_id%%/*}/" >> $GITHUB_OUTPUT
+        echo "model_prefix_folder=ml-models/${{github.event.inputs.model_source}}/${model_id%%/*}/" >> $GITHUB_OUTPUT
     - name: Initiate workflow_info
       id: init_workflow_info
       run: |
         embedding_dimension=${{ github.event.inputs.embedding_dimension }}
         pooling_mode=${{ github.event.inputs.pooling_mode }}
         model_description="${{ github.event.inputs.model_description }}"
-
+        model_type=${{ github.event.inputs.model_type }}
         workflow_info="
         ============= Workflow Details ==============
         - Workflow Name: ${{ github.workflow }}
@@ -84,6 +92,7 @@ jobs:
         ========= Workflow Input Information =========
         - Model ID: ${{ github.event.inputs.model_id }}
         - Model Version: ${{ github.event.inputs.model_version }}
+        - Model Type: ${{ github.event.inputs.model_type }}
         - Tracing Format: ${{ github.event.inputs.tracing_format }}
         - Embedding Dimension: ${embedding_dimension:-N/A}
         - Pooling Mode: ${pooling_mode:-N/A}
@@ -103,7 +112,7 @@ jobs:
         echo "unverified=- [ ] :warning: The license cannot be verified. Please confirm by yourself that the model is licensed under Apache 2.0 :warning:" >> $GITHUB_OUTPUT
     outputs:
       model_folder: ${{ steps.init_folders.outputs.model_folder }}
-      sentence_transformer_folder: ${{ steps.init_folders.outputs.sentence_transformer_folder }}
+      model_prefix_folder: ${{ steps.init_folders.outputs.model_prefix_folder }}
       workflow_info: ${{ steps.init_workflow_info.outputs.workflow_info }}
       verified_license_line: ${{ steps.init_license_line.outputs.verified }}
       unverified_license_line: ${{ steps.init_license_line.outputs.unverified }}
@@ -133,7 +142,7 @@ jobs:
     if: github.event.inputs.allow_overwrite == 'NO' && (github.event.inputs.tracing_format == 'TORCH_SCRIPT' || github.event.inputs.tracing_format == 'BOTH')
     run: |
       TORCH_FILE_PATH=$(python utils/model_uploader/save_model_file_path_to_env.py \
-        ${{ needs.init-workflow-var.outputs.sentence_transformer_folder }} ${{ github.event.inputs.model_id }} \
+        ${{ needs.init-workflow-var.outputs.model_prefix_folder }} ${{ github.event.inputs.model_id }} \
         ${{ github.event.inputs.model_version }} TORCH_SCRIPT)
       aws s3api head-object --bucket ${{ secrets.MODEL_BUCKET }} --key $TORCH_FILE_PATH > /dev/null 2>&1 || TORCH_MODEL_NOT_EXIST=true
       if [[ -z $TORCH_MODEL_NOT_EXIST ]]
@@ -145,7 +154,7 @@ jobs:
     if: github.event.inputs.allow_overwrite == 'NO' && (github.event.inputs.tracing_format == 'ONNX' || github.event.inputs.tracing_format == 'BOTH')
     run: |
       ONNX_FILE_PATH=$(python utils/model_uploader/save_model_file_path_to_env.py \
-        ${{ needs.init-workflow-var.outputs.sentence_transformer_folder }} ${{ github.event.inputs.model_id }} \
+        ${{ needs.init-workflow-var.outputs.model_prefix_folder }} ${{ github.event.inputs.model_id }} \
         ${{ github.event.inputs.model_version }} ONNX)
       aws s3api head-object --bucket ${{ secrets.MODEL_BUCKET }} --key $ONNX_FILE_PATH > /dev/null 2>&1 || ONNX_MODEL_NOT_EXIST=true
       if [[ -z $ONNX_MODEL_NOT_EXIST ]]
@@ -168,7 +177,7 @@ jobs:
       cluster: ["opensearch"]
       secured: ["true"]
       entry:
-        - { opensearch_version: 2.7.0 }
+        - { opensearch_version: 2.11.0 }
     steps:
     - name: Checkout
       uses: actions/checkout@v3
@@ -181,7 +190,7 @@ jobs:
         echo "POOLING_MODE=${{ github.event.inputs.pooling_mode }}" >> $GITHUB_ENV
         echo "MODEL_DESCRIPTION=${{ github.event.inputs.model_description }}" >> $GITHUB_ENV
     - name: Autotracing ${{ matrix.cluster }} secured=${{ matrix.secured }} version=${{matrix.entry.opensearch_version}}
-      run: "./.ci/run-tests ${{ matrix.cluster }} ${{ matrix.secured }} ${{ matrix.entry.opensearch_version }} trace"
+      run: "./.ci/run-tests ${{ matrix.cluster }} ${{ matrix.secured }} ${{ matrix.entry.opensearch_version }} ${{github.event.inputs.model_type}}Trace"
     - name: Limit Model Size to 2GB
       run: |
         upload_size_in_binary_bytes=$(ls -lR ./upload/ | awk '{ SUM += $5} END {print SUM}')
@@ -226,7 +235,7 @@ jobs:
     - name: Dryrun model uploading
       id: dryrun_model_uploading
       run: |
-        dryrun_output=$(aws s3 sync ./upload/ s3://${{ secrets.MODEL_BUCKET }}/${{ needs.init-workflow-var.outputs.sentence_transformer_folder }} --dryrun \
+        dryrun_output=$(aws s3 sync ./upload/ s3://${{ secrets.MODEL_BUCKET }}/${{ needs.init-workflow-var.outputs.model_prefix_folder }} --dryrun \
          | sed 's|s3://${{ secrets.MODEL_BUCKET }}/|s3://(MODEL_BUCKET)/|'
         )
         echo "dryrun_output<<EOF" >> $GITHUB_OUTPUT
@@ -301,7 +310,7 @@ jobs:
     - name: Copy Files to the Bucket
       id: copying_to_bucket
       run: |
-        aws s3 sync ./upload/ s3://${{ secrets.MODEL_BUCKET }}/${{ needs.init-workflow-var.outputs.sentence_transformer_folder }}
+        aws s3 sync ./upload/ s3://${{ secrets.MODEL_BUCKET }}/${{ needs.init-workflow-var.outputs.model_prefix_folder }}
         echo "upload_time=$(TZ='America/Los_Angeles' date "+%Y-%m-%d %T")" >> $GITHUB_OUTPUT
     outputs:
       upload_time: ${{ steps.copying_to_bucket.outputs.upload_time }}

CHANGELOG.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ Inspired from [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 - Add support for model profiles by @rawwar in ([#358](https://github.com/opensearch-project/opensearch-py-ml/pull/358))
 - Support for security default admin credential changes in 2.12.0 in ([#365](https://github.com/opensearch-project/opensearch-py-ml/pull/365))
 - adding cross encoder models in the pre-trained traced list ([#378](https://github.com/opensearch-project/opensearch-py-ml/pull/378))
-
+- Add workflows and scripts for sparse encoding model tracing and uploading process by @conggguan in ([#394](https://github.com/opensearch-project/opensearch-py-ml/pull/394))

 ### Changed
 - Modify ml-models.JenkinsFile so that it takes model format into account and can be triggered with generic webhook by @thanawan-atc in ([#211](https://github.com/opensearch-project/opensearch-py-ml/pull/211))

noxfile.py

Lines changed: 17 additions & 0 deletions
@@ -166,3 +166,20 @@ def trace(session):
         "utils/model_uploader/model_autotracing.py",
         *(session.posargs),
     )
+
+
[email protected](python=["3.9"])
+def sparsetrace(session):
+    session.install(
+        "-r",
+        "requirements-dev.txt",
+        "--timeout",
+        "1500",
+    )
+    session.install(".")
+
+    session.run(
+        "python",
+        "utils/model_uploader/sparse_model_autotracing.py",
+        *(session.posargs),
+    )
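The new `sparsetrace` session simply forwards its positional arguments to the tracing script. A hypothetical sketch of how an invocation like `nox -s sparsetrace-3.9 -- <model_id> <version> TORCH_SCRIPT` is translated into the command `session.run` executes (the model ID below is only an example value):

```python
# Example posargs as they would arrive from the nox command line.
posargs = [
    "amazon/neural-sparse/opensearch-neural-sparse-encoding-v1",  # example model ID
    "1.0.1",          # model version
    "TORCH_SCRIPT",   # tracing format
]

# session.run("python", "utils/model_uploader/sparse_model_autotracing.py", *posargs)
# is equivalent to executing this argument vector:
command = ["python", "utils/model_uploader/sparse_model_autotracing.py", *posargs]
```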

opensearch_py_ml/ml_commons/ml_common_utils.py

Lines changed: 7 additions & 1 deletion
@@ -11,7 +11,7 @@
 MODEL_CHUNK_MAX_SIZE = 10_000_000
 MODEL_MAX_SIZE = 4_000_000_000
 BUF_SIZE = 65536  # lets read stuff in 64kb chunks!
-TIMEOUT = 120  # timeout for synchronous method calls in seconds
+TIMEOUT = 240  # timeout for synchronous method calls in seconds
 META_API_ENDPOINT = "models/meta"
 MODEL_NAME_FIELD = "name"
 MODEL_VERSION_FIELD = "version"
@@ -24,6 +24,12 @@
 FRAMEWORK_TYPE = "framework_type"
 MODEL_CONTENT_HASH_VALUE = "model_content_hash_value"
 MODEL_GROUP_ID = "model_group_id"
+MODEL_FUNCTION_NAME = "function_name"
+MODEL_TASK_TYPE = "model_task_type"
+# URL of the license file for the OpenSearch project
+LICENSE_URL = "https://github.com/opensearch-project/opensearch-py-ml/raw/main/LICENSE"
+# Name of the function used for sparse encoding
+SPARSE_ENCODING_FUNCTION_NAME = "SPARSE_ENCODING"


 def _generate_model_content_hash_value(model_file_path: str) -> str:

opensearch_py_ml/ml_commons/ml_commons_client.py

Lines changed: 18 additions & 0 deletions
@@ -498,6 +498,24 @@ def get_model_info(self, model_id: str) -> object:
             url=API_URL,
         )

+    def generate_model_inference(self, model_id: str, request_body: dict) -> object:
+        """
+        Generates inference result for the given input using the specified request body.
+
+        :param model_id: Unique ID of the model.
+        :type model_id: string
+        :param request_body: Request body to send to the API.
+        :type request_body: dict
+        :return: Returns a JSON object `inference_results` containing the results for the given input.
+        :rtype: object
+        """
+        API_URL = f"{ML_BASE_URI}/models/{model_id}/_predict/"
+        return self._client.transport.perform_request(
+            method="POST",
+            url=API_URL,
+            body=request_body,
+        )
+
     def generate_embedding(self, model_id: str, sentences: List[str]) -> object:
         """
         This method return embedding for given sentences (using ml commons _predict api)
opensearch_py_ml/ml_commons/model_uploader.py

Lines changed: 8 additions & 1 deletion
@@ -22,9 +22,11 @@
     MODEL_CONTENT_HASH_VALUE,
     MODEL_CONTENT_SIZE_IN_BYTES_FIELD,
     MODEL_FORMAT_FIELD,
+    MODEL_FUNCTION_NAME,
     MODEL_GROUP_ID,
     MODEL_MAX_SIZE,
     MODEL_NAME_FIELD,
+    MODEL_TASK_TYPE,
     MODEL_TYPE,
     MODEL_VERSION_FIELD,
     TOTAL_CHUNKS_FIELD,
@@ -167,6 +169,7 @@ def _check_mandatory_field(self, model_meta: dict) -> bool:
         """

         if model_meta:
+
             if not model_meta.get(MODEL_NAME_FIELD):
                 raise ValueError(f"{MODEL_NAME_FIELD} can not be empty")
             if not model_meta.get(MODEL_VERSION_FIELD):
@@ -178,7 +181,11 @@ def _check_mandatory_field(self, model_meta: dict) -> bool:
             if not model_meta.get(TOTAL_CHUNKS_FIELD):
                 raise ValueError(f"{TOTAL_CHUNKS_FIELD} can not be empty")
             if not model_meta.get(MODEL_CONFIG_FIELD):
-                raise ValueError(f"{MODEL_CONFIG_FIELD} can not be empty")
+                if (
+                    model_meta.get(MODEL_FUNCTION_NAME) != "SPARSE_ENCODING"
+                    and model_meta.get(MODEL_TASK_TYPE) != "SPARSE_ENCODING"
+                ):
+                    raise ValueError(f"{MODEL_CONFIG_FIELD} can not be empty")
             else:
                 if not isinstance(model_meta.get(MODEL_CONFIG_FIELD), dict):
                     raise TypeError(
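The change above relaxes `_check_mandatory_field` so that a missing `model_config` is only an error for non-sparse models. A minimal sketch of that rule, with the constants inlined as the string values they map to in ml_common_utils (`config_required` is a hypothetical name, for illustration):

```python
def config_required(model_meta: dict) -> bool:
    """Return True if this model's metadata must include "model_config".

    Sparse encoding models (identified by function_name or model_task_type
    being "SPARSE_ENCODING") may omit it; dense models may not.
    """
    return not (
        model_meta.get("function_name") == "SPARSE_ENCODING"
        or model_meta.get("model_task_type") == "SPARSE_ENCODING"
    )
```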

opensearch_py_ml/ml_models/__init__.py

Lines changed: 2 additions & 1 deletion
@@ -7,5 +7,6 @@

 from .metrics_correlation.mcorr import MCorr
 from .sentencetransformermodel import SentenceTransformerModel
+from .sparse_encoding_model import SparseEncodingModel

-__all__ = ["SentenceTransformerModel", "MCorr"]
+__all__ = ["SentenceTransformerModel", "MCorr", "SparseEncodingModel"]
Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
+# SPDX-License-Identifier: Apache-2.0
+# The OpenSearch Contributors require contributions made to
+# this file be licensed under the Apache-2.0 license or a
+# compatible open source license.
+# Any modifications Copyright OpenSearch Contributors. See
+# GitHub history for details.
+import json
+import os
+from abc import ABC, abstractmethod
+from zipfile import ZipFile
+
+import requests
+
+from opensearch_py_ml.ml_commons.ml_common_utils import (
+    LICENSE_URL,
+    SPARSE_ENCODING_FUNCTION_NAME,
+)
+
+
+class BaseUploadModel(ABC):
+    """
+    A base class for uploading models to OpenSearch pretrained model hub.
+    """
+
+    def __init__(
+        self, model_id: str, folder_path: str = None, overwrite: bool = False
+    ) -> None:
+        self.model_id = model_id
+        self.folder_path = folder_path
+        self.overwrite = overwrite
+
+    @abstractmethod
+    def save_as_pt(self, *args, **kwargs):
+        pass
+
+    @abstractmethod
+    def save_as_onnx(self, *args, **kwargs):
+        pass
+
+    @abstractmethod
+    def make_model_config_json(
+        self,
+        version_number: str,
+        model_format: str,
+        description: str,
+    ) -> str:
+        pass
+
+    def _fill_null_truncation_field(
+        self,
+        save_json_folder_path: str,
+        max_length: int,
+    ) -> None:
+        """
+        Fill truncation field in tokenizer.json when it is null
+
+        :param save_json_folder_path:
+            path to save model json file, e.g, "home/save_pre_trained_model_json/")
+        :type save_json_folder_path: string
+        :param max_length:
+            maximum sequence length for model
+        :type max_length: int
+        :return: no return value expected
+        :rtype: None
+        """
+        tokenizer_file_path = os.path.join(save_json_folder_path, "tokenizer.json")
+        with open(tokenizer_file_path) as user_file:
+            parsed_json = json.load(user_file)
+        if "truncation" not in parsed_json or parsed_json["truncation"] is None:
+            parsed_json["truncation"] = {
+                "direction": "Right",
+                "max_length": max_length,
+                "strategy": "LongestFirst",
+                "stride": 0,
+            }
+            with open(tokenizer_file_path, "w") as file:
+                json.dump(parsed_json, file, indent=2)
+
+    def _add_apache_license_to_model_zip_file(self, model_zip_file_path: str):
+        """
+        Add Apache-2.0 license file to the model zip file at model_zip_file_path
+
+        :param model_zip_file_path:
+            Path to the model zip file
+        :type model_zip_file_path: string
+        :return: no return value expected
+        :rtype: None
+        """
+        r = requests.get(LICENSE_URL)
+        assert r.status_code == 200, "Failed to add license file to the model zip file"
+
+        with ZipFile(str(model_zip_file_path), "a") as zipObj:
+            zipObj.writestr("LICENSE", r.content)
+
+
+class SparseModel(BaseUploadModel, ABC):
+    """
+    Class for autotracing the Sparse Encoding model.
+    """
+
+    def __init__(
+        self,
+        model_id: str,
+        folder_path: str = "./model_files/",
+        overwrite: bool = False,
+    ):
+        super().__init__(model_id, folder_path, overwrite)
+        self.model_id = model_id
+        self.folder_path = folder_path
+        self.overwrite = overwrite
+        self.function_name = SPARSE_ENCODING_FUNCTION_NAME
+
+    def pre_process(self):
+        pass
+
+    def post_process(self):
+        pass
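To show what `_fill_null_truncation_field` actually does to a tokenizer.json, here is a standalone re-run of the same logic (a sketch, not the shipped API; `fill_null_truncation` is a free-function stand-in for the method above):

```python
import json
import os
import tempfile

def fill_null_truncation(folder: str, max_length: int) -> dict:
    """Rewrite tokenizer.json in `folder` if its truncation field is null."""
    path = os.path.join(folder, "tokenizer.json")
    with open(path) as f:
        parsed = json.load(f)
    if parsed.get("truncation") is None:
        # Same default truncation config written by BaseUploadModel.
        parsed["truncation"] = {
            "direction": "Right",
            "max_length": max_length,
            "strategy": "LongestFirst",
            "stride": 0,
        }
        with open(path, "w") as f:
            json.dump(parsed, f, indent=2)
    return parsed

# Example: a tokenizer.json with "truncation": null gets filled in.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "tokenizer.json"), "w") as f:
        json.dump({"truncation": None}, f)
    result = fill_null_truncation(d, 512)
```

A tokenizer.json that already carries a non-null truncation config is left untouched.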
