
Commit 7d45fe5

enhancement/extensibility + readme/ml_streaming (#128)
* add ML model to init param of PyMiloServer
* add init option to run pymilo server CLI
* add testcase of init PyMiloServer with ML model
* remove trailing whitespaces
* start adding `ML Streaming` to the `README.md`
* update docstring
* `README.md` updated
* `README.md` updated
* `CHANGELOG.md` updated
* apply autopep8.sh + refactorings
* apply refactorings to scenario3
* `README.md` updated
* doc : README.md mlstreaming section updated
* doc : README.md mlstreaming section updated
* doc : README.md mlstreaming section minor bug fixed
* doc : README.md mlstreaming section minor bug fixed
* doc : PymiloServer docstring updated
* doc : README.md mlstreaming section minor bug fixed
* rollback to str since it is more convenient

---------

Co-authored-by: sepandhaghighi <[email protected]>
1 parent 8467776 commit 7d45fe5

File tree: 9 files changed (+158, -19 lines)

CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -35,6 +35,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
 - `ML Streaming` RESTful testcases
 - `streaming-requirements.txt`
 ### Changed
+- `README.md` updated
 - `ML Streaming` RESTful testcases
 - `attribute_call` function in `RESTServerCommunicator`
 - `AttributeCallPayload` class in `RESTServerCommunicator`

README.md

Lines changed: 54 additions & 2 deletions
@@ -78,6 +78,7 @@ PyMilo is an open source Python package that provides a simple, efficient, and s


 ## Usage
+### Import/Export
 Imagine you want to train a `LinearRegression` model representing this equation: $y = x_0 + 2x_1 + 3$. You will create data points (`X`, `y`) and train your model as follows.
 ```pycon
 >>> import numpy as np
@@ -149,15 +150,66 @@ Now let's load it back. You can do it easily by using PyMilo `Import` class.
 ```
 This loaded model is exactly the same as the original trained model.

+### ML streaming
+You can easily serve your ML model from a remote server using the `ML streaming` feature of PyMilo.
+
+⚠️ The `ML streaming` feature exists in versions `>=1.0`
+
+⚠️ In order to use the `ML streaming` feature, make sure you've installed the `streaming` mode of PyMilo
+
+#### Server
+Let's assume you are on the remote server and you want to import the exported JSON file and start serving your model!
+```pycon
+>>> from pymilo import Import
+>>> from pymilo.streaming import PymiloServer
+>>> my_model = Import("model.json").to_model()
+>>> communicator = PymiloServer(model=my_model, port=8000).communicator
+>>> communicator.run()
+```
+Now `PymiloServer` runs on port `8000` and exposes a REST API to `upload`, `download` and retrieve **attributes**, either **data attributes** like `model._coef` or **method attributes** like `model.predict(x_test)`.
+
+#### Client
+By using `PymiloClient` you can easily connect to the remote `PymiloServer` and execute any functionality that the given ML model has. Let's say you want to run the `predict` function on your remote ML model and get the result:
+```pycon
+>>> from pymilo.streaming import PymiloClient
+>>> pymilo_client = PymiloClient(mode=PymiloClient.Mode.LOCAL, server_url="SERVER_URL")
+>>> pymilo_client.toggle_mode(PymiloClient.Mode.DELEGATE)
+>>> result = pymilo_client.predict(x_test)
+```
+
+ℹ️ If you've deployed `PymiloServer` locally (on port `8000`, for instance), then `SERVER_URL` would be `http://127.0.0.1:8000`
+
+You can also download the remote ML model to your local machine and execute functions locally on your model.
+
+Calling the `download` function on `PymiloClient` syncs the local model wrapped by `PymiloClient` with the remote ML model; it doesn't save the model directly to a file.
+
+```pycon
+>>> pymilo_client.download()
+```
+If you want to save the ML model to a file on your local machine, you can use the `Export` class.
+```pycon
+>>> from pymilo import Export
+>>> Export(pymilo_client.model).save("model.json")
+```
+Now that you've synced the remote model with your local model, you can run functions.
+```pycon
+>>> pymilo_client.toggle_mode(mode=PymiloClient.Mode.LOCAL)
+>>> result = pymilo_client.predict(x_test)
+```
+`PymiloClient` wraps around the ML model, whether local or remote, and you can work with `PymiloClient` in exactly the same way as with the ML model itself: you can run the same functions with the same signatures.
+
+ℹ️ Through the `toggle_mode` function you can specify whether `PymiloClient` applies requests to the local ML model (`pymilo_client.toggle_mode(mode=Mode.LOCAL)`) or delegates them to the remote server (`pymilo_client.toggle_mode(mode=Mode.DELEGATE)`)
+
+
 ## Supported ML models
 | scikit-learn | PyTorch |
 | ---------------- | ---------------- |
 | Linear Models &#x2705; | - |
-| Neural networks &#x2705; | - |
+| Neural Networks &#x2705; | - |
 | Trees &#x2705; | - |
 | Clustering &#x2705; | - |
 | Naïve Bayes &#x2705; | - |
-| Support vector machines (SVMs) &#x2705; | - |
+| Support Vector Machines (SVMs) &#x2705; | - |
 | Nearest Neighbors &#x2705; | - |
 | Ensemble Models &#x2705; | - |
 | Pipeline Model &#x2705; | - |
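
The new README section walks through the server and client sides separately; the following consolidated client-side sketch ties those steps together (remote prediction, download, local export). It is only an illustration and assumes a `PymiloServer` already running at `http://127.0.0.1:8000` and an `x_test` array prepared beforehand, exactly as in the snippets above:

```pycon
>>> from pymilo import Export
>>> from pymilo.streaming import PymiloClient
>>> # connect to the running server and delegate calls to the remote model
>>> client = PymiloClient(mode=PymiloClient.Mode.LOCAL, server_url="http://127.0.0.1:8000")
>>> client.toggle_mode(PymiloClient.Mode.DELEGATE)
>>> remote_result = client.predict(x_test)
>>> # pull the remote model into the local wrapper, save it, then predict locally
>>> client.download()
>>> Export(client.model).save("model.json")
>>> client.toggle_mode(mode=PymiloClient.Mode.LOCAL)
>>> local_result = client.predict(x_test)
```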

pymilo/streaming/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -2,4 +2,4 @@
 """PyMilo ML Streaming."""
 from .pymilo_client import PymiloClient
 from .pymilo_server import PymiloServer
-from .compressor import Compression
+from .compressor import Compression

pymilo/streaming/pymilo_server.py

Lines changed: 4 additions & 2 deletions
@@ -11,17 +11,19 @@
 class PymiloServer:
     """Facilitate streaming the ML models."""

-    def __init__(self, port=8000, compressor=Compression.NULL):
+    def __init__(self, model=None, port=8000, compressor=Compression.NULL):
         """
         Initialize the Pymilo PymiloServer instance.

+        :param model: the ML model which will be streamed
+        :type model: any
         :param port: the port to which PyMiloServer listens
         :type port: int
         :param compressor: the compression method to be used in client-server communications
         :type compressor: pymilo.streaming.compressor.Compression
         :return: an instance of the PymiloServer class
         """
-        self._model = None
+        self._model = model
         self._compressor = compressor.value
         self._encryptor = DummyEncryptor()
         self.communicator = RESTServerCommunicator(ps=self, port=port)
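
To illustrate what the new `model` parameter changes, here is a minimal sketch based only on the constructor above: the server can now start pre-seeded with an already trained model instead of always starting empty (the `trained_model` object is assumed to be provided by the caller, e.g. loaded via `Import`):

```pycon
>>> from pymilo.streaming import PymiloServer, Compression
>>> # before this commit: the server always started without a model (self._model = None)
>>> bare_server = PymiloServer(port=8000, compressor=Compression.NULL)
>>> # after this commit: the server can be seeded with an existing ML model
>>> seeded_server = PymiloServer(model=trained_model, port=9000, compressor=Compression.ZLIB)
>>> seeded_server.communicator.run()
```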

tests/test_ml_streaming/run_server.py

Lines changed: 24 additions & 5 deletions
@@ -1,14 +1,33 @@
 import argparse
-from pymilo.streaming import Compression
-from pymilo.streaming import PymiloServer
+from sklearn.linear_model import LinearRegression
+from pymilo.streaming import PymiloServer, Compression
+from pymilo.utils.data_exporter import prepare_simple_regression_datasets


 def main():
     parser = argparse.ArgumentParser(description='Run the Pymilo server with a specified compression method.')
-    parser.add_argument('--compression', type=str, choices=['NULL', 'GZIP', 'ZLIB', 'LZMA', 'BZ2'], default='NULL',
-                        help='Specify the compression method (NULL, GZIP, ZLIB, LZMA, or BZ2). Default is NULL.')
+    parser.add_argument(
+        '--compression',
+        type=str,
+        choices=['NULL', 'GZIP', 'ZLIB', 'LZMA', 'BZ2'],
+        default='NULL',
+        help='Specify the compression method (NULL, GZIP, ZLIB, LZMA, or BZ2). Default is NULL.'
+    )
+    parser.add_argument(
+        '--init',
+        action="store_true",
+        default=False,
+        help='the `init` command specifies whether or not initializing the PyMilo Server with a ML model.',
+    )
     args = parser.parse_args()
-    communicator = PymiloServer(compressor=Compression[args.compression]).communicator
+    communicator = None
+    if args.init:
+        x_train, y_train, _, _ = prepare_simple_regression_datasets()
+        linear_regression = LinearRegression()
+        linear_regression.fit(x_train, y_train)
+        communicator = PymiloServer(model=linear_regression, port=9000, compressor=Compression[args.compression]).communicator
+    else:
+        communicator = PymiloServer(port=8000, compressor=Compression[args.compression]).communicator
     communicator.run()

 if __name__ == '__main__':
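
Given the argparse definition above, the server script is now started as `python tests/test_ml_streaming/run_server.py --compression ZLIB --init` to serve a pre-trained `LinearRegression` on port `9000`, while omitting `--init` keeps the old behavior of an empty server on port `8000`. A minimal client-side sketch against an `--init` server, reusing only calls that appear elsewhere in this diff:

```pycon
>>> from pymilo.streaming import PymiloClient, Compression
>>> client = PymiloClient(
...     mode=PymiloClient.Mode.LOCAL,
...     compressor=Compression.ZLIB,
...     server_url="http://127.0.0.1:9000",
... )
>>> client.toggle_mode(PymiloClient.Mode.DELEGATE)
>>> client.download()  # pull the server-side LinearRegression into the local wrapper
```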

tests/test_ml_streaming/scenarios/scenario1.py

Lines changed: 2 additions & 2 deletions
@@ -1,12 +1,12 @@
 import numpy as np
-from pymilo.streaming import Compression
-from pymilo.streaming import PymiloClient
+from pymilo.streaming import PymiloClient, Compression
 from sklearn.metrics import mean_squared_error
 from sklearn.linear_model import LinearRegression
 from pymilo.utils.data_exporter import prepare_simple_regression_datasets


 def scenario1(compression_method):
+    # [PyMilo Server is not initialized with ML Model]
     # 1. create model in local
     # 2. train model in local
     # 3. calculate mse before streaming

tests/test_ml_streaming/scenarios/scenario2.py

Lines changed: 2 additions & 2 deletions
@@ -1,12 +1,12 @@
 import numpy as np
-from pymilo.streaming import Compression
-from pymilo.streaming import PymiloClient
+from pymilo.streaming import PymiloClient, Compression
 from sklearn.metrics import mean_squared_error
 from sklearn.linear_model import LinearRegression
 from pymilo.utils.data_exporter import prepare_simple_regression_datasets


 def scenario2(compression_method):
+    # [PyMilo Server is not initialized with ML Model]
     # 1. create model in local
     # 2. upload model to server
     # 3. train model in server
tests/test_ml_streaming/scenarios/scenario3.py

Lines changed: 34 additions & 0 deletions

@@ -0,0 +1,34 @@
+import numpy as np
+from sklearn.metrics import mean_squared_error
+from pymilo.streaming import PymiloClient, Compression
+from pymilo.utils.data_exporter import prepare_simple_regression_datasets
+
+
+def scenario3(compression_method):
+    # [PyMilo Server is initialized with ML Model]
+    # 1. calculate mse in server
+    # 2. download model in local
+    # 3. calculate mse in local
+    # 4. compare results
+
+    # 1.
+    _, _, x_test, y_test = prepare_simple_regression_datasets()
+    client = PymiloClient(
+        mode=PymiloClient.Mode.LOCAL,
+        compressor=Compression[compression_method],
+        server_url="http://127.0.0.1:9000",
+    )
+    client.toggle_mode(PymiloClient.Mode.DELEGATE)
+    result = client.predict(x_test)
+    mse_server = mean_squared_error(y_test, result)
+
+    # 2.
+    client.download()
+
+    # 3.
+    client.toggle_mode(mode=PymiloClient.Mode.LOCAL)
+    result = client.predict(x_test)
+    mse_local = mean_squared_error(y_test, result)
+
+    # 4.
+    return np.abs(mse_server-mse_local)

tests/test_ml_streaming/test_streaming.py

Lines changed: 36 additions & 5 deletions
@@ -5,9 +5,11 @@
 from sys import executable
 from scenarios.scenario1 import scenario1
 from scenarios.scenario2 import scenario2
+from scenarios.scenario3 import scenario3
+

 @pytest.fixture(scope="session", params=["NULL", "GZIP", "ZLIB", "LZMA", "BZ2"])
-def prepare_server(request):
+def prepare_bare_server(request):
     compression_method = request.param
     path = os.path.join(
         os.getcwd(),
@@ -26,10 +28,39 @@ def prepare_server(request):
     yield (server_proc, compression_method)
     server_proc.terminate()

-def test1(prepare_server):
-    _, compression_method = prepare_server
+
+@pytest.fixture(scope="session")
+def prepare_ml_server():
+    compression_method = "ZLIB"
+    path = os.path.join(
+        os.getcwd(),
+        "tests",
+        "test_ml_streaming",
+        "run_server.py",
+    )
+    server_proc = subprocess.Popen(
+        [
+            executable,
+            path,
+            "--compression", compression_method,
+            "--init",
+        ],
+    )
+    time.sleep(2)
+    yield (server_proc, compression_method)
+    server_proc.terminate()
+
+
+def test1(prepare_bare_server):
+    _, compression_method = prepare_bare_server
     assert scenario1(compression_method) == 0

-def test2(prepare_server):
-    _, compression_method = prepare_server
+
+def test2(prepare_bare_server):
+    _, compression_method = prepare_bare_server
     assert scenario2(compression_method) == 0
+
+
+def test3(prepare_ml_server):
+    _, compression_method = prepare_ml_server
+    assert scenario3(compression_method) == 0
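
With the new fixture and `test3` in place, the streaming suite (or just the new case) can be run from the repository root. A minimal sketch, assuming `pytest` and the packages from `streaming-requirements.txt` are installed:

```python
# Minimal sketch: drive the ML Streaming tests from the repository root.
# Assumes pytest and the streaming requirements are installed.
import subprocess
import sys

# run the whole streaming test module
subprocess.run([sys.executable, "-m", "pytest", "tests/test_ml_streaming/test_streaming.py", "-v"], check=True)

# run only the new scenario3-backed test
subprocess.run([sys.executable, "-m", "pytest", "tests/test_ml_streaming/test_streaming.py::test3", "-v"], check=True)
```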
