Commit adaf2fb

Draft notebook for workflow with vineyard example

Signed-off-by: trafalgarzzz <[email protected]>
1 parent 8205a30 commit adaf2fb

3 files changed: +389 -0 lines changed
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Efficient Intermediate Data Management in Data Processing Pipelines with VineyardRuntime"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview\n",
"Today's big-data/AI applications are typically built as end-to-end pipelines. Take the risk-control data workflow shown below as an example: first, order-related data is exported from a database; next, a graph computing engine processes the raw data to build a user-item relationship graph and uses graph algorithms to pre-screen hidden, potentially fraudulent groups; then, machine learning algorithms perform fraud attribution on these candidate groups to produce more accurate results; finally, the results go through manual review before business actions are taken.\n",
"\n",
"![Workflow](./static/workflow.png)"
]
},
20+
{
21+
"cell_type": "markdown",
22+
"metadata": {},
23+
"source": [
24+
"在这样的场景下,我们常常会遇到如下问题:\n",
25+
"1. 开发环境和生产环境的差异导致数据工作流的开发和调试变得复杂且低效:\n",
26+
"数据科学家在自己的计算机上开发数据操作的操作使用 Python 代码,但是又需要在生产环境中将代码转化为他们并不熟悉的 YAML 文件从而利用 Argo、Tekton 等基于 Kubernetes 的工作流引擎,这大大降低了开发和部署效率,也带来了开发和生产环境差异性大带来的风险。\n",
27+
"2. 需要引入新分布式存储实现中间临时数据交换,带来额外的开发、费用、运维成本:\n",
28+
"端到端任务的子任务之间的数据交换通常依赖分布式文件系统或对象存储系统(如 HDFS、S3、OSS),这使得整个工作流需要进行大量的数据格式转换和适配工作,导致冗余的 I/O 操作,并由于中间数据的短期性,使用分布式存储系统会导致额外的成本。\n",
29+
"\n",
30+
"3. 在大规模 Kubernetes 集群环境中的数据处理的效率问题:\n",
31+
"在大规模的 Kubernetes 集群中,使用现有的分布式文件系统处理数据时,由于调度系统对数据的读写本地性缺乏足够的理解,并未有效地考虑到数据的位置问题,没有充分利用数据的局部性,导致在处理节点间的数据交换时,无法避免大量的数据重复拉取操作。这种操作既增加了 I/O 消耗,也降低了整体的运行效率。"
32+
]
33+
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![workflow with vineyard](./static/workflow_with_vineyard.png)\n",
"To address the problems above in today's big-data/AI data workflows, we combine Vineyard's data sharing mechanism with Fluid's data orchestration capabilities.\n",
"1. Fluid's Python SDK makes it easy to orchestrate data flows, giving data scientists who are familiar with Python a simple way to build and submit dataset-centric workflows. In particular, a single codebase manages the data flow in both the development environment and the cloud production environment.\n",
"2. Vineyard makes data sharing between tasks in an end-to-end workflow more efficient: zero-copy data sharing via memory mapping avoids extra I/O overhead, which is the key to the efficiency gain.\n",
"3. Fluid's data-affinity scheduling takes the node where data was written into account in Pod scheduling decisions, reducing the network overhead introduced by data migration and improving end-to-end performance."
]
},
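{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal, illustrative sketch of the zero-copy sharing idea described above (it assumes a reachable vineyardd instance at the default IPC socket; the object name `demo_df` is made up, and the `vineyard.put`/`vineyard.get` calls mirror the ones used later in this notebook):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import vineyard\n",
"\n",
"# Producer side: put an object into vineyard's shared memory and persist it\n",
"df = pd.DataFrame({'a': [1, 2, 3]})\n",
"vineyard.put(df, name=\"demo_df\", persist=True)\n",
"\n",
"# Consumer side (potentially another process or Pod): resolve the object by name;\n",
"# the data is shared via memory mapping instead of being copied\n",
"shared_df = vineyard.get(name=\"demo_df\")\n",
"print(shared_df.shape)"
]
},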
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Code Example\n",
"\n",
"In the following example, we use Fluid's VineyardRuntime together with its DataFlow feature to show how to manage intermediate data efficiently in a data processing pipeline. DataFlow is Fluid's built-in data-flow orchestration capability; it chains the multiple data operations of a processing pipeline into a simple logical sequence. If you need more advanced workflow orchestration, VineyardRuntime can also be used with workflow engines such as Argo Workflows."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Prepare the Dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"# Generate a dataframe of roughly 22 GB\n",
"num_rows = 6000 * 10000\n",
"df = pd.DataFrame({\n",
" 'Id': np.random.randint(1, 100000, num_rows),\n",
" 'MSSubClass': np.random.randint(20, 201, size=num_rows),\n",
" 'LotFrontage': np.random.randint(50, 151, size=num_rows),\n",
" 'LotArea': np.random.randint(5000, 20001, size=num_rows),\n",
" 'OverallQual': np.random.randint(1, 11, size=num_rows),\n",
" 'OverallCond': np.random.randint(1, 11, size=num_rows),\n",
" 'YearBuilt': np.random.randint(1900, 2022, size=num_rows),\n",
" 'YearRemodAdd': np.random.randint(1900, 2022, size=num_rows),\n",
" 'MasVnrArea': np.random.randint(0, 1001, size=num_rows),\n",
" 'BsmtFinSF1': np.random.randint(0, 2001, size=num_rows),\n",
" 'BsmtFinSF2': np.random.randint(0, 1001, size=num_rows),\n",
" 'BsmtUnfSF': np.random.randint(0, 2001, size=num_rows),\n",
" 'TotalBsmtSF': np.random.randint(0, 3001, size=num_rows),\n",
" '1stFlrSF': np.random.randint(500, 4001, size=num_rows),\n",
" '2ndFlrSF': np.random.randint(0, 2001, size=num_rows),\n",
" 'LowQualFinSF': np.random.randint(0, 201, size=num_rows),\n",
" 'GrLivArea': np.random.randint(600, 5001, size=num_rows),\n",
" 'BsmtFullBath': np.random.randint(0, 4, size=num_rows),\n",
" 'BsmtHalfBath': np.random.randint(0, 3, size=num_rows),\n",
" 'FullBath': np.random.randint(0, 5, size=num_rows),\n",
" 'HalfBath': np.random.randint(0, 3, size=num_rows),\n",
" 'BedroomAbvGr': np.random.randint(0, 11, size=num_rows),\n",
" 'KitchenAbvGr': np.random.randint(0, 4, size=num_rows),\n",
" 'TotRmsAbvGrd': np.random.randint(0, 16, size=num_rows),\n",
" 'Fireplaces': np.random.randint(0, 4, size=num_rows),\n",
" 'GarageYrBlt': np.random.randint(1900, 2022, size=num_rows),\n",
" 'GarageCars': np.random.randint(0, 5, num_rows),\n",
" 'GarageArea': np.random.randint(0, 1001, num_rows),\n",
" 'WoodDeckSF': np.random.randint(0, 501, num_rows),\n",
" 'OpenPorchSF': np.random.randint(0, 301, num_rows),\n",
" 'EnclosedPorch': np.random.randint(0, 201, num_rows),\n",
" '3SsnPorch': np.random.randint(0, 101, num_rows),\n",
" 'ScreenPorch': np.random.randint(0, 201, num_rows),\n",
" 'PoolArea': np.random.randint(0, 301, num_rows),\n",
" 'MiscVal': np.random.randint(0, 5001, num_rows),\n",
" 'TotalRooms': np.random.randint(2, 11, num_rows),\n",
" \"GarageAge\": np.random.randint(1, 31, num_rows),\n",
" \"RemodAge\": np.random.randint(1, 31, num_rows),\n",
" \"HouseAge\": np.random.randint(1, 31, num_rows),\n",
" \"TotalBath\": np.random.randint(1, 5, num_rows),\n",
" \"TotalPorchSF\": np.random.randint(1, 1001, num_rows),\n",
" \"TotalSF\": np.random.randint(1000, 6001, num_rows),\n",
" \"TotalArea\": np.random.randint(1000, 6001, num_rows),\n",
" 'MoSold': np.random.randint(1, 13, num_rows),\n",
" 'YrSold': np.random.randint(2006, 2022, num_rows),\n",
" 'SalePrice': np.random.randint(50000, 800001, num_rows),\n",
"})\n",
"\n",
"import oss2\n",
"import io\n",
"from oss2.credentials import EnvironmentVariableCredentialsProvider\n",
"# Set your OSS accessKeyID and accessKeySecret as the environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET\n",
"auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())\n",
"# Replace 'OSS_ENDPOINT' and 'BUCKET_NAME' with your OSS endpoint and bucket name\n",
"bucket = oss2.Bucket(auth, 'OSS_ENDPOINT', 'BUCKET_NAME')\n",
"\n",
"bytes_buffer = io.BytesIO()\n",
"df.to_pickle(bytes_buffer)\n",
"bucket.put_object(\"df.pkl\", bytes_buffer.getvalue())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Create the Fluid Dataset and VineyardRuntime"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import fluid\n",
"\n",
"from fluid import constants\n",
"from fluid import models\n",
"\n",
"# Connect to the Fluid control plane using the default kubeconfig file and create a Fluid client instance\n",
"client_config = fluid.ClientConfig()\n",
"fluid_client = fluid.FluidClient(client_config)\n",
"\n",
"# Create a dataset named vineyard in the default namespace\n",
"fluid_client.create_dataset(\n",
" dataset_name=\"vineyard\",\n",
")\n",
"\n",
"# Get the vineyard dataset instance\n",
"dataset = fluid_client.get_dataset(dataset_name=\"vineyard\")\n",
"\n",
"# Initialize the vineyard runtime configuration and bind the vineyard dataset instance to the runtime:\n",
"# 2 replicas with 30Gi of memory each\n",
"dataset.bind_runtime(\n",
" runtime_type=constants.VINEYARD_RUNTIME_KIND,\n",
" replicas=2,\n",
" cache_capacity_GiB=30,\n",
" cache_medium=\"MEM\",\n",
" wait=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the code snippet above:\n",
"- Create the Fluid client: this code establishes a connection to the Fluid control plane using the default kubeconfig file and creates a Fluid client instance.\n",
"- Create and configure the vineyard dataset and runtime: the code then creates a dataset named vineyard, gets the dataset instance, initializes the vineyard runtime configuration with the replica count and memory size, and binds the dataset to the runtime."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Define the Fluid DataFlow"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from kubernetes.client import models as k8s_models\n",
"# Define the task template and mount the OSS volume\n",
"def create_processor(script):\n",
" return models.Processor(\n",
" # After enabling fuse affinity scheduling (the optional step described earlier), add the following labels for the best data processing performance\n",
" # pod_metadata=models.PodMetadata(\n",
" # labels={\"fuse.serverful.fluid.io/inject\": \"true\"},\n",
" # ),\n",
" script=models.ScriptProcessor(\n",
" command=[\"bash\"],\n",
" source=script,\n",
" image=\"python\",\n",
" image_tag=\"3.10\",\n",
" volumes=[k8s_models.V1Volume(\n",
" name=\"data\",\n",
" persistent_volume_claim=k8s_models.V1PersistentVolumeClaimVolumeSource(\n",
" claim_name=\"pvc-oss\"\n",
" )\n",
" )],\n",
" volume_mounts=[k8s_models.V1VolumeMount(\n",
" name=\"data\",\n",
" mount_path=\"/data\"\n",
" )],\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the code snippet above:\n",
"- **Create the task template:** the code defines a task template function named `create_processor`, which takes a bash script and passes it in as a container's start command. The container provides a Python 3.10 runtime and mounts the OSS data source under the `/data` directory."
]
},
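{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same template can wrap any bash snippet. For example (illustrative only, reusing the `create_processor` function defined above; `echo_processor` is a made-up name):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative: wrap a trivial one-line script as a processor with the same template\n",
"echo_processor = create_processor(\"echo hello-fluid\")"
]
},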
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define the data preprocessing script\n",
"preprocess_data_script = \"\"\"\n",
"#!/bin/bash\n",
"set -ex\n",
"\n",
"pip3 install numpy pandas pyarrow requests vineyard scikit-learn==1.4.0 joblib==1.3.2\n",
"\n",
"cat <<EOF > ./preprocess.py\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"import pandas as pd\n",
"import vineyard\n",
"\n",
"df = pd.read_pickle('/data/df.pkl')\n",
"\n",
"# Preprocess Data\n",
"df = df.drop(df[(df['GrLivArea']>4800)].index)\n",
"X = df.drop('SalePrice', axis=1) # Features\n",
"y = df['SalePrice'] # Target variable\n",
"\n",
"del df\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)\n",
"\n",
"del X, y\n",
"\n",
"vineyard.put(X_train, name=\"x_train\", persist=True)\n",
"vineyard.put(X_test, name=\"x_test\", persist=True)\n",
"vineyard.put(y_train, name=\"y_train\", persist=True)\n",
"vineyard.put(y_test, name=\"y_test\", persist=True)\n",
"\n",
"EOF\n",
"\n",
"python3 ./preprocess.py\n",
"\"\"\"\n",
"\n",
"# Define the model training script\n",
"train_data_script = \"\"\"\n",
"#!/bin/bash\n",
"set -ex\n",
"\n",
"pip3 install numpy pandas pyarrow requests vineyard scikit-learn==1.4.0 joblib==1.3.2\n",
"\n",
"cat <<EOF > ./train.py\n",
"from sklearn.linear_model import LinearRegression\n",
"\n",
"import joblib\n",
"import pandas as pd\n",
"import vineyard\n",
"\n",
"x_train_data = vineyard.get(name=\"x_train\", fetch=True)\n",
"y_train_data = vineyard.get(name=\"y_train\", fetch=True)\n",
"\n",
"model = LinearRegression()\n",
"model.fit(x_train_data, y_train_data)\n",
"\n",
"joblib.dump(model, '/data/model.pkl')\n",
"\n",
"EOF\n",
"python3 ./train.py\n",
"\"\"\"\n",
"\n",
"# Define the model testing script\n",
"test_data_script = \"\"\"\n",
"#!/bin/bash\n",
"set -ex\n",
"\n",
"pip3 install numpy pandas pyarrow requests vineyard scikit-learn==1.4.0 joblib==1.3.2\n",
"\n",
"cat <<EOF > ./test.py\n",
"from sklearn.metrics import mean_squared_error\n",
"\n",
"import vineyard\n",
"import joblib\n",
"import pandas as pd\n",
"\n",
"x_test_data = vineyard.get(name=\"x_test\", fetch=True)\n",
"y_test_data = vineyard.get(name=\"y_test\", fetch=True)\n",
"\n",
"model = joblib.load(\"/data/model.pkl\")\n",
"y_pred = model.predict(x_test_data)\n",
"\n",
"err = mean_squared_error(y_test_data, y_pred)\n",
"\n",
"with open('/data/output.txt', 'a') as f:\n",
" f.write(str(err))\n",
"\n",
"EOF\n",
"\n",
"python3 ./test.py\n",
"\"\"\"\n",
"\n",
"preprocess_processor = create_processor(preprocess_data_script)\n",
"train_processor = create_processor(train_data_script)\n",
"test_processor = create_processor(test_data_script)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code snippets above define the three steps of the data processing pipeline: data preprocessing, model training, and model testing. The bash scripts for these three steps are passed to the `create_processor` function to be wrapped as three processors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create the task workflow for the linear regression model: data preprocessing -> model training -> model testing\n",
"# The mount path \"/var/run\" below is the default path of the vineyard configuration file\n",
"flow = dataset.process(processor=preprocess_processor, dataset_mountpath=\"/var/run\") \\\n",
" .process(processor=train_processor, dataset_mountpath=\"/var/run\") \\\n",
" .process(processor=test_processor, dataset_mountpath=\"/var/run\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Submit the data processing workflow for the linear regression model and wait for it to complete\n",
"run = flow.run(run_id=\"linear-regression-with-vineyard\")\n",
"run.wait()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. Clean Up Resources"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Clean up all resources\n",
"dataset.clean_up(wait=True)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
