We assume that you have already configured your Spark cluster with GPU support. If not, please
refer to `spark standalone configuration with GPU support <https://nvidia.github.io/spark-rapids/docs/get-started/getting-started-on-prem.html#spark-standalone-cluster>`_.

.. code-block:: bash

  spark-submit \
    --master spark://<master-ip>:7077 \
    --conf spark.executor.resource.gpu.amount=1 \
    --conf spark.task.resource.gpu.amount=1 \
    --archives xgboost_env.tar.gz#environment \
    xgboost_app.py

The submit command sends the Python environment created by pip or conda along with the
specification of GPU allocation. We will revisit this command later on.
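
One possible way to produce the ``xgboost_env.tar.gz`` archive referenced by
``--archives`` is to bundle a conda environment with the ``conda-pack`` tool; the sketch
below is only an illustration of this approach (similar tools exist for pip virtual
environments):

.. code-block:: python

  # Sketch: package the currently active conda environment so it can be shipped
  # to the executors via --archives. Requires the third-party conda-pack package.
  import conda_pack

  conda_pack.pack(output="xgboost_env.tar.gz")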

Model Persistence
=================

To export the underlying booster model used by XGBoost:

.. code-block:: python

  # the same booster object returned by xgboost.train
  booster: xgb.Booster = model.get_booster()
  booster.predict(...)
  booster.save_model("model.json")  # or model.ubj, depending on your choice of format
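
For instance, the saved ``model.json`` file can be loaded back into a standalone booster
outside of Spark (a minimal sketch reusing the file name from the snippet above):

.. code-block:: python

  import xgboost as xgb

  # Load the booster exported by save_model() above.
  bst = xgb.Booster()
  bst.load_model("model.json")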

This booster is not only shared by other Python interfaces but also used by all the
XGBoost bindings, including the C, Java, and R packages. Lastly, one can extract the
booster file directly from a saved Spark estimator without going through the getter:
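
A minimal sketch of what that can look like, assuming the estimator was saved to
``/tmp/xgboost-pyspark-model`` and that the booster is stored as a single part file
under the ``model`` subdirectory of that path (the exact on-disk layout may differ
across versions):

.. code-block:: python

  import xgboost as xgb

  # Assumed location of the booster file inside the saved estimator directory;
  # adjust the path to match what your version of XGBoost actually writes.
  path = "/tmp/xgboost-pyspark-model/model/part-00000"

  bst = xgb.Booster()
  bst.load_model(path)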