**README.md** (+52 −16)
@@ -114,6 +114,8 @@ Additionally, you can take a closer look at the examples in our **[🖥️ Live

# ⚡ Quick start

### RD-Agent currently only supports Linux.

You can try the above demos by running the following command:

### 🐳 Docker installation.
@@ -153,7 +155,7 @@ More details can be found in the [development setup](https://rdagent.readthedocs

- whether the docker installation was successful.
- whether the default port used by the [rdagent ui](https://github.com/microsoft/RD-Agent?tab=readme-ov-file#%EF%B8%8F-monitor-the-application-results) is occupied.

```sh
rdagent health_check --no-check-env
```
@@ -220,7 +222,15 @@ More details can be found in the [development setup](https://rdagent.readthedocs

```
REASONING_THINK_RM=True
```

You can also use a deprecated backend if you only use `OpenAI API` or `Azure OpenAI` directly. For this deprecated setting and more configuration information, please refer to the [documentation](https://rdagent.readthedocs.io/en/latest/installation_and_configuration.html).

- If your environment configuration is complete, please execute the following command to check whether your configuration is valid. This step is necessary.

  ```bash
  rdagent health_check
  ```

### 🚀 Run the Application
@@ -261,44 +271,70 @@ The **[🖥️ Live Demo](https://rdagent.azurewebsites.net/)** is implemented b

```bash
# Generally, you can run the Kaggle competition program with the following command:
# Specifically, you need to create a folder for storing competition files (e.g., the competition description file, competition datasets, etc.), and configure the path to the folder in your environment. In addition, chromedriver is needed when you download the competition description; you can follow this specific example:

# 1. Download the dataset and extract it to the target folder.
```

**NOTE:** For more information about the dataset, please refer to the [documentation](https://rdagent.readthedocs.io/en/latest/scens/data_science.html).
- Run the **Automated Kaggle Model Tuning & Feature Engineering**: self-loop model proposal and feature engineering implementation application <br />
  > Using **tabular-playground-series-dec-2021** as an example. <br />
  > 1. Register and log in on the [Kaggle](https://www.kaggle.com/) website. <br />
  > 2. Configure the Kaggle API. <br />
  > (1) Click on the avatar (usually in the top right corner of the page) -> `Settings` -> `Create New Token`; a file called `kaggle.json` will be downloaded. <br />
  > (2) Move `kaggle.json` to `~/.config/kaggle/`. <br />
  > (3) Modify the permissions of the `kaggle.json` file. Reference command: `chmod 600 ~/.config/kaggle/kaggle.json`. <br />
  > 3. Join the competition: Click `Join the competition` -> `I Understand and Accept` at the bottom of the [competition details page](https://www.kaggle.com/competitions/tabular-playground-series-dec-2021/data).
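Steps (2) and (3) of the Kaggle API setup can be sketched in shell. This is illustrative only: the `touch` line is a stand-in for moving the `kaggle.json` token you actually downloaded, and `KAGGLE_DIR` is a convenience variable introduced here.

```shell
# Place kaggle.json under ~/.config/kaggle/ and restrict its permissions.
KAGGLE_DIR="${KAGGLE_DIR:-$HOME/.config/kaggle}"
mkdir -p "$KAGGLE_DIR"
# mv ~/Downloads/kaggle.json "$KAGGLE_DIR/"   # normally: move the downloaded token
touch "$KAGGLE_DIR/kaggle.json"               # stand-in so this sketch is runnable
chmod 600 "$KAGGLE_DIR/kaggle.json"           # owner-only access, as the Kaggle CLI requires
```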
```bash
# Generally, you can run the Kaggle competition program with the following command:
# Specifically, you need to create a folder for storing competition files (e.g., the competition description file, competition datasets, etc.), and configure the path to the folder in your environment. In addition, chromedriver is needed when you download the competition description; you can follow this specific example:

# 1. Configure environment variables in the `.env` file
mkdir -p ./git_ignore_folder/ds_data
dotenv set DS_LOCAL_DATA_PATH "$(pwd)/git_ignore_folder/ds_data"
dotenv set DS_CODER_ON_WHOLE_PIPELINE True
dotenv set DS_IF_USING_MLE_DATA True
dotenv set DS_SAMPLE_DATA_BY_LLM True
dotenv set DS_SCEN rdagent.scenarios.data_science.scen.KaggleScen
```
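For reference, the `dotenv set` commands above (provided by the `python-dotenv[cli]` package) append key-value pairs to `.env`. The result looks roughly like the sketch below; the path is a placeholder for the absolute path on your machine, and the exact quoting may differ slightly between dotenv versions.

```
DS_LOCAL_DATA_PATH='/absolute/path/to/git_ignore_folder/ds_data'
DS_CODER_ON_WHOLE_PIPELINE='True'
DS_IF_USING_MLE_DATA='True'
DS_SAMPLE_DATA_BY_LLM='True'
DS_SCEN='rdagent.scenarios.data_science.scen.KaggleScen'
```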
- You can run the following command for our demo program to see the run logs.

  ```sh
  rdagent ui --port 19899 --log_dir <your log folder like "log/"> --data_science <True or False>
  ```

  - About the `data_science` parameter: if you want to see the logs of the data science scenario, set it to `True`; otherwise set it to `False`.
  - Although port 19899 is not commonly used, you need to check whether it is occupied before running this demo. If it is occupied, please change to another port that is free.

  You can check if a port is occupied by running the following command.
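The port-check command itself falls outside this excerpt. One common approach on Linux, assuming bash (this uses bash's built-in `/dev/tcp`, not an external tool): a successful connection means something is listening on the port, while a refused connection means the port is free.

```shell
# Check whether TCP port 19899 is already in use on localhost (bash only).
port=19899
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  echo "port $port is occupied"
else
  echo "port $port is free"
fi
```

Tools like `lsof -i :19899` or `ss -ltn` give the same answer with more detail, if they are installed.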
**docs/installation_and_configuration.rst** (+40 −0)
@@ -107,6 +107,46 @@ Besides, when you are using reasoning models, the response might include the tho

For more details on LiteLLM requirements, refer to the `official LiteLLM documentation <https://docs.litellm.ai/docs>`_.

Configuration Example 2: Azure OpenAI Setup
-------------------------------------------

Here's a sample configuration specifically for Azure OpenAI, based on the `official LiteLLM documentation <https://docs.litellm.ai/docs>`_.

If you're using Azure OpenAI, below is a working example using the Python SDK, following the `LiteLLM Azure OpenAI documentation <https://docs.litellm.ai/docs/providers/azure/>`_:

.. code-block:: python

    import os

    from litellm import completion

    # Set your Azure OpenAI credentials (placeholders shown here).
    os.environ["AZURE_API_KEY"] = "<your_azure_api_key>"
    os.environ["AZURE_API_BASE"] = "<your_azure_endpoint>"
    os.environ["AZURE_API_VERSION"] = "<your_api_version>"

    messages = [{ "content": "Hello, how are you?", "role": "user" }]
    response = completion(
        model="azure/<your_deployment_name>",
        messages=messages,
    )

To align with the Python SDK example above, you can configure the ``CHAT_MODEL`` based on the ``response`` model setting and use the corresponding ``os.environ`` variables by writing them into your local ``.env`` file as follows:
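The ``.env`` content itself falls outside this excerpt; a sketch of what it might contain, assuming the LiteLLM backend reads the standard LiteLLM Azure environment variables, with every value a placeholder to replace with your own deployment details:

.. code-block:: bash

    # Illustrative values only — substitute your own deployment details.
    CHAT_MODEL=azure/<your_deployment_name>
    AZURE_API_KEY=<your_azure_api_key>
    AZURE_API_BASE=<your_azure_endpoint>
    AZURE_API_VERSION=<your_api_version>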